Subject: Re: [PATCH v4 2/2] mm/hmm/test: add self tests for HMM
To: Jason Gunthorpe
CC: Christoph Hellwig, Andrew Morton, Jerome Glisse, John Hubbard,
 Shuah Khan, linux-rdma@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
References: <20191104222141.5173-1-rcampbell@nvidia.com>
 <20191104222141.5173-3-rcampbell@nvidia.com>
 <20191112152521.GC12550@lst.de>
 <07589a71-3984-b2a6-b24b-6b9a23e1b60d@nvidia.com>
 <20191112234549.GX21728@mellanox.com>
 <20191113135115.GA10688@lst.de>
 <21d6b69c-3167-e60d-eed2-65bb1f8515ae@nvidia.com>
 <20191115140619.GC3873@mellanox.com>
From: Ralph Campbell
Message-ID: <912f9f23-fa2a-1dd7-3f91-f7175094c2e2@nvidia.com>
Date: Mon, 18 Nov 2019 10:32:18 -0800
In-Reply-To: <20191115140619.GC3873@mellanox.com>

On 11/15/19 6:06 AM, Jason Gunthorpe wrote:
> On Thu, Nov 14, 2019 at 03:06:05PM -0800, Ralph Campbell wrote:
>>
>> On 11/13/19 5:51 AM, Christoph Hellwig wrote:
>>> On Tue, Nov 12, 2019 at 11:45:52PM +0000, Jason Gunthorpe wrote:
>>>>> Well, it would mean registering for the whole process address space.
>>>>> I'll give it a try.
>>>>
>>>> I'm not sure it makes much sense that this testing is essentially
>>>> modeled after nouveau's usage which is very strange compared to the
>>>> other drivers.
>>>
>>> Which means we really should make the test cases fit the proper usage.
>>> Maybe defer the tests for 5.5 and just merge the first patch for now?
>>>
>>
>> I think this is a good point to discuss.
>> Some devices will want to register for all changes to the process address
>> space because there is no requirement to preregister regions that the
>> device can access, versus devices like InfiniBand where a range of addresses
>> has to be registered before the device can access those addresses.
>
> But this is a very bad idea to register and do HW actions for ranges
> that can't possibly have any pages registered. It slows down the
> entire application.
>
> I think the ODP approach might be saner: when it mirrors the entire
> address space it chops it up into VA chunks, and once a page is
> registered on the HW the VA chunk goes into the interval tree.
>
> Presumably the GPU also has some kind of page table tree and you could
> set one of the levels as the VA interval when there are populated children.
>
> Jason

I wasn't suggesting that HW invalidates happen in two places; I'm
suggesting the two styles of invalidates can work together.

For example, what if a driver calls mmu_notifier_register(mn, mm) to
register for address-space-wide invalidations, then some time later
there is a device page table fault and the driver calls
mmu_range_notifier_insert(), but with a NULL ops.invalidate.
The fault handler follows the nouveau/test_hmm pattern to call:

  mmu_range_read_begin()
  hmm_range_fault()
  device lock
  mmu_range_read_retry()
  update device page tables
  device unlock
  mmu_range_notifier_remove()

The global invalidate() callback would get the device lock and call into
mm to update the sequence number of any affected ranges instead of having
a per-range invalidate callback, and then do the HW invalidations.
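
To make that concrete, here is a rough, untested sketch of what I have
in mind. It uses the function names from this thread; the exact
signatures, the dmirror_* names, and the hmm_range setup are just for
illustration and may not match the current hmm tree.

/* Illustrative sketch only; signatures approximated from this thread. */
#include <linux/mmu_notifier.h>
#include <linux/hmm.h>
#include <linux/mutex.h>

struct dmirror {			/* hypothetical per-device mirror */
	struct mmu_notifier	mn;	/* registered once for the whole mm */
	struct mutex		lock;	/* the "device lock" above */
	/* ... device page table state ... */
};

/* Fault handler: per-fault range with no ops.invalidate of its own. */
static int dmirror_fault(struct dmirror *dm, struct mm_struct *mm,
			 unsigned long addr)
{
	struct mmu_range_notifier mrn = {};	/* ops left NULL */
	struct hmm_range range = {
		.notifier = &mrn,
		.start = addr & PAGE_MASK,
		.end = (addr & PAGE_MASK) + PAGE_SIZE,
		/* pfns/flags/values setup omitted for brevity */
	};
	int ret;

	ret = mmu_range_notifier_insert(&mrn, range.start,
					range.end - range.start, mm);
	if (ret)
		return ret;

	for (;;) {
		range.notifier_seq = mmu_range_read_begin(&mrn);

		down_read(&mm->mmap_sem);
		ret = hmm_range_fault(&range, 0);
		up_read(&mm->mmap_sem);
		if (ret < 0)
			break;

		mutex_lock(&dm->lock);			/* device lock */
		if (mmu_range_read_retry(&mrn, range.notifier_seq)) {
			mutex_unlock(&dm->lock);	/* raced, try again */
			continue;
		}
		/* ... update device page tables from range.pfns ... */
		mutex_unlock(&dm->lock);		/* device unlock */
		ret = 0;
		break;
	}

	mmu_range_notifier_remove(&mrn);
	return ret;
}

/*
 * Address-space-wide callback (e.g. wired up as
 * mmu_notifier_ops.invalidate_range_start): take the device lock, bump
 * the sequence number of any ranges overlapping the invalidated span so
 * in-flight faults retry, then do one HW invalidation for the whole span.
 */
static int dmirror_invalidate(struct mmu_notifier *mn,
			      const struct mmu_notifier_range *mnr)
{
	struct dmirror *dm = container_of(mn, struct dmirror, mn);

	mutex_lock(&dm->lock);
	/* ... mark overlapping ranges' sequence counts as invalid ... */
	/* ... one HW invalidate covering [mnr->start, mnr->end) ... */
	mutex_unlock(&dm->lock);
	return 0;
}

The fault side stays the same as what nouveau/test_hmm do today; only
the sequence-number bookkeeping and the HW invalidate move into the
global callback.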