From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 15 Oct 2024 21:49:30 -0700
From: Christoph Hellwig
To: Yonatan Maman
Cc: nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-mm@kvack.org, herbst@redhat.com,
	lyude@redhat.com, dakr@redhat.com, airlied@gmail.com, simona@ffwll.ch,
	jgg@ziepe.ca, leon@kernel.org, jglisse@redhat.com,
	akpm@linux-foundation.org, dri-devel@lists.freedesktop.org,
	apopple@nvidia.com, bskeggs@nvidia.com, Gal Shalom
Subject: Re: [PATCH v1 1/4] mm/hmm: HMM API for P2P DMA to device zone pages
References: <20241015152348.3055360-1-ymaman@nvidia.com>
	<20241015152348.3055360-2-ymaman@nvidia.com>
In-Reply-To: <20241015152348.3055360-2-ymaman@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
The subject does not make sense.  All P2P is on ZONE_DEVICE pages.  It
seems like this is about device private memory?

On Tue, Oct 15, 2024 at 06:23:45PM +0300, Yonatan Maman wrote:
> From: Yonatan Maman
>
> hmm_range_fault() natively triggers a page fault on device private
> pages, migrating them to RAM.

That "natively" above doesn't make sense to me.

> In some cases, such as with RDMA devices, the migration overhead
> between the device (e.g., GPU) and the CPU, and vice-versa,
> significantly damages performance.

s/damages/degrades/

> Thus, enabling Peer-to-Peer (P2P) DMA access for device private page
> might be crucial for minimizing data transfer overhead.
>
> This change introduces an API to support P2P connections for device
> private pages by implementing the following:

"This change.. " or "This patch.." is pointless, just explain what you
are doing.

>  - Leveraging the struct pagemap_ops for P2P Page Callbacks. This
>    callback involves mapping the page to MMIO and returning the
>    corresponding PCI_P2P page.

While P2P uses the same underlying PCIe TLPs as MMIO, it is not MMIO by
definition, as memory mapped I/O is by definition about the CPU memory
mapping so that load and store instructions cause the I/O.  It also
uses very different concepts in Linux.
>  - Utilizing hmm_range_fault for Initializing P2P Connections. The API

There is no concept of a "connection" in PCIe data transfers.

>    also adds the HMM_PFN_REQ_TRY_P2P flag option for the
>    hmm_range_fault caller to initialize P2P. If set, hmm_range_fault
>    attempts initializing the P2P connection first, if the owner device
>    supports P2P, using p2p_page. In case of failure or lack of support,
>    hmm_range_fault will continue with the regular flow of migrating the
>    page to RAM.

What is the need for the flag?  As far as I can tell from reading the
series, the P2P mapping is entirely transparent to the callers of
hmm_range_fault.

> +	/*
> +	 * Used for private (un-addressable) device memory only. Return a
> +	 * corresponding struct page, that can be mapped to device
> +	 * (e.g using dma_map_page)
> +	 */
> +	struct page *(*get_dma_page_for_device)(struct page *private_page);

We are talking about P2P memory here.  How do you manage to get a page
that dma_map_page can be used on?  All P2P memory needs to use the
P2P aware dma_map_sg as the pages for P2P memory are just fake zone
device pages.

> +	 * P2P for supported pages, and according to caller request
> +	 * translate the private page to the match P2P page if it fails
> +	 * continue with the regular flow
> +	 */
> +	if (is_device_private_entry(entry)) {
> +		get_dma_page_handler =
> +			pfn_swap_entry_to_page(entry)
> +				->pgmap->ops->get_dma_page_for_device;
> +		if ((hmm_vma_walk->range->default_flags &
> +		     HMM_PFN_REQ_ALLOW_P2P) &&
> +		    get_dma_page_handler) {
> +			dma_page = get_dma_page_handler(
> +				pfn_swap_entry_to_page(entry));

This is really messy.  You probably really want to share a branch with
the private page handling for the owner so that you only need a single
is_device_private_entry and can use a local variable to shortcut
finding the page.
Probably best done with a little helper.  Then this becomes:

static bool hmm_handle_device_private(struct hmm_range *range,
		swp_entry_t entry, unsigned long *hmm_pfn)
{
	struct page *page = pfn_swap_entry_to_page(entry);
	struct dev_pagemap *pgmap = page->pgmap;

	if (pgmap->owner == range->dev_private_owner) {
		*hmm_pfn = swp_offset_pfn(entry);
		goto found;
	}

	if (pgmap->ops->get_dma_page_for_device) {
		*hmm_pfn = page_to_pfn(
			pgmap->ops->get_dma_page_for_device(page));
		goto found;
	}

	return false;

found:
	*hmm_pfn |= HMM_PFN_VALID;
	if (is_writable_device_private_entry(entry))
		*hmm_pfn |= HMM_PFN_WRITE;
	return true;
}

which also makes it clear that returning a page from the method is not
that great, a PFN might work a lot better, e.g.:

	unsigned long (*device_private_dma_pfn)(struct page *page);