From: Balbir Singh <balbirs@nvidia.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
"Balbir Singh" <balbirs@nvidia.com>,
"Karol Herbst" <kherbst@redhat.com>,
"Lyude Paul" <lyude@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Shuah Khan" <shuah@kernel.org>,
"David Hildenbrand" <david@redhat.com>,
"Barry Song" <baohua@kernel.org>,
"Baolin Wang" <baolin.wang@linux.alibaba.com>,
"Ryan Roberts" <ryan.roberts@arm.com>,
"Matthew Wilcox" <willy@infradead.org>,
"Peter Xu" <peterx@redhat.com>, "Zi Yan" <ziy@nvidia.com>,
"Kefeng Wang" <wangkefeng.wang@huawei.com>,
"Jane Chu" <jane.chu@oracle.com>,
"Alistair Popple" <apopple@nvidia.com>,
"Donet Tom" <donettom@linux.ibm.com>
Subject: [RFC 05/11] mm/memory/fault: Add support for zone device THP fault handling
Date: Thu, 6 Mar 2025 15:42:33 +1100 [thread overview]
Message-ID: <20250306044239.3874247-6-balbirs@nvidia.com> (raw)
In-Reply-To: <20250306044239.3874247-1-balbirs@nvidia.com>

When the CPU touches a zone device THP entry, the data needs to be
migrated back to system memory. Handle these faults by calling
migrate_to_ram() on the backing pages via the new
do_huge_pmd_device_private() fault-handling helper.

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
---
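Illustration only, not part of the patch: do_huge_pmd_device_private()
pins the faulting device-private page and forwards the fault to the
pgmap owner's migrate_to_ram() callback. A driver-side callback could
look roughly like the sketch below, built on the migrate_vma_*() API.
All my_* identifiers are hypothetical, the device-to-RAM copy step is
elided, and THP-specific details are simplified; see lib/test_hmm for
a real implementation.

#include <linux/huge_mm.h>
#include <linux/memremap.h>
#include <linux/migrate.h>
#include <linux/slab.h>

static vm_fault_t my_devmem_migrate_to_ram(struct vm_fault *vmf)
{
	unsigned long start = vmf->address & HPAGE_PMD_MASK;
	unsigned long *src, *dst;
	struct migrate_vma args = {
		.vma		= vmf->vma,
		.start		= start,
		.end		= start + HPAGE_PMD_SIZE,
		.fault_page	= vmf->page,
		.pgmap_owner	= my_drvdata,	/* hypothetical owner cookie */
		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
	};
	vm_fault_t ret = 0;

	/* One pfn slot per base page of the THP. */
	src = kcalloc(HPAGE_PMD_NR, sizeof(*src), GFP_KERNEL);
	dst = kcalloc(HPAGE_PMD_NR, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst) {
		ret = VM_FAULT_OOM;
		goto out;
	}
	args.src = src;
	args.dst = dst;

	if (migrate_vma_setup(&args)) {
		ret = VM_FAULT_SIGBUS;
		goto out;
	}

	/*
	 * Driver-specific (elided): allocate system RAM pages into
	 * args.dst[] and copy the device data back into them.
	 */
	my_copy_device_to_ram(&args);	/* hypothetical */

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
out:
	kfree(src);
	kfree(dst);
	return ret;
}

static const struct dev_pagemap_ops my_devmem_ops = {
	.migrate_to_ram	= my_devmem_migrate_to_ram,
};
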
 include/linux/huge_mm.h |  7 +++++++
 mm/huge_memory.c        | 35 +++++++++++++++++++++++++++++++++++
 mm/memory.c             |  6 ++++--
 3 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e893d546a49f..ad0c0ccfcbc2 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -479,6 +479,8 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
+vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf);
+
extern struct folio *huge_zero_folio;
extern unsigned long huge_zero_pfn;
@@ -634,6 +636,11 @@ static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
return 0;
}
+static inline vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
+{
+ return 0;
+}
+
static inline bool is_huge_zero_folio(const struct folio *folio)
{
return false;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d8e018d1bdbd..995ac8be5709 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1375,6 +1375,41 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
return __do_huge_pmd_anonymous_page(vmf);
}
+vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ vm_fault_t ret;
+ spinlock_t *ptl;
+ swp_entry_t swp_entry;
+ struct page *page;
+
+ if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
+ return VM_FAULT_FALLBACK;
+
+ if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+ vma_end_read(vma);
+ return VM_FAULT_RETRY;
+ }
+
+ ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+ if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd))) {
+ spin_unlock(ptl);
+ return 0;
+ }
+
+ swp_entry = pmd_to_swp_entry(vmf->orig_pmd);
+ page = pfn_swap_entry_to_page(swp_entry);
+ vmf->page = page;
+ vmf->pte = NULL;
+ get_page(page);
+ spin_unlock(ptl);
+ ret = page_pgmap(page)->ops->migrate_to_ram(vmf);
+ put_page(page);
+
+ return ret;
+}
+
static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
pgtable_t pgtable)
diff --git a/mm/memory.c b/mm/memory.c
index a838c8c44bfd..deaa67b88708 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6149,8 +6149,10 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
vmf.orig_pmd = pmdp_get_lockless(vmf.pmd);
if (unlikely(is_swap_pmd(vmf.orig_pmd))) {
- VM_BUG_ON(thp_migration_supported() &&
- !is_pmd_migration_entry(vmf.orig_pmd));
+ if (is_device_private_entry(
+ pmd_to_swp_entry(vmf.orig_pmd)))
+ return do_huge_pmd_device_private(&vmf);
+
if (is_pmd_migration_entry(vmf.orig_pmd))
pmd_migration_entry_wait(mm, vmf.pmd);
return 0;
--
2.48.1