* [v6 00/15] mm: support device-private THP
@ 2025-09-16 12:21 Balbir Singh
  2025-09-16 12:21 ` [v6 01/15] mm/zone_device: support large zone device private folios Balbir Singh
                   ` (14 more replies)
  0 siblings, 15 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, Andrew Morton, David Hildenbrand,
	Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

This patch series introduces support for Transparent Huge Page
(THP) migration in zone device-private memory. The implementation enables
efficient migration of large folios between system memory and
device-private memory.

Background:

The current zone device-private memory implementation only supports
PAGE_SIZE granularity, leading to:
- Increased TLB pressure
- Inefficient migration between CPU and device memory

This series extends the existing zone device-private infrastructure to
support THP, leading to:
- Reduced page table overhead
- Improved memory bandwidth utilization
- Seamless fallback to base pages when needed

In my local testing (using lib/test_hmm) and a throughput test, the
series shows a 350% improvement in data transfer throughput and an
80% improvement in latency.

These patches build on the earlier posts by Ralph Campbell [1].

Two new flags, MIGRATE_VMA_SELECT_COMPOUND and MIGRATE_PFN_COMPOUND,
are added to the migrate_vma infrastructure to select and mark
compound pages. migrate_vma_setup(), migrate_vma_pages() and
migrate_vma_finalize() support migration of these pages when
MIGRATE_VMA_SELECT_COMPOUND is passed in as an argument.
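
As a rough illustration of the intended driver-side usage (a sketch
only, not taken from the patches; the drv_pgmap_owner, src_pfns and
dst_pfns names are placeholders), a driver requesting a PMD-sized
migration would do something like:

    struct migrate_vma args = {
        .vma         = vma,
        .start       = start,        /* HPAGE_PMD_SIZE aligned */
        .end         = start + HPAGE_PMD_SIZE,
        .src         = src_pfns,
        .dst         = dst_pfns,
        .pgmap_owner = drv_pgmap_owner,
        .flags       = MIGRATE_VMA_SELECT_SYSTEM |
                       MIGRATE_VMA_SELECT_COMPOUND,
    };

    if (migrate_vma_setup(&args))
        return -EFAULT;
    /* allocate destination page(s), copy the data, fill args.dst[] */
    migrate_vma_pages(&args);
    migrate_vma_finalize(&args);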

The series also adds zone device awareness to (m)THP pages, along
with fault handling of large zone device private pages. The page vma
walk and rmap code are also zone device aware. Support has also been
added for folios that might need to be split in the middle of
migration (when the src and dst do not agree on MIGRATE_PFN_COMPOUND);
this occurs when the src side of the migration can migrate large
pages, but the destination has not been able to allocate large pages.
The code also supports folio_split() when migrating THP pages; this
path is used when MIGRATE_VMA_SELECT_COMPOUND is not passed as an
argument to migrate_vma_setup().
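
For example (an illustrative sketch with hypothetical drv_* allocation
helpers), the destination side simply leaves MIGRATE_PFN_COMPOUND out
of dst[] when a large allocation fails, and the source folio is then
split during migration:

    dpage = drv_alloc_huge_device_page();    /* hypothetical */
    if (dpage) {
        args.dst[i] = migrate_pfn(page_to_pfn(dpage)) |
                      MIGRATE_PFN_COMPOUND;
    } else {
        /* fall back to base pages; the src folio gets split */
        for (j = 0; j < HPAGE_PMD_NR; j++)
            args.dst[i + j] =
                migrate_pfn(page_to_pfn(drv_alloc_device_page()));
    }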

The test infrastructure lib/test_hmm.c has been enhanced to support THP
migration. A new ioctl to emulate failure of large page allocations has
been added to test the folio split code path. hmm-tests.c has new test
cases for huge page migration and to test the folio split path. A new
throughput test has been added as well.

The nouveau dmem code has been enhanced to use the new THP migration
capability. 

mTHP support:

The patches hard-code HPAGE_PMD_NR in a few places, but the code has
been kept generic to support various order sizes. With additional
refactoring of the code, support for different order sizes should be
possible.

The future plan is to post enhancements to support mTHP with a rough
design as follows (see the sketch after the list):

1. Add the notion of allowable thp orders to the HMM based test driver
2. For non PMD based THP paths in migrate_device.c, check to see if
   a suitable order is found and supported by the driver
3. Iterate across orders to check the highest supported order for migration
4. Migrate and finalize
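
A very rough sketch of steps 2-4 (purely illustrative and not part of
this series; drv_supported_orders is a hypothetical per-driver
capability mask, iterated with the existing highest_order()/
next_order() helpers):

    unsigned long orders = drv_supported_orders;    /* hypothetical */
    int order;

    for (order = highest_order(orders); orders;
         order = next_order(&orders, order)) {
        if (IS_ALIGNED(start, PAGE_SIZE << order) &&
            IS_ALIGNED(end, PAGE_SIZE << order))
            break;        /* migrate at this order */
    }
    if (!orders)
        order = 0;        /* fall back to base pages */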

The mTHP patches can be built on top of this series; the key design
elements that need to be worked out are infrastructure and driver
support for multiple page orders and their migration.

HMM support for large folios:
Currently in mm-unstable [4]

Cc: Andrew Morton <akpm@linux-foundation.org> 
Cc: David Hildenbrand <david@redhat.com> 
Cc: Zi Yan <ziy@nvidia.com>  
Cc: Joshua Hahn <joshua.hahnjy@gmail.com> 
Cc: Rakie Kim <rakie.kim@sk.com> 
Cc: Byungchul Park <byungchul@sk.com> 
Cc: Gregory Price <gourry@gourry.net> 
Cc: Ying Huang <ying.huang@linux.alibaba.com> 
Cc: Alistair Popple <apopple@nvidia.com> 
Cc: Oscar Salvador <osalvador@suse.de> 
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 
Cc: Baolin Wang <baolin.wang@linux.alibaba.com> 
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com> 
Cc: Nico Pache <npache@redhat.com> 
Cc: Ryan Roberts <ryan.roberts@arm.com> 
Cc: Dev Jain <dev.jain@arm.com> 
Cc: Barry Song <baohua@kernel.org> 
Cc: Lyude Paul <lyude@redhat.com> 
Cc: Danilo Krummrich <dakr@kernel.org> 
Cc: David Airlie <airlied@gmail.com> 
Cc: Simona Vetter <simona@ffwll.ch> 
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>

References:
[1] https://lore.kernel.org/linux-mm/20201106005147.20113-1-rcampbell@nvidia.com/
[2] https://lore.kernel.org/linux-mm/20250306044239.3874247-3-balbirs@nvidia.com/T/
[3] https://lore.kernel.org/lkml/20250703233511.2028395-1-balbirs@nvidia.com/
[4] https://lkml.kernel.org/r/20250902130713.1644661-1-francois.dugast@intel.com
[5] https://lore.kernel.org/lkml/20250730092139.3890844-1-balbirs@nvidia.com/
[6] https://lore.kernel.org/lkml/20250812024036.690064-1-balbirs@nvidia.com/
[7] https://lore.kernel.org/lkml/20250903011900.3657435-1-balbirs@nvidia.com/
[8] https://lore.kernel.org/all/20250908000448.180088-1-balbirs@nvidia.com/

These patches are built on top of mm/mm-new.

Changelog v6 [8]:
- Rebased against mm/mm-new after fixing the following
  - Two issues reported by kernel test robot
    - m68k requires an lvalue for pmd_present()
    - BUILD_BUG_ON() issues when THP is disabled
  - kernel doc warnings reported on linux-next
    - Thanks Stephen Rothwell!
  - smatch fixes and issues reported
    - Fix issue with potential NULL page
    - Report about young being uninitialized for device-private pages in
      __split_huge_pmd_locked()
- Several Review comments from David Hildenbrand
  - Indentation changes and style improvements
  - Removal of some unwanted extra lines
  - Introduction of new helper function is_pmd_non_present_folio_entry()
    to represent migration and device-private PMDs
  - Code flow refactoring into migration and device private paths
  - More consistent use of helper function is_pmd_device_private()
- Review comments from Mika Penttilä
  - folio_get() is not required for huge_pmd prior to split

Changelog v5 [7] :
- Rebased against mm/mm-new (resolved conflict caused by
  MIGRATEPAGE_SUCCESS removal)
- Fixed a kernel-doc warning reported by kernel test robot

Changelog v4 [6] :
- Addressed review comments
  - Split patch 2 into a smaller set of patches
  - PVMW_THP_DEVICE_PRIVATE flag is no longer present
  - damon/page_idle and other page_vma_mapped_walk paths are aware of
    device-private folios
  - No more flush for non-present entries in set_pmd_migration_entry
  - Implemented a helper function for migrate_vma_split_folio() which
    splits large folios if seen during a pte walk
  - Removed the controversial change for folio_ref_freeze using
    folio_expected_ref_count()
  - Removed functions invoked from with VM_WARN_ON
  - New test cases and fixes from Matthew Brost
  - Fixed bugs reported by kernel test robot (Thanks!)
  - Several fixes for THP support in nouveau driver

Changelog v3 [5] :
- Addressed review comments
  - No more split_device_private_folio() helper
  - Device private large folios do not end up on deferred scan lists
  - Removed THP size order checks when initializing zone device folio
  - Fixed bugs reported by kernel test robot (Thanks!)

Changelog v2 [3] :
- Several review comments from David Hildenbrand were addressed, Mika,
  Zi, Matthew also provided helpful review comments
  - In paths where it makes sense a new helper
    is_pmd_device_private_entry() is used
  - anon_exclusive handling of zone device private pages in
    split_huge_pmd_locked() has been fixed
  - Patches that introduced helpers have been folded into where they
    are used
- Zone device handling in mm/huge_memory.c has benefited from the code
  and testing of Matthew Brost, he helped find bugs related to
  copy_huge_pmd() and partial unmapping of folios.
- Zone device THP PMD support via page_vma_mapped_walk() is restricted
  to try_to_migrate_one()
- There is a new dedicated helper to split large zone device folios

Changelog v1 [2]:
- Support for handling fault_folio and using trylock in the fault path
- A new test case has been added to measure the throughput improvement
- General refactoring of code to keep up with the changes in mm
- New split folio callback when the entire split is complete/done. The
  callback is used to know when the head order needs to be reset.

Testing:
- Testing was done with ZONE_DEVICE private pages on an x86 VM


Balbir Singh (14):
  mm/zone_device: support large zone device private folios
  mm/huge_memory: add device-private THP support to PMD operations
  mm/rmap: extend rmap and migration support device-private entries
  mm/huge_memory: implement device-private THP splitting
  mm/migrate_device: handle partially mapped folios during collection
  mm/migrate_device: implement THP migration of zone device pages
  mm/memory/fault: add THP fault handling for zone device private pages
  lib/test_hmm: add zone device private THP test infrastructure
  mm/memremap: add driver callback support for folio splitting
  mm/migrate_device: add THP splitting during migration
  lib/test_hmm: add large page allocation failure testing
  selftests/mm/hmm-tests: new tests for zone device THP migration
  selftests/mm/hmm-tests: new throughput tests including THP
  gpu/drm/nouveau: enable THP support for GPU memory migration

Matthew Brost (1):
  selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests

 drivers/gpu/drm/nouveau/nouveau_dmem.c | 304 +++++---
 drivers/gpu/drm/nouveau/nouveau_svm.c  |   6 +-
 drivers/gpu/drm/nouveau/nouveau_svm.h  |   3 +-
 include/linux/huge_mm.h                |  18 +-
 include/linux/memremap.h               |  51 +-
 include/linux/migrate.h                |   2 +
 include/linux/mm.h                     |   1 +
 include/linux/swapops.h                |  32 +
 lib/test_hmm.c                         | 443 +++++++++---
 lib/test_hmm_uapi.h                    |   3 +
 mm/damon/ops-common.c                  |  20 +-
 mm/huge_memory.c                       | 292 ++++++--
 mm/memory.c                            |   5 +-
 mm/memremap.c                          |  34 +-
 mm/migrate_device.c                    | 611 ++++++++++++++--
 mm/page_idle.c                         |   7 +-
 mm/page_vma_mapped.c                   |   7 +
 mm/pgtable-generic.c                   |   2 +-
 mm/rmap.c                              |  27 +-
 tools/testing/selftests/mm/hmm-tests.c | 919 +++++++++++++++++++++++--
 20 files changed, 2392 insertions(+), 395 deletions(-)

-- 
2.50.1




* [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-18  2:49   ` Zi Yan
  2025-09-16 12:21 ` [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations Balbir Singh
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Add routines to support allocation of large-order zone device
folios, along with helper functions for zone device folios: checking
whether a folio is device private and setting zone device data.

When large folios are used, the existing page_free() callback in
pgmap is called when the folio is freed; this is true for both
PAGE_SIZE and higher-order pages.

Zone device private large folios do not support deferred split and
scan like normal THP folios.
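
As a driver-facing illustration (a sketch under assumptions, not part
of this patch; drv_pick_free_chunk() and drvdata are hypothetical),
allocating a PMD-sized device-private folio would look roughly like:

    /* dpage: first struct page of a free, HPAGE_PMD_ORDER-aligned
     * chunk in the device's pgmap range */
    struct page *dpage = drv_pick_free_chunk(drvdata);
    struct folio *folio = page_folio(dpage);

    zone_device_folio_init(folio, HPAGE_PMD_ORDER);
    dpage->zone_device_data = drvdata;    /* driver-private state */

    /* on free, pgmap->ops->page_free() is invoked for the folio */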

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/linux/memremap.h | 10 +++++++++-
 mm/memremap.c            | 34 +++++++++++++++++++++-------------
 mm/rmap.c                |  6 +++++-
 3 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index e5951ba12a28..9c20327c2be5 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
 }
 
 #ifdef CONFIG_ZONE_DEVICE
-void zone_device_page_init(struct page *page);
+void zone_device_folio_init(struct folio *folio, unsigned int order);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
 void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
@@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
 bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
 
 unsigned long memremap_compat_align(void);
+
+static inline void zone_device_page_init(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+
+	zone_device_folio_init(folio, 0);
+}
+
 #else
 static inline void *devm_memremap_pages(struct device *dev,
 		struct dev_pagemap *pgmap)
diff --git a/mm/memremap.c b/mm/memremap.c
index 46cb1b0b6f72..a8481ebf94cc 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
 void free_zone_device_folio(struct folio *folio)
 {
 	struct dev_pagemap *pgmap = folio->pgmap;
+	unsigned long nr = folio_nr_pages(folio);
+	int i;
 
 	if (WARN_ON_ONCE(!pgmap))
 		return;
 
 	mem_cgroup_uncharge(folio);
 
-	/*
-	 * Note: we don't expect anonymous compound pages yet. Once supported
-	 * and we could PTE-map them similar to THP, we'd have to clear
-	 * PG_anon_exclusive on all tail pages.
-	 */
 	if (folio_test_anon(folio)) {
-		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-		__ClearPageAnonExclusive(folio_page(folio, 0));
+		for (i = 0; i < nr; i++)
+			__ClearPageAnonExclusive(folio_page(folio, i));
+	} else {
+		VM_WARN_ON_ONCE(folio_test_large(folio));
 	}
 
 	/*
@@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
 	case MEMORY_DEVICE_COHERENT:
 		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
 			break;
-		pgmap->ops->page_free(folio_page(folio, 0));
-		put_dev_pagemap(pgmap);
+		pgmap->ops->page_free(&folio->page);
+		percpu_ref_put_many(&folio->pgmap->ref, nr);
 		break;
 
 	case MEMORY_DEVICE_GENERIC:
@@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
 	}
 }
 
-void zone_device_page_init(struct page *page)
+void zone_device_folio_init(struct folio *folio, unsigned int order)
 {
+	struct page *page = folio_page(folio, 0);
+
+	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
+
 	/*
 	 * Drivers shouldn't be allocating pages after calling
 	 * memunmap_pages().
 	 */
-	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
-	set_page_count(page, 1);
+	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
+	folio_set_count(folio, 1);
 	lock_page(page);
+
+	if (order > 1) {
+		prep_compound_page(page, order);
+		folio_set_large_rmappable(folio);
+	}
 }
-EXPORT_SYMBOL_GPL(zone_device_page_init);
+EXPORT_SYMBOL_GPL(zone_device_folio_init);
diff --git a/mm/rmap.c b/mm/rmap.c
index 34333ae3bd80..9a2aabfaea6f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1769,9 +1769,13 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 	 * the folio is unmapped and at least one page is still mapped.
 	 *
 	 * Check partially_mapped first to ensure it is a large folio.
+	 *
+	 * Device private folios do not support deferred splitting and
+	 * shrinker based scanning of the folios to free.
 	 */
 	if (partially_mapped && folio_test_anon(folio) &&
-	    !folio_test_partially_mapped(folio))
+	    !folio_test_partially_mapped(folio) &&
+	    !folio_is_device_private(folio))
 		deferred_split_folio(folio, true);
 
 	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
-- 
2.50.1




* [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
  2025-09-16 12:21 ` [v6 01/15] mm/zone_device: support large zone device private folios Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-18 18:45   ` Zi Yan
  2025-09-25  0:25   ` Alistair Popple
  2025-09-16 12:21 ` [v6 03/15] mm/rmap: extend rmap and migration support device-private entries Balbir Singh
                   ` (12 subsequent siblings)
  14 siblings, 2 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, Matthew Brost, David Hildenbrand,
	Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Francois Dugast

Extend core huge page management functions to handle device-private THP
entries.  This enables proper handling of large device-private folios in
fundamental MM operations.

The following functions have been updated:

- copy_huge_pmd(): Handle device-private entries during fork/clone
- zap_huge_pmd(): Properly free device-private THP during munmap
- change_huge_pmd(): Support protection changes on device-private THP
- __pte_offset_map(): Add device-private entry awareness

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/linux/swapops.h | 32 +++++++++++++++++++++++
 mm/huge_memory.c        | 56 ++++++++++++++++++++++++++++++++++-------
 mm/pgtable-generic.c    |  2 +-
 3 files changed, 80 insertions(+), 10 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 64ea151a7ae3..2687928a8146 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -594,10 +594,42 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
 }
 #endif  /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
 
+#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_ENABLE_THP_MIGRATION)
+
+/**
+ * is_pmd_device_private_entry() - Check if PMD contains a device private swap entry
+ * @pmd: The PMD to check
+ *
+ * Returns true if the PMD contains a swap entry that represents a device private
+ * page mapping. This is used for zone device private pages that have been
+ * swapped out but still need special handling during various memory management
+ * operations.
+ *
+ * Return: 1 if PMD contains device private entry, 0 otherwise
+ */
+static inline int is_pmd_device_private_entry(pmd_t pmd)
+{
+	return is_swap_pmd(pmd) && is_device_private_entry(pmd_to_swp_entry(pmd));
+}
+
+#else /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
+
+static inline int is_pmd_device_private_entry(pmd_t pmd)
+{
+	return 0;
+}
+
+#endif /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
+
 static inline int non_swap_entry(swp_entry_t entry)
 {
 	return swp_type(entry) >= MAX_SWAPFILES;
 }
 
+static inline int is_pmd_non_present_folio_entry(pmd_t pmd)
+{
+	return is_pmd_migration_entry(pmd) || is_pmd_device_private_entry(pmd);
+}
+
 #endif /* CONFIG_MMU */
 #endif /* _LINUX_SWAPOPS_H */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5acca24bbabb..a5e4c2aef191 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1703,17 +1703,45 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (unlikely(is_swap_pmd(pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(pmd);
 
-		VM_BUG_ON(!is_pmd_migration_entry(pmd));
-		if (!is_readable_migration_entry(entry)) {
-			entry = make_readable_migration_entry(
-							swp_offset(entry));
+		VM_WARN_ON(!is_pmd_non_present_folio_entry(pmd));
+
+		if (is_writable_migration_entry(entry) ||
+		    is_readable_exclusive_migration_entry(entry)) {
+			entry = make_readable_migration_entry(swp_offset(entry));
 			pmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*src_pmd))
 				pmd = pmd_swp_mksoft_dirty(pmd);
 			if (pmd_swp_uffd_wp(*src_pmd))
 				pmd = pmd_swp_mkuffd_wp(pmd);
 			set_pmd_at(src_mm, addr, src_pmd, pmd);
+		} else if (is_device_private_entry(entry)) {
+			/*
+			 * For device private entries, since there are no
+			 * read exclusive entries, writable = !readable
+			 */
+			if (is_writable_device_private_entry(entry)) {
+				entry = make_readable_device_private_entry(swp_offset(entry));
+				pmd = swp_entry_to_pmd(entry);
+
+				if (pmd_swp_soft_dirty(*src_pmd))
+					pmd = pmd_swp_mksoft_dirty(pmd);
+				if (pmd_swp_uffd_wp(*src_pmd))
+					pmd = pmd_swp_mkuffd_wp(pmd);
+				set_pmd_at(src_mm, addr, src_pmd, pmd);
+			}
+
+			src_folio = pfn_swap_entry_folio(entry);
+			VM_WARN_ON(!folio_test_large(src_folio));
+
+			folio_get(src_folio);
+			/*
+			 * folio_try_dup_anon_rmap_pmd does not fail for
+			 * device private entries.
+			 */
+			folio_try_dup_anon_rmap_pmd(src_folio, &src_folio->page,
+							dst_vma, src_vma);
 		}
+
 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(dst_mm);
 		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
@@ -2211,15 +2239,16 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			folio_remove_rmap_pmd(folio, page, vma);
 			WARN_ON_ONCE(folio_mapcount(folio) < 0);
 			VM_BUG_ON_PAGE(!PageHead(page), page);
-		} else if (thp_migration_supported()) {
+		} else if (is_pmd_non_present_folio_entry(orig_pmd)) {
 			swp_entry_t entry;
 
-			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 			entry = pmd_to_swp_entry(orig_pmd);
 			folio = pfn_swap_entry_folio(entry);
 			flush_needed = 0;
-		} else
-			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+
+			if (!thp_migration_supported())
+				WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+		}
 
 		if (folio_test_anon(folio)) {
 			zap_deposited_table(tlb->mm, pmd);
@@ -2239,6 +2268,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 				folio_mark_accessed(folio);
 		}
 
+		if (folio_is_device_private(folio)) {
+			folio_remove_rmap_pmd(folio, &folio->page, vma);
+			WARN_ON_ONCE(folio_mapcount(folio) < 0);
+			folio_put(folio);
+		}
+
 		spin_unlock(ptl);
 		if (flush_needed)
 			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
@@ -2367,7 +2402,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		struct folio *folio = pfn_swap_entry_folio(entry);
 		pmd_t newpmd;
 
-		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
+		VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd));
 		if (is_writable_migration_entry(entry)) {
 			/*
 			 * A protection check is difficult so
@@ -2380,6 +2415,9 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
+		} else if (is_writable_device_private_entry(entry)) {
+			entry = make_readable_device_private_entry(swp_offset(entry));
+			newpmd = swp_entry_to_pmd(entry);
 		} else {
 			newpmd = *pmd;
 		}
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 567e2d084071..0c847cdf4fd3 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -290,7 +290,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 
 	if (pmdvalp)
 		*pmdvalp = pmdval;
-	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
+	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
 		goto nomap;
 	if (unlikely(pmd_trans_huge(pmdval)))
 		goto nomap;
-- 
2.50.1




* [v6 03/15] mm/rmap: extend rmap and migration support device-private entries
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
  2025-09-16 12:21 ` [v6 01/15] mm/zone_device: support large zone device private folios Balbir Singh
  2025-09-16 12:21 ` [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-22 20:13   ` Zi Yan
  2025-09-16 12:21 ` [v6 04/15] mm/huge_memory: implement device-private THP splitting Balbir Singh
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, SeongJae Park, David Hildenbrand,
	Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Add device-private THP support to the reverse mapping
infrastructure, enabling proper handling during migration and walk
operations.

The key changes are:
- add_migration_pmd()/remove_migration_pmd(): Handle device-private
  entries during folio migration and splitting
- page_vma_mapped_walk(): Recognize device-private THP entries during
  VMA traversal operations

This change supports folio splitting and migration operations on
device-private entries.

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Reviewed-by: SeongJae Park <sj@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 mm/damon/ops-common.c | 20 +++++++++++++++++---
 mm/huge_memory.c      | 16 +++++++++++++++-
 mm/page_idle.c        |  7 +++++--
 mm/page_vma_mapped.c  |  7 +++++++
 mm/rmap.c             | 21 +++++++++++++++++----
 5 files changed, 61 insertions(+), 10 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index 998c5180a603..eda4de553611 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -75,12 +75,24 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
 void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	struct folio *folio = damon_get_folio(pmd_pfn(pmdp_get(pmd)));
+	pmd_t pmdval = pmdp_get(pmd);
+	struct folio *folio;
+	bool young = false;
+	unsigned long pfn;
+
+	if (likely(pmd_present(pmdval)))
+		pfn = pmd_pfn(pmdval);
+	else
+		pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
 
+	folio = damon_get_folio(pfn);
 	if (!folio)
 		return;
 
-	if (pmdp_clear_young_notify(vma, addr, pmd))
+	if (likely(pmd_present(pmdval)))
+		young |= pmdp_clear_young_notify(vma, addr, pmd);
+	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
+	if (young)
 		folio_set_young(folio);
 
 	folio_set_idle(folio);
@@ -203,7 +215,9 @@ static bool damon_folio_young_one(struct folio *folio,
 				mmu_notifier_test_young(vma->vm_mm, addr);
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			*accessed = pmd_young(pmdp_get(pvmw.pmd)) ||
+			pmd_t pmd = pmdp_get(pvmw.pmd);
+
+			*accessed = (pmd_present(pmd) && pmd_young(pmd)) ||
 				!folio_test_idle(folio) ||
 				mmu_notifier_test_young(vma->vm_mm, addr);
 #else
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a5e4c2aef191..78166db72f4d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4637,7 +4637,10 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		return 0;
 
 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
-	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
+	if (unlikely(!pmd_present(*pvmw->pmd)))
+		pmdval = pmdp_huge_get_and_clear(vma->vm_mm, address, pvmw->pmd);
+	else
+		pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
 
 	/* See folio_try_share_anon_rmap_pmd(): invalidate PMD first. */
 	anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(page);
@@ -4687,6 +4690,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	folio_get(folio);
 	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
+
+	if (folio_is_device_private(folio)) {
+		if (pmd_write(pmde))
+			entry = make_writable_device_private_entry(
+							page_to_pfn(new));
+		else
+			entry = make_readable_device_private_entry(
+							page_to_pfn(new));
+		pmde = swp_entry_to_pmd(entry);
+	}
+
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
diff --git a/mm/page_idle.c b/mm/page_idle.c
index a82b340dc204..3bf0fbe05cc2 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -71,8 +71,11 @@ static bool page_idle_clear_pte_refs_one(struct folio *folio,
 				referenced |= ptep_test_and_clear_young(vma, addr, pvmw.pte);
 			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
 		} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-			if (pmdp_clear_young_notify(vma, addr, pvmw.pmd))
-				referenced = true;
+			pmd_t pmdval = pmdp_get(pvmw.pmd);
+
+			if (likely(pmd_present(pmdval)))
+				referenced |= pmdp_clear_young_notify(vma, addr, pvmw.pmd);
+			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
 		} else {
 			/* unexpected pmd-mapped page? */
 			WARN_ON_ONCE(1);
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e981a1a292d2..159953c590cc 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -277,6 +277,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			 * cannot return prematurely, while zap_huge_pmd() has
 			 * cleared *pmd but not decremented compound_mapcount().
 			 */
+			swp_entry_t entry = pmd_to_swp_entry(pmde);
+
+			if (is_device_private_entry(entry)) {
+				pvmw->ptl = pmd_lock(mm, pvmw->pmd);
+				return true;
+			}
+
 			if ((pvmw->flags & PVMW_SYNC) &&
 			    thp_vma_suitable_order(vma, pvmw->address,
 						   PMD_ORDER) &&
diff --git a/mm/rmap.c b/mm/rmap.c
index 9a2aabfaea6f..080fc4048431 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1063,9 +1063,11 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			pmd_t *pmd = pvmw->pmd;
-			pmd_t entry;
+			pmd_t entry = pmdp_get(pmd);
 
-			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
+			if (!pmd_present(entry))
+				continue;
+			if (!pmd_dirty(entry) && !pmd_write(entry))
 				continue;
 
 			flush_cache_range(vma, address,
@@ -2330,6 +2332,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 	while (page_vma_mapped_walk(&pvmw)) {
 		/* PMD-mapped THP migration entry */
 		if (!pvmw.pte) {
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+			unsigned long pfn;
+			pmd_t pmdval;
+#endif
+
 			if (flags & TTU_SPLIT_HUGE_PMD) {
 				split_huge_pmd_locked(vma, pvmw.address,
 						      pvmw.pmd, true);
@@ -2338,8 +2345,14 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				break;
 			}
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
-			subpage = folio_page(folio,
-				pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
+			pmdval = pmdp_get(pvmw.pmd);
+			if (likely(pmd_present(pmdval)))
+				pfn = pmd_pfn(pmdval);
+			else
+				pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
+
+			subpage = folio_page(folio, pfn - folio_pfn(folio));
+
 			VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
 					!folio_test_pmd_mappable(folio), folio);
 
-- 
2.50.1




* [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (2 preceding siblings ...)
  2025-09-16 12:21 ` [v6 03/15] mm/rmap: extend rmap and migration support device-private entries Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-22 21:09   ` Zi Yan
  2025-09-25 10:01   ` David Hildenbrand
  2025-09-16 12:21 ` [v6 05/15] mm/migrate_device: handle partially mapped folios during collection Balbir Singh
                   ` (10 subsequent siblings)
  14 siblings, 2 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Add support for splitting device-private THP folios, enabling fallback
to smaller page sizes when large page allocation or migration fails.

Key changes:
- split_huge_pmd(): Handle device-private PMD entries during splitting
- Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
- Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
  don't support shared zero page semantics

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
 1 file changed, 98 insertions(+), 40 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78166db72f4d..5291ee155a02 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	struct page *page;
 	pgtable_t pgtable;
 	pmd_t old_pmd, _pmd;
-	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
-	bool anon_exclusive = false, dirty = false;
+	bool soft_dirty, uffd_wp = false, young = false, write = false;
+	bool anon_exclusive = false, dirty = false, present = false;
 	unsigned long addr;
 	pte_t *pte;
 	int i;
+	swp_entry_t swp_entry;
 
 	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
-	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
+
+	VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd) && !pmd_trans_huge(*pmd));
 
 	count_vm_event(THP_SPLIT_PMD);
 
@@ -2929,20 +2931,47 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		return __split_huge_zero_page_pmd(vma, haddr, pmd);
 	}
 
-	pmd_migration = is_pmd_migration_entry(*pmd);
-	if (unlikely(pmd_migration)) {
-		swp_entry_t entry;
 
+	present = pmd_present(*pmd);
+	if (is_pmd_migration_entry(*pmd)) {
 		old_pmd = *pmd;
-		entry = pmd_to_swp_entry(old_pmd);
-		page = pfn_swap_entry_to_page(entry);
-		write = is_writable_migration_entry(entry);
+		swp_entry = pmd_to_swp_entry(old_pmd);
+		page = pfn_swap_entry_to_page(swp_entry);
+		folio = page_folio(page);
+
+		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+		uffd_wp = pmd_swp_uffd_wp(old_pmd);
+
+		write = is_writable_migration_entry(swp_entry);
 		if (PageAnon(page))
-			anon_exclusive = is_readable_exclusive_migration_entry(entry);
-		young = is_migration_entry_young(entry);
-		dirty = is_migration_entry_dirty(entry);
+			anon_exclusive = is_readable_exclusive_migration_entry(swp_entry);
+		young = is_migration_entry_young(swp_entry);
+		dirty = is_migration_entry_dirty(swp_entry);
+	} else if (is_pmd_device_private_entry(*pmd)) {
+		old_pmd = *pmd;
+		swp_entry = pmd_to_swp_entry(old_pmd);
+		page = pfn_swap_entry_to_page(swp_entry);
+		folio = page_folio(page);
+
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
+
+		write = is_writable_device_private_entry(swp_entry);
+		anon_exclusive = PageAnonExclusive(page);
+
+		if (freeze && anon_exclusive &&
+		    folio_try_share_anon_rmap_pmd(folio, page))
+			freeze = false;
+		if (!freeze) {
+			rmap_t rmap_flags = RMAP_NONE;
+
+			folio_ref_add(folio, HPAGE_PMD_NR - 1);
+			if (anon_exclusive)
+				rmap_flags |= RMAP_EXCLUSIVE;
+
+			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
+						 vma, haddr, rmap_flags);
+		}
 	} else {
 		/*
 		 * Up to this point the pmd is present and huge and userland has
@@ -3026,32 +3055,57 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 * Note that NUMA hinting access restrictions are not transferred to
 	 * avoid any possibility of altering permissions across VMAs.
 	 */
-	if (freeze || pmd_migration) {
-		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
-			pte_t entry;
-			swp_entry_t swp_entry;
-
-			if (write)
-				swp_entry = make_writable_migration_entry(
-							page_to_pfn(page + i));
-			else if (anon_exclusive)
-				swp_entry = make_readable_exclusive_migration_entry(
-							page_to_pfn(page + i));
-			else
-				swp_entry = make_readable_migration_entry(
-							page_to_pfn(page + i));
-			if (young)
-				swp_entry = make_migration_entry_young(swp_entry);
-			if (dirty)
-				swp_entry = make_migration_entry_dirty(swp_entry);
-			entry = swp_entry_to_pte(swp_entry);
-			if (soft_dirty)
-				entry = pte_swp_mksoft_dirty(entry);
-			if (uffd_wp)
-				entry = pte_swp_mkuffd_wp(entry);
+	if (freeze || !present) {
+		pte_t entry;
 
-			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
-			set_pte_at(mm, addr, pte + i, entry);
+		if (freeze || is_migration_entry(swp_entry)) {
+			for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
+				if (write)
+					swp_entry = make_writable_migration_entry(
+								page_to_pfn(page + i));
+				else if (anon_exclusive)
+					swp_entry = make_readable_exclusive_migration_entry(
+								page_to_pfn(page + i));
+				else
+					swp_entry = make_readable_migration_entry(
+								page_to_pfn(page + i));
+				if (young)
+					swp_entry = make_migration_entry_young(swp_entry);
+				if (dirty)
+					swp_entry = make_migration_entry_dirty(swp_entry);
+
+				entry = swp_entry_to_pte(swp_entry);
+				if (soft_dirty)
+					entry = pte_swp_mksoft_dirty(entry);
+				if (uffd_wp)
+					entry = pte_swp_mkuffd_wp(entry);
+				VM_WARN_ON(!pte_none(ptep_get(pte + i)));
+				set_pte_at(mm, addr, pte + i, entry);
+			}
+		} else {
+			for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
+				/*
+				 * anon_exclusive was already propagated to the relevant
+				 * pages corresponding to the pte entries when freeze
+				 * is false.
+				 */
+				if (write)
+					swp_entry = make_writable_device_private_entry(
+								page_to_pfn(page + i));
+				else
+					swp_entry = make_readable_device_private_entry(
+								page_to_pfn(page + i));
+				/*
+				 * Young and dirty bits are not propagated via swp_entry
+				 */
+				entry = swp_entry_to_pte(swp_entry);
+				if (soft_dirty)
+					entry = pte_swp_mksoft_dirty(entry);
+				if (uffd_wp)
+					entry = pte_swp_mkuffd_wp(entry);
+				VM_WARN_ON(!pte_none(ptep_get(pte + i)));
+				set_pte_at(mm, addr, pte + i, entry);
+			}
 		}
 	} else {
 		pte_t entry;
@@ -3076,7 +3130,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 	pte_unmap(pte);
 
-	if (!pmd_migration)
+	if (!is_pmd_migration_entry(*pmd))
 		folio_remove_rmap_pmd(folio, page, vma);
 	if (freeze)
 		put_page(page);
@@ -3089,7 +3143,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze)
 {
 	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
-	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
+	if (pmd_trans_huge(*pmd) || is_pmd_non_present_folio_entry(*pmd))
 		__split_huge_pmd_locked(vma, pmd, address, freeze);
 }
 
@@ -3268,6 +3322,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
 	VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
 	lockdep_assert_held(&lruvec->lru_lock);
 
+	if (folio_is_device_private(folio))
+		return;
+
 	if (list) {
 		/* page reclaim is reclaiming a huge page */
 		VM_WARN_ON(folio_test_lru(folio));
@@ -3885,8 +3942,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (nr_shmem_dropped)
 		shmem_uncharge(mapping->host, nr_shmem_dropped);
 
-	if (!ret && is_anon)
+	if (!ret && is_anon && !folio_is_device_private(folio))
 		remap_flags = RMP_USE_SHARED_ZEROPAGE;
+
 	remap_page(folio, 1 << order, remap_flags);
 
 	/*
-- 
2.50.1




* [v6 05/15] mm/migrate_device: handle partially mapped folios during collection
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (3 preceding siblings ...)
  2025-09-16 12:21 ` [v6 04/15] mm/huge_memory: implement device-private THP splitting Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-23  2:23   ` Zi Yan
  2025-09-16 12:21 ` [v6 06/15] mm/migrate_device: implement THP migration of zone device pages Balbir Singh
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Extend migrate_vma_collect_pmd() to handle partially mapped large folios
that require splitting before migration can proceed.

During PTE walk in the collection phase, if a large folio is only
partially mapped in the migration range, it must be split to ensure the
folio is correctly migrated.

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 mm/migrate_device.c | 82 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index abd9f6850db6..70c0601f70ea 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
 	return 0;
 }
 
+/**
+ * migrate_vma_split_folio() - Helper function to split a THP folio
+ * @folio: the folio to split
+ * @fault_page: struct page associated with the fault if any
+ *
+ * Returns 0 on success
+ */
+static int migrate_vma_split_folio(struct folio *folio,
+				   struct page *fault_page)
+{
+	int ret;
+	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
+	struct folio *new_fault_folio = NULL;
+
+	if (folio != fault_folio) {
+		folio_get(folio);
+		folio_lock(folio);
+	}
+
+	ret = split_folio(folio);
+	if (ret) {
+		if (folio != fault_folio) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+		return ret;
+	}
+
+	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
+
+	/*
+	 * Ensure the lock is held on the correct
+	 * folio after the split
+	 */
+	if (!new_fault_folio) {
+		folio_unlock(folio);
+		folio_put(folio);
+	} else if (folio != new_fault_folio) {
+		folio_get(new_fault_folio);
+		folio_lock(new_fault_folio);
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	return 0;
+}
+
 static int migrate_vma_collect_pmd(pmd_t *pmdp,
 				   unsigned long start,
 				   unsigned long end,
@@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			 * page table entry. Other special swap entries are not
 			 * migratable, and we ignore regular swapped page.
 			 */
+			struct folio *folio;
+
 			entry = pte_to_swp_entry(pte);
 			if (!is_device_private_entry(entry))
 				goto next;
@@ -147,6 +196,23 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			    pgmap->owner != migrate->pgmap_owner)
 				goto next;
 
+			folio = page_folio(page);
+			if (folio_test_large(folio)) {
+				int ret;
+
+				pte_unmap_unlock(ptep, ptl);
+				ret = migrate_vma_split_folio(folio,
+							  migrate->fault_page);
+
+				if (ret) {
+					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+					goto next;
+				}
+
+				addr = start;
+				goto again;
+			}
+
 			mpfn = migrate_pfn(page_to_pfn(page)) |
 					MIGRATE_PFN_MIGRATE;
 			if (is_writable_device_private_entry(entry))
@@ -171,6 +237,22 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 					pgmap->owner != migrate->pgmap_owner)
 					goto next;
 			}
+			folio = page ? page_folio(page) : NULL;
+			if (folio && folio_test_large(folio)) {
+				int ret;
+
+				pte_unmap_unlock(ptep, ptl);
+				ret = migrate_vma_split_folio(folio,
+							  migrate->fault_page);
+
+				if (ret) {
+					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+					goto next;
+				}
+
+				addr = start;
+				goto again;
+			}
 			mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
 			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
 		}
-- 
2.50.1




* [v6 06/15] mm/migrate_device: implement THP migration of zone device pages
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (4 preceding siblings ...)
  2025-09-16 12:21 ` [v6 05/15] mm/migrate_device: handle partially mapped folios during collection Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 07/15] mm/memory/fault: add THP fault handling for zone device private pages Balbir Singh
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

MIGRATE_VMA_SELECT_COMPOUND will be used to select THP pages during
migrate_vma_setup(), and MIGRATE_PFN_COMPOUND will mark device pages
being migrated as compound pages during device pfn migration.

migrate_device code paths go through the collect, setup and finalize
phases of migration.

The entries in src and dst arrays passed to these functions still remain
at a PAGE_SIZE granularity.  When a compound page is passed, the first
entry has the PFN along with MIGRATE_PFN_COMPOUND and other flags set
(MIGRATE_PFN_MIGRATE, MIGRATE_PFN_VALID), the remaining entries
(HPAGE_PMD_NR - 1) are filled with 0's.  This representation allows for
the compound page to be split into smaller page sizes.
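
To illustrate the representation described above (a sketch only;
drv_copy() is a hypothetical helper), a driver consuming the src
array treats a compound entry as covering HPAGE_PMD_NR pfns:

    for (i = 0; i < args->npages; i++) {
        unsigned long spfn = args->src[i];

        if (!(spfn & MIGRATE_PFN_MIGRATE))
            continue;

        if (spfn & MIGRATE_PFN_COMPOUND) {
            /* entries i + 1 .. i + HPAGE_PMD_NR - 1 are 0 */
            drv_copy(migrate_pfn_to_page(spfn), HPAGE_PMD_NR);
            i += HPAGE_PMD_NR - 1;
        } else {
            drv_copy(migrate_pfn_to_page(spfn), 1);
        }
    }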

migrate_vma_collect_hole() and migrate_vma_collect_pmd() are now THP
aware.  Two new helper functions, migrate_vma_collect_huge_pmd() and
migrate_vma_insert_huge_pmd_page(), have been added.

migrate_vma_collect_huge_pmd() can collect THP pages, but if for some
reason this fails, there is fallback support to split the folio and
migrate it.

migrate_vma_insert_huge_pmd_page() closely follows the logic of
migrate_vma_insert_page().

Support for splitting pages as needed for migration will follow in later
patches in this series.

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/linux/migrate.h |   2 +
 mm/migrate_device.c     | 468 ++++++++++++++++++++++++++++++++++------
 2 files changed, 407 insertions(+), 63 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 1f0ac122c3bf..41b4cc05a450 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -125,6 +125,7 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
 #define MIGRATE_PFN_VALID	(1UL << 0)
 #define MIGRATE_PFN_MIGRATE	(1UL << 1)
 #define MIGRATE_PFN_WRITE	(1UL << 3)
+#define MIGRATE_PFN_COMPOUND	(1UL << 4)
 #define MIGRATE_PFN_SHIFT	6
 
 static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
@@ -143,6 +144,7 @@ enum migrate_vma_direction {
 	MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
 	MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
 	MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
+	MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
 };
 
 struct migrate_vma {
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 70c0601f70ea..1663ce553184 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -14,6 +14,7 @@
 #include <linux/pagewalk.h>
 #include <linux/rmap.h>
 #include <linux/swapops.h>
+#include <linux/pgalloc.h>
 #include <asm/tlbflush.h>
 #include "internal.h"
 
@@ -44,6 +45,23 @@ static int migrate_vma_collect_hole(unsigned long start,
 	if (!vma_is_anonymous(walk->vma))
 		return migrate_vma_collect_skip(start, end, walk);
 
+	if (thp_migration_supported() &&
+		(migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+		(IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+		 IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
+						MIGRATE_PFN_COMPOUND;
+		migrate->dst[migrate->npages] = 0;
+		migrate->npages++;
+		migrate->cpages++;
+
+		/*
+		 * Collect the remaining entries as holes, in case we
+		 * need to split later
+		 */
+		return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
+	}
+
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
 		migrate->dst[migrate->npages] = 0;
@@ -101,57 +119,150 @@ static int migrate_vma_split_folio(struct folio *folio,
 	return 0;
 }
 
-static int migrate_vma_collect_pmd(pmd_t *pmdp,
-				   unsigned long start,
-				   unsigned long end,
-				   struct mm_walk *walk)
+/** migrate_vma_collect_huge_pmd - collect THP pages without splitting the
+ * folio for device private pages.
+ * @pmdp: pointer to pmd entry
+ * @start: start address of the range for migration
+ * @end: end address of the range for migration
+ * @walk: mm_walk callback structure
+ *
+ * Collect the huge pmd entry at @pmdp for migration and set the
+ * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that
+ * migration will occur at HPAGE_PMD granularity
+ */
+static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
+					unsigned long end, struct mm_walk *walk,
+					struct folio *fault_folio)
 {
+	struct mm_struct *mm = walk->mm;
+	struct folio *folio;
 	struct migrate_vma *migrate = walk->private;
-	struct folio *fault_folio = migrate->fault_page ?
-		page_folio(migrate->fault_page) : NULL;
-	struct vm_area_struct *vma = walk->vma;
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long addr = start, unmapped = 0;
 	spinlock_t *ptl;
-	pte_t *ptep;
+	swp_entry_t entry;
+	int ret;
+	unsigned long write = 0;
 
-again:
-	if (pmd_none(*pmdp))
+	ptl = pmd_lock(mm, pmdp);
+	if (pmd_none(*pmdp)) {
+		spin_unlock(ptl);
 		return migrate_vma_collect_hole(start, end, -1, walk);
+	}
 
 	if (pmd_trans_huge(*pmdp)) {
-		struct folio *folio;
-
-		ptl = pmd_lock(mm, pmdp);
-		if (unlikely(!pmd_trans_huge(*pmdp))) {
+		if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
 			spin_unlock(ptl);
-			goto again;
+			return migrate_vma_collect_skip(start, end, walk);
 		}
 
 		folio = pmd_folio(*pmdp);
 		if (is_huge_zero_folio(folio)) {
 			spin_unlock(ptl);
-			split_huge_pmd(vma, pmdp, addr);
-		} else {
-			int ret;
+			return migrate_vma_collect_hole(start, end, -1, walk);
+		}
+		if (pmd_write(*pmdp))
+			write = MIGRATE_PFN_WRITE;
+	} else if (!pmd_present(*pmdp)) {
+		entry = pmd_to_swp_entry(*pmdp);
+		folio = pfn_swap_entry_folio(entry);
+
+		if (!is_device_private_entry(entry) ||
+			!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
+			(folio->pgmap->owner != migrate->pgmap_owner)) {
+			spin_unlock(ptl);
+			return migrate_vma_collect_skip(start, end, walk);
+		}
 
-			folio_get(folio);
+		if (is_migration_entry(entry)) {
+			migration_entry_wait_on_locked(entry, ptl);
 			spin_unlock(ptl);
-			/* FIXME: we don't expect THP for fault_folio */
-			if (WARN_ON_ONCE(fault_folio == folio))
-				return migrate_vma_collect_skip(start, end,
-								walk);
-			if (unlikely(!folio_trylock(folio)))
-				return migrate_vma_collect_skip(start, end,
-								walk);
-			ret = split_folio(folio);
-			if (fault_folio != folio)
-				folio_unlock(folio);
-			folio_put(folio);
-			if (ret)
-				return migrate_vma_collect_skip(start, end,
-								walk);
+			return -EAGAIN;
+		}
+
+		if (is_writable_device_private_entry(entry))
+			write = MIGRATE_PFN_WRITE;
+	} else {
+		spin_unlock(ptl);
+		return -EAGAIN;
+	}
+
+	folio_get(folio);
+	if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
+		spin_unlock(ptl);
+		folio_put(folio);
+		return migrate_vma_collect_skip(start, end, walk);
+	}
+
+	if (thp_migration_supported() &&
+		(migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+		(IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+		 IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+
+		struct page_vma_mapped_walk pvmw = {
+			.ptl = ptl,
+			.address = start,
+			.pmd = pmdp,
+			.vma = walk->vma,
+		};
+
+		unsigned long pfn = page_to_pfn(folio_page(folio, 0));
+
+		migrate->src[migrate->npages] = migrate_pfn(pfn) | write
+						| MIGRATE_PFN_MIGRATE
+						| MIGRATE_PFN_COMPOUND;
+		migrate->dst[migrate->npages++] = 0;
+		migrate->cpages++;
+		ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
+		if (ret) {
+			migrate->npages--;
+			migrate->cpages--;
+			migrate->src[migrate->npages] = 0;
+			migrate->dst[migrate->npages] = 0;
+			goto fallback;
 		}
+		migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
+		spin_unlock(ptl);
+		return 0;
+	}
+
+fallback:
+	spin_unlock(ptl);
+	if (!folio_test_large(folio))
+		goto done;
+	ret = split_folio(folio);
+	if (fault_folio != folio)
+		folio_unlock(folio);
+	folio_put(folio);
+	if (ret)
+		return migrate_vma_collect_skip(start, end, walk);
+	if (pmd_none(pmdp_get_lockless(pmdp)))
+		return migrate_vma_collect_hole(start, end, -1, walk);
+
+done:
+	return -ENOENT;
+}
+
+static int migrate_vma_collect_pmd(pmd_t *pmdp,
+				   unsigned long start,
+				   unsigned long end,
+				   struct mm_walk *walk)
+{
+	struct migrate_vma *migrate = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = start, unmapped = 0;
+	spinlock_t *ptl;
+	struct folio *fault_folio = migrate->fault_page ?
+		page_folio(migrate->fault_page) : NULL;
+	pte_t *ptep;
+
+again:
+	if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) {
+		int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio);
+
+		if (ret == -EAGAIN)
+			goto again;
+		if (ret == 0)
+			return 0;
 	}
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
@@ -257,8 +368,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
 		}
 
-		/* FIXME support THP */
-		if (!page || !page->mapping || PageTransCompound(page)) {
+		if (!page || !page->mapping) {
 			mpfn = 0;
 			goto next;
 		}
@@ -429,14 +539,6 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 	 */
 	int extra = 1 + (page == fault_page);
 
-	/*
-	 * FIXME support THP (transparent huge page), it is bit more complex to
-	 * check them than regular pages, because they can be mapped with a pmd
-	 * or with a pte (split pte mapping).
-	 */
-	if (folio_test_large(folio))
-		return false;
-
 	/* Page from ZONE_DEVICE have one extra reference */
 	if (folio_is_zone_device(folio))
 		extra++;
@@ -467,17 +569,24 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 
 	lru_add_drain();
 
-	for (i = 0; i < npages; i++) {
+	for (i = 0; i < npages; ) {
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct folio *folio;
+		unsigned int nr = 1;
 
 		if (!page) {
 			if (src_pfns[i] & MIGRATE_PFN_MIGRATE)
 				unmapped++;
-			continue;
+			goto next;
 		}
 
 		folio =	page_folio(page);
+		nr = folio_nr_pages(folio);
+
+		if (nr > 1)
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+
+
 		/* ZONE_DEVICE folios are not on LRU */
 		if (!folio_is_zone_device(folio)) {
 			if (!folio_test_lru(folio) && allow_drain) {
@@ -489,7 +598,7 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 			if (!folio_isolate_lru(folio)) {
 				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 				restore++;
-				continue;
+				goto next;
 			}
 
 			/* Drop the reference we took in collect */
@@ -508,10 +617,12 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 			restore++;
-			continue;
+			goto next;
 		}
 
 		unmapped++;
+next:
+		i += nr;
 	}
 
 	for (i = 0; i < npages && restore; i++) {
@@ -657,6 +768,160 @@ int migrate_vma_setup(struct migrate_vma *args)
 }
 EXPORT_SYMBOL(migrate_vma_setup);
 
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+/**
+ * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
+ * at @addr. folio is already allocated as a part of the migration process with
+ * large page.
+ *
+ * @page needs to be initialized and setup after it's allocated. The code bits
+ * here follow closely the code in __do_huge_pmd_anonymous_page(). This API does
+ * not support THP zero pages.
+ *
+ * @migrate: migrate_vma arguments
+ * @addr: address where the folio will be inserted
+ * @page: page to be inserted at @addr
+ * @src: src pfn which is being migrated
+ * @pmdp: pointer to the pmd
+ */
+static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
+					 unsigned long addr,
+					 struct page *page,
+					 unsigned long *src,
+					 pmd_t *pmdp)
+{
+	struct vm_area_struct *vma = migrate->vma;
+	gfp_t gfp = vma_thp_gfp_mask(vma);
+	struct folio *folio = page_folio(page);
+	int ret;
+	vm_fault_t csa_ret;
+	spinlock_t *ptl;
+	pgtable_t pgtable;
+	pmd_t entry;
+	bool flush = false;
+	unsigned long i;
+
+	VM_WARN_ON_FOLIO(!folio, folio);
+	VM_WARN_ON_ONCE(!pmd_none(*pmdp) && !is_huge_zero_pmd(*pmdp));
+
+	if (!thp_vma_suitable_order(vma, addr, HPAGE_PMD_ORDER))
+		return -EINVAL;
+
+	ret = anon_vma_prepare(vma);
+	if (ret)
+		return ret;
+
+	folio_set_order(folio, HPAGE_PMD_ORDER);
+	folio_set_large_rmappable(folio);
+
+	if (mem_cgroup_charge(folio, migrate->vma->vm_mm, gfp)) {
+		count_vm_event(THP_FAULT_FALLBACK);
+		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+		ret = -ENOMEM;
+		goto abort;
+	}
+
+	__folio_mark_uptodate(folio);
+
+	pgtable = pte_alloc_one(vma->vm_mm);
+	if (unlikely(!pgtable))
+		goto abort;
+
+	if (folio_is_device_private(folio)) {
+		swp_entry_t swp_entry;
+
+		if (vma->vm_flags & VM_WRITE)
+			swp_entry = make_writable_device_private_entry(
+						page_to_pfn(page));
+		else
+			swp_entry = make_readable_device_private_entry(
+						page_to_pfn(page));
+		entry = swp_entry_to_pmd(swp_entry);
+	} else {
+		if (folio_is_zone_device(folio) &&
+		    !folio_is_device_coherent(folio)) {
+			goto abort;
+		}
+		entry = folio_mk_pmd(folio, vma->vm_page_prot);
+		if (vma->vm_flags & VM_WRITE)
+			entry = pmd_mkwrite(pmd_mkdirty(entry), vma);
+	}
+
+	ptl = pmd_lock(vma->vm_mm, pmdp);
+	csa_ret = check_stable_address_space(vma->vm_mm);
+	if (csa_ret)
+		goto abort;
+
+	/*
+	 * Check for userfaultfd but do not deliver the fault. Instead,
+	 * just back off.
+	 */
+	if (userfaultfd_missing(vma))
+		goto unlock_abort;
+
+	if (!pmd_none(*pmdp)) {
+		if (!is_huge_zero_pmd(*pmdp))
+			goto unlock_abort;
+		flush = true;
+	} else if (!pmd_none(*pmdp))
+		goto unlock_abort;
+
+	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
+	if (!folio_is_zone_device(folio))
+		folio_add_lru_vma(folio, vma);
+	folio_get(folio);
+
+	if (flush) {
+		pte_free(vma->vm_mm, pgtable);
+		flush_cache_page(vma, addr, addr + HPAGE_PMD_SIZE);
+		pmdp_invalidate(vma, addr, pmdp);
+	} else {
+		pgtable_trans_huge_deposit(vma->vm_mm, pmdp, pgtable);
+		mm_inc_nr_ptes(vma->vm_mm);
+	}
+	set_pmd_at(vma->vm_mm, addr, pmdp, entry);
+	update_mmu_cache_pmd(vma, addr, pmdp);
+
+	spin_unlock(ptl);
+
+	count_vm_event(THP_FAULT_ALLOC);
+	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
+	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+
+	return 0;
+
+unlock_abort:
+	spin_unlock(ptl);
+abort:
+	for (i = 0; i < HPAGE_PMD_NR; i++)
+		src[i] &= ~MIGRATE_PFN_MIGRATE;
+	return 0;
+}
+#else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
+static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
+					 unsigned long addr,
+					 struct page *page,
+					 unsigned long *src,
+					 pmd_t *pmdp)
+{
+	return 0;
+}
+#endif
+
+static unsigned long migrate_vma_nr_pages(unsigned long *src)
+{
+	unsigned long nr = 1;
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+	if (*src & MIGRATE_PFN_COMPOUND)
+		nr = HPAGE_PMD_NR;
+#else
+	if (*src & MIGRATE_PFN_COMPOUND)
+		VM_WARN_ON_ONCE(true);
+#endif
+	return nr;
+}
+
 /*
  * This code closely matches the code in:
  *   __handle_mm_fault()
@@ -667,9 +932,10 @@ EXPORT_SYMBOL(migrate_vma_setup);
  */
 static void migrate_vma_insert_page(struct migrate_vma *migrate,
 				    unsigned long addr,
-				    struct page *page,
+				    unsigned long *dst,
 				    unsigned long *src)
 {
+	struct page *page = migrate_pfn_to_page(*dst);
 	struct folio *folio = page_folio(page);
 	struct vm_area_struct *vma = migrate->vma;
 	struct mm_struct *mm = vma->vm_mm;
@@ -697,8 +963,24 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	pmdp = pmd_alloc(mm, pudp, addr);
 	if (!pmdp)
 		goto abort;
-	if (pmd_trans_huge(*pmdp))
-		goto abort;
+
+	if (thp_migration_supported() && (*dst & MIGRATE_PFN_COMPOUND)) {
+		int ret = migrate_vma_insert_huge_pmd_page(migrate, addr, page,
+								src, pmdp);
+		if (ret)
+			goto abort;
+		return;
+	}
+
+	if (!pmd_none(*pmdp)) {
+		if (pmd_trans_huge(*pmdp)) {
+			if (!is_huge_zero_pmd(*pmdp))
+				goto abort;
+			split_huge_pmd(vma, pmdp, addr);
+		} else if (pmd_leaf(*pmdp))
+			goto abort;
+	}
+
 	if (pte_alloc(mm, pmdp))
 		goto abort;
 	if (unlikely(anon_vma_prepare(vma)))
@@ -789,23 +1071,24 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 	unsigned long i;
 	bool notified = false;
 
-	for (i = 0; i < npages; i++) {
+	for (i = 0; i < npages; ) {
 		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct address_space *mapping;
 		struct folio *newfolio, *folio;
 		int r, extra_cnt = 0;
+		unsigned long nr = 1;
 
 		if (!newpage) {
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-			continue;
+			goto next;
 		}
 
 		if (!page) {
 			unsigned long addr;
 
 			if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE))
-				continue;
+				goto next;
 
 			/*
 			 * The only time there is no vma is when called from
@@ -823,15 +1106,47 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 					migrate->pgmap_owner);
 				mmu_notifier_invalidate_range_start(&range);
 			}
-			migrate_vma_insert_page(migrate, addr, newpage,
+
+			if ((src_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+				(!(dst_pfns[i] & MIGRATE_PFN_COMPOUND))) {
+				nr = migrate_vma_nr_pages(&src_pfns[i]);
+				src_pfns[i] &= ~MIGRATE_PFN_COMPOUND;
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+				goto next;
+			}
+
+			migrate_vma_insert_page(migrate, addr, &dst_pfns[i],
 						&src_pfns[i]);
-			continue;
+			goto next;
 		}
 
 		newfolio = page_folio(newpage);
 		folio = page_folio(page);
 		mapping = folio_mapping(folio);
 
+		/*
+		 * If THP migration is enabled, check if both src and dst
+		 * can migrate large pages
+		 */
+		if (thp_migration_supported()) {
+			if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
+				(src_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+				!(dst_pfns[i] & MIGRATE_PFN_COMPOUND)) {
+
+				if (!migrate) {
+					src_pfns[i] &= ~(MIGRATE_PFN_MIGRATE |
+							 MIGRATE_PFN_COMPOUND);
+					goto next;
+				}
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+			} else if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
+				(dst_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+				!(src_pfns[i] & MIGRATE_PFN_COMPOUND)) {
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+			}
+		}
+
+
 		if (folio_is_device_private(newfolio) ||
 		    folio_is_device_coherent(newfolio)) {
 			if (mapping) {
@@ -844,7 +1159,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 				if (!folio_test_anon(folio) ||
 				    !folio_free_swap(folio)) {
 					src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-					continue;
+					goto next;
 				}
 			}
 		} else if (folio_is_zone_device(newfolio)) {
@@ -852,7 +1167,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			 * Other types of ZONE_DEVICE page are not supported.
 			 */
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-			continue;
+			goto next;
 		}
 
 		BUG_ON(folio_test_writeback(folio));
@@ -864,6 +1179,8 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 		else
 			folio_migrate_flags(newfolio, folio);
+next:
+		i += nr;
 	}
 
 	if (notified)
@@ -1025,10 +1342,23 @@ static unsigned long migrate_device_pfn_lock(unsigned long pfn)
 int migrate_device_range(unsigned long *src_pfns, unsigned long start,
 			unsigned long npages)
 {
-	unsigned long i, pfn;
+	unsigned long i, j, pfn;
+
+	for (pfn = start, i = 0; i < npages; pfn++, i++) {
+		struct page *page = pfn_to_page(pfn);
+		struct folio *folio = page_folio(page);
+		unsigned int nr = 1;
 
-	for (pfn = start, i = 0; i < npages; pfn++, i++)
 		src_pfns[i] = migrate_device_pfn_lock(pfn);
+		nr = folio_nr_pages(folio);
+		if (nr > 1) {
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+			for (j = 1; j < nr; j++)
+				src_pfns[i+j] = 0;
+			i += j - 1;
+			pfn += j - 1;
+		}
+	}
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
@@ -1046,10 +1376,22 @@ EXPORT_SYMBOL(migrate_device_range);
  */
 int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
 {
-	unsigned long i;
+	unsigned long i, j;
+
+	for (i = 0; i < npages; i++) {
+		struct page *page = pfn_to_page(src_pfns[i]);
+		struct folio *folio = page_folio(page);
+		unsigned int nr = 1;
 
-	for (i = 0; i < npages; i++)
 		src_pfns[i] = migrate_device_pfn_lock(src_pfns[i]);
+		nr = folio_nr_pages(folio);
+		if (nr > 1) {
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+			for (j = 1; j < nr; j++)
+				src_pfns[i+j] = 0;
+			i += j - 1;
+		}
+	}
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
-- 
2.50.1
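
As a rough usage sketch (hypothetical caller, not taken from this series):
a driver that wants compound collection passes MIGRATE_VMA_SELECT_COMPOUND
to migrate_vma_setup() and then checks for MIGRATE_PFN_COMPOUND in the
collected src entries. Device-side allocation and the data copy are elided
and the function name is made up for illustration.

#include <linux/migrate.h>
#include <linux/huge_mm.h>
#include <linux/slab.h>

/* Hypothetical caller: migrate one PMD-aligned range, preferring THP. */
static int example_migrate_to_device(struct vm_area_struct *vma,
				     unsigned long start, void *owner)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= start + HPAGE_PMD_SIZE,
		.pgmap_owner	= owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM |
				  MIGRATE_VMA_SELECT_COMPOUND,
	};
	int ret;

	args.src = kcalloc(HPAGE_PMD_NR, sizeof(*args.src), GFP_KERNEL);
	args.dst = kcalloc(HPAGE_PMD_NR, sizeof(*args.dst), GFP_KERNEL);
	if (!args.src || !args.dst) {
		ret = -ENOMEM;
		goto out;
	}

	ret = migrate_vma_setup(&args);
	if (ret)
		goto out;

	/*
	 * If args.src[0] has MIGRATE_PFN_COMPOUND set, the whole range was
	 * collected as one THP: allocate a single large device page and set
	 * MIGRATE_PFN_COMPOUND in args.dst[0]; otherwise fill args.dst[]
	 * with base pages. (Device allocation and data copy elided.)
	 */

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
out:
	kfree(args.src);
	kfree(args.dst);
	return ret;
}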



^ permalink raw reply	[flat|nested] 57+ messages in thread

* [v6 07/15] mm/memory/fault: add THP fault handling for zone device private pages
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (5 preceding siblings ...)
  2025-09-16 12:21 ` [v6 06/15] mm/migrate_device: implement THP migration of zone device pages Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-25 10:11   ` David Hildenbrand
  2025-09-16 12:21 ` [v6 08/15] lib/test_hmm: add zone device private THP test infrastructure Balbir Singh
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Implement CPU fault handling for zone device THP entries through
do_huge_pmd_device_private(), enabling transparent migration of
device-private large pages back to system memory on CPU access.

When the CPU accesses a zone device THP entry, the fault handler calls the
device driver's migrate_to_ram() callback to migrate the entire large page
back to system memory.
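
As a rough sketch of the driver side (hypothetical code, loosely following
the lib/test_hmm changes later in this series): the migrate_to_ram()
callback can now see the faulting page as part of a large folio and migrate
the whole PMD-sized range in one pass.

#include <linux/migrate.h>
#include <linux/memremap.h>
#include <linux/slab.h>

/* Hypothetical migrate_to_ram() callback handling a device-private THP. */
static vm_fault_t example_migrate_to_ram(struct vm_fault *vmf)
{
	unsigned int order = folio_order(page_folio(vmf->page));
	unsigned long npages = 1UL << order;
	struct migrate_vma args = { 0 };
	vm_fault_t ret = 0;

	args.vma = vmf->vma;
	args.start = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
	args.end = args.start + (PAGE_SIZE << order);
	args.src = kcalloc(npages, sizeof(*args.src), GFP_KERNEL);
	args.dst = kcalloc(npages, sizeof(*args.dst), GFP_KERNEL);
	args.fault_page = vmf->page;
	args.pgmap_owner = NULL;	/* the driver's pgmap owner */
	args.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
	if (order)
		args.flags |= MIGRATE_VMA_SELECT_COMPOUND;

	if (!args.src || !args.dst) {
		ret = VM_FAULT_OOM;
		goto out;
	}
	if (migrate_vma_setup(&args)) {
		ret = VM_FAULT_SIGBUS;
		goto out;
	}

	/* Allocate system folio(s), fill args.dst[] and copy the data. */

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
out:
	kfree(args.src);
	kfree(args.dst);
	return ret;
}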

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/linux/huge_mm.h |  7 +++++++
 mm/huge_memory.c        | 36 ++++++++++++++++++++++++++++++++++++
 mm/memory.c             |  5 +++--
 3 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f327d62fc985..2d669be7f1c8 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -496,6 +496,8 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
+vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf);
+
 extern struct folio *huge_zero_folio;
 extern unsigned long huge_zero_pfn;
 
@@ -671,6 +673,11 @@ static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	return 0;
 }
 
+static inline vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
+{
+	return 0;
+}
+
 static inline bool is_huge_zero_folio(const struct folio *folio)
 {
 	return false;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5291ee155a02..90a1939455dd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1287,6 +1287,42 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 
 }
 
+vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	vm_fault_t ret = 0;
+	spinlock_t *ptl;
+	swp_entry_t swp_entry;
+	struct page *page;
+
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+		vma_end_read(vma);
+		return VM_FAULT_RETRY;
+	}
+
+	ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+	if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd))) {
+		spin_unlock(ptl);
+		return 0;
+	}
+
+	swp_entry = pmd_to_swp_entry(vmf->orig_pmd);
+	page = pfn_swap_entry_to_page(swp_entry);
+	vmf->page = page;
+	vmf->pte = NULL;
+	if (trylock_page(vmf->page)) {
+		get_page(page);
+		spin_unlock(ptl);
+		ret = page_pgmap(page)->ops->migrate_to_ram(vmf);
+		unlock_page(vmf->page);
+		put_page(page);
+	} else {
+		spin_unlock(ptl);
+	}
+
+	return ret;
+}
+
 /*
  * always: directly stall for all thp allocations
  * defer: wake kswapd and fail if not immediately available
diff --git a/mm/memory.c b/mm/memory.c
index 39ed698dfc37..912c4f3367a4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6332,8 +6332,9 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		vmf.orig_pmd = pmdp_get_lockless(vmf.pmd);
 
 		if (unlikely(is_swap_pmd(vmf.orig_pmd))) {
-			VM_BUG_ON(thp_migration_supported() &&
-					  !is_pmd_migration_entry(vmf.orig_pmd));
+			if (is_pmd_device_private_entry(vmf.orig_pmd))
+				return do_huge_pmd_device_private(&vmf);
+
 			if (is_pmd_migration_entry(vmf.orig_pmd))
 				pmd_migration_entry_wait(mm, vmf.pmd);
 			return 0;
-- 
2.50.1



^ permalink raw reply	[flat|nested] 57+ messages in thread

* [v6 08/15] lib/test_hmm: add zone device private THP test infrastructure
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (6 preceding siblings ...)
  2025-09-16 12:21 ` [v6 07/15] mm/memory/fault: add THP fault handling for zone device private pages Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 09/15] mm/memremap: add driver callback support for folio splitting Balbir Singh
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Enhance the hmm test driver (lib/test_hmm) with support for THP pages.

A new free_folios pool has been added to the dmirror device; folios are
allocated from it when a THP zone device private page is requested.

Add compound page awareness to the allocation functions used for normal
migration and for fault-based migration.  These routines now copy
folio_nr_pages() pages worth of data when moving between system memory and
device memory.

args.src and args.dst, which hold the migration entries, are now
dynamically allocated (they need to hold HPAGE_PMD_NR entries or more).

Split and migrate support will be added in future patches in this series.
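
For readers unfamiliar with the driver, the pool bookkeeping described
above boils down to a second free list threaded through zone_device_data;
below is a condensed sketch with illustrative names, not the exact
test_hmm code (zone_device_folio_init() is the helper used by this
series).

#include <linux/memremap.h>
#include <linux/huge_mm.h>
#include <linux/spinlock.h>

/* Condensed sketch of the two free lists kept by the fake device. */
struct example_device {
	spinlock_t	lock;
	struct page	*free_pages;	/* order-0 device-private pages   */
	struct folio	*free_folios;	/* PMD-order device-private folios */
};

static struct page *example_alloc_dpage(struct example_device *dev,
					bool is_large)
{
	struct page *dpage = NULL;

	spin_lock(&dev->lock);
	if (is_large && dev->free_folios) {
		dpage = folio_page(dev->free_folios, 0);
		dev->free_folios = dpage->zone_device_data;
	} else if (!is_large && dev->free_pages) {
		dpage = dev->free_pages;
		dev->free_pages = dpage->zone_device_data;
	}
	spin_unlock(&dev->lock);

	if (dpage)
		zone_device_folio_init(page_folio(dpage),
				       is_large ? HPAGE_PMD_ORDER : 0);
	return dpage;
}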

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/linux/memremap.h |  12 ++
 lib/test_hmm.c           | 368 +++++++++++++++++++++++++++++++--------
 2 files changed, 304 insertions(+), 76 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 9c20327c2be5..75987a8cfc6b 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -177,6 +177,18 @@ static inline bool folio_is_pci_p2pdma(const struct folio *folio)
 		folio->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
 }
 
+static inline void *folio_zone_device_data(const struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(!folio_is_device_private(folio), folio);
+	return folio->page.zone_device_data;
+}
+
+static inline void folio_set_zone_device_data(struct folio *folio, void *data)
+{
+	VM_WARN_ON_FOLIO(!folio_is_device_private(folio), folio);
+	folio->page.zone_device_data = data;
+}
+
 static inline bool is_pci_p2pdma_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 83e3d8208a54..50e175edc58a 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -119,6 +119,7 @@ struct dmirror_device {
 	unsigned long		calloc;
 	unsigned long		cfree;
 	struct page		*free_pages;
+	struct folio		*free_folios;
 	spinlock_t		lock;		/* protects the above */
 };
 
@@ -492,7 +493,7 @@ static int dmirror_write(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
 }
 
 static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
-				   struct page **ppage)
+				  struct page **ppage, bool is_large)
 {
 	struct dmirror_chunk *devmem;
 	struct resource *res = NULL;
@@ -572,20 +573,45 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
 		pfn_first, pfn_last);
 
 	spin_lock(&mdevice->lock);
-	for (pfn = pfn_first; pfn < pfn_last; pfn++) {
+	for (pfn = pfn_first; pfn < pfn_last; ) {
 		struct page *page = pfn_to_page(pfn);
 
+		if (is_large && IS_ALIGNED(pfn, HPAGE_PMD_NR)
+			&& (pfn + HPAGE_PMD_NR <= pfn_last)) {
+			page->zone_device_data = mdevice->free_folios;
+			mdevice->free_folios = page_folio(page);
+			pfn += HPAGE_PMD_NR;
+			continue;
+		}
+
 		page->zone_device_data = mdevice->free_pages;
 		mdevice->free_pages = page;
+		pfn++;
 	}
+
+	ret = 0;
 	if (ppage) {
-		*ppage = mdevice->free_pages;
-		mdevice->free_pages = (*ppage)->zone_device_data;
-		mdevice->calloc++;
+		if (is_large) {
+			if (!mdevice->free_folios) {
+				ret = -ENOMEM;
+				goto err_unlock;
+			}
+			*ppage = folio_page(mdevice->free_folios, 0);
+			mdevice->free_folios = (*ppage)->zone_device_data;
+			mdevice->calloc += HPAGE_PMD_NR;
+		} else if (mdevice->free_pages) {
+			*ppage = mdevice->free_pages;
+			mdevice->free_pages = (*ppage)->zone_device_data;
+			mdevice->calloc++;
+		} else {
+			ret = -ENOMEM;
+			goto err_unlock;
+		}
 	}
+err_unlock:
 	spin_unlock(&mdevice->lock);
 
-	return 0;
+	return ret;
 
 err_release:
 	mutex_unlock(&mdevice->devmem_lock);
@@ -598,10 +624,13 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
 	return ret;
 }
 
-static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
+static struct page *dmirror_devmem_alloc_page(struct dmirror *dmirror,
+					      bool is_large)
 {
 	struct page *dpage = NULL;
 	struct page *rpage = NULL;
+	unsigned int order = is_large ? HPAGE_PMD_ORDER : 0;
+	struct dmirror_device *mdevice = dmirror->mdevice;
 
 	/*
 	 * For ZONE_DEVICE private type, this is a fake device so we allocate
@@ -610,49 +639,55 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
 	 * data and ignore rpage.
 	 */
 	if (dmirror_is_private_zone(mdevice)) {
-		rpage = alloc_page(GFP_HIGHUSER);
+		rpage = folio_page(folio_alloc(GFP_HIGHUSER, order), 0);
 		if (!rpage)
 			return NULL;
 	}
 	spin_lock(&mdevice->lock);
 
-	if (mdevice->free_pages) {
+	if (is_large && mdevice->free_folios) {
+		dpage = folio_page(mdevice->free_folios, 0);
+		mdevice->free_folios = dpage->zone_device_data;
+		mdevice->calloc += 1 << order;
+		spin_unlock(&mdevice->lock);
+	} else if (!is_large && mdevice->free_pages) {
 		dpage = mdevice->free_pages;
 		mdevice->free_pages = dpage->zone_device_data;
 		mdevice->calloc++;
 		spin_unlock(&mdevice->lock);
 	} else {
 		spin_unlock(&mdevice->lock);
-		if (dmirror_allocate_chunk(mdevice, &dpage))
+		if (dmirror_allocate_chunk(mdevice, &dpage, is_large))
 			goto error;
 	}
 
-	zone_device_page_init(dpage);
+	zone_device_folio_init(page_folio(dpage), order);
 	dpage->zone_device_data = rpage;
 	return dpage;
 
 error:
 	if (rpage)
-		__free_page(rpage);
+		__free_pages(rpage, order);
 	return NULL;
 }
 
 static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 					   struct dmirror *dmirror)
 {
-	struct dmirror_device *mdevice = dmirror->mdevice;
 	const unsigned long *src = args->src;
 	unsigned long *dst = args->dst;
 	unsigned long addr;
 
-	for (addr = args->start; addr < args->end; addr += PAGE_SIZE,
-						   src++, dst++) {
+	for (addr = args->start; addr < args->end; ) {
 		struct page *spage;
 		struct page *dpage;
 		struct page *rpage;
+		bool is_large = *src & MIGRATE_PFN_COMPOUND;
+		int write = (*src & MIGRATE_PFN_WRITE) ? MIGRATE_PFN_WRITE : 0;
+		unsigned long nr = 1;
 
 		if (!(*src & MIGRATE_PFN_MIGRATE))
-			continue;
+			goto next;
 
 		/*
 		 * Note that spage might be NULL which is OK since it is an
@@ -662,17 +697,45 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 		if (WARN(spage && is_zone_device_page(spage),
 		     "page already in device spage pfn: 0x%lx\n",
 		     page_to_pfn(spage)))
+			goto next;
+
+		dpage = dmirror_devmem_alloc_page(dmirror, is_large);
+		if (!dpage) {
+			struct folio *folio;
+			unsigned long i;
+			unsigned long spfn = *src >> MIGRATE_PFN_SHIFT;
+			struct page *src_page;
+
+			if (!is_large)
+				goto next;
+
+			if (!spage && is_large) {
+				nr = HPAGE_PMD_NR;
+			} else {
+				folio = page_folio(spage);
+				nr = folio_nr_pages(folio);
+			}
+
+			for (i = 0; i < nr && addr < args->end; i++) {
+				dpage = dmirror_devmem_alloc_page(dmirror, false);
+				rpage = BACKING_PAGE(dpage);
+				rpage->zone_device_data = dmirror;
+
+				*dst = migrate_pfn(page_to_pfn(dpage)) | write;
+				src_page = pfn_to_page(spfn + i);
+
+				if (spage)
+					copy_highpage(rpage, src_page);
+				else
+					clear_highpage(rpage);
+				src++;
+				dst++;
+				addr += PAGE_SIZE;
+			}
 			continue;
-
-		dpage = dmirror_devmem_alloc_page(mdevice);
-		if (!dpage)
-			continue;
+		}
 
 		rpage = BACKING_PAGE(dpage);
-		if (spage)
-			copy_highpage(rpage, spage);
-		else
-			clear_highpage(rpage);
 
 		/*
 		 * Normally, a device would use the page->zone_device_data to
@@ -684,10 +747,42 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 
 		pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n",
 			 page_to_pfn(spage), page_to_pfn(dpage));
-		*dst = migrate_pfn(page_to_pfn(dpage));
-		if ((*src & MIGRATE_PFN_WRITE) ||
-		    (!spage && args->vma->vm_flags & VM_WRITE))
-			*dst |= MIGRATE_PFN_WRITE;
+
+		*dst = migrate_pfn(page_to_pfn(dpage)) | write;
+
+		if (is_large) {
+			int i;
+			struct folio *folio = page_folio(dpage);
+			*dst |= MIGRATE_PFN_COMPOUND;
+
+			if (folio_test_large(folio)) {
+				for (i = 0; i < folio_nr_pages(folio); i++) {
+					struct page *dst_page =
+						pfn_to_page(page_to_pfn(rpage) + i);
+					struct page *src_page =
+						pfn_to_page(page_to_pfn(spage) + i);
+
+					if (spage)
+						copy_highpage(dst_page, src_page);
+					else
+						clear_highpage(dst_page);
+					src++;
+					dst++;
+					addr += PAGE_SIZE;
+				}
+				continue;
+			}
+		}
+
+		if (spage)
+			copy_highpage(rpage, spage);
+		else
+			clear_highpage(rpage);
+
+next:
+		src++;
+		dst++;
+		addr += PAGE_SIZE;
 	}
 }
 
@@ -734,14 +829,17 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 	const unsigned long *src = args->src;
 	const unsigned long *dst = args->dst;
 	unsigned long pfn;
+	const unsigned long start_pfn = start >> PAGE_SHIFT;
+	const unsigned long end_pfn = end >> PAGE_SHIFT;
 
 	/* Map the migrated pages into the device's page tables. */
 	mutex_lock(&dmirror->mutex);
 
-	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++,
-								src++, dst++) {
+	for (pfn = start_pfn; pfn < end_pfn; pfn++, src++, dst++) {
 		struct page *dpage;
 		void *entry;
+		int nr, i;
+		struct page *rpage;
 
 		if (!(*src & MIGRATE_PFN_MIGRATE))
 			continue;
@@ -750,13 +848,25 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 		if (!dpage)
 			continue;
 
-		entry = BACKING_PAGE(dpage);
-		if (*dst & MIGRATE_PFN_WRITE)
-			entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE);
-		entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC);
-		if (xa_is_err(entry)) {
-			mutex_unlock(&dmirror->mutex);
-			return xa_err(entry);
+		if (*dst & MIGRATE_PFN_COMPOUND)
+			nr = folio_nr_pages(page_folio(dpage));
+		else
+			nr = 1;
+
+		WARN_ON_ONCE(end_pfn < start_pfn + nr);
+
+		rpage = BACKING_PAGE(dpage);
+		VM_WARN_ON(folio_nr_pages(page_folio(rpage)) != nr);
+
+		for (i = 0; i < nr; i++) {
+			entry = folio_page(page_folio(rpage), i);
+			if (*dst & MIGRATE_PFN_WRITE)
+				entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE);
+			entry = xa_store(&dmirror->pt, pfn + i, entry, GFP_ATOMIC);
+			if (xa_is_err(entry)) {
+				mutex_unlock(&dmirror->mutex);
+				return xa_err(entry);
+			}
 		}
 	}
 
@@ -829,31 +939,66 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 	unsigned long start = args->start;
 	unsigned long end = args->end;
 	unsigned long addr;
+	unsigned int order = 0;
+	int i;
 
-	for (addr = start; addr < end; addr += PAGE_SIZE,
-				       src++, dst++) {
+	for (addr = start; addr < end; ) {
 		struct page *dpage, *spage;
 
 		spage = migrate_pfn_to_page(*src);
-		if (!spage || !(*src & MIGRATE_PFN_MIGRATE))
-			continue;
+		if (!spage || !(*src & MIGRATE_PFN_MIGRATE)) {
+			addr += PAGE_SIZE;
+			goto next;
+		}
 
 		if (WARN_ON(!is_device_private_page(spage) &&
-			    !is_device_coherent_page(spage)))
-			continue;
+			    !is_device_coherent_page(spage))) {
+			addr += PAGE_SIZE;
+			goto next;
+		}
+
 		spage = BACKING_PAGE(spage);
-		dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
-		if (!dpage)
-			continue;
-		pr_debug("migrating from dev to sys pfn src: 0x%lx pfn dst: 0x%lx\n",
-			 page_to_pfn(spage), page_to_pfn(dpage));
+		order = folio_order(page_folio(spage));
 
+		if (order)
+			dpage = folio_page(vma_alloc_folio(GFP_HIGHUSER_MOVABLE,
+						order, args->vma, addr), 0);
+		else
+			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+
+		/* Try with smaller pages if large allocation fails */
+		if (!dpage && order) {
+			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+			if (!dpage)
+				return VM_FAULT_OOM;
+			order = 0;
+		}
+
+		pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n",
+				page_to_pfn(spage), page_to_pfn(dpage));
 		lock_page(dpage);
 		xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
 		copy_highpage(dpage, spage);
 		*dst = migrate_pfn(page_to_pfn(dpage));
 		if (*src & MIGRATE_PFN_WRITE)
 			*dst |= MIGRATE_PFN_WRITE;
+		if (order)
+			*dst |= MIGRATE_PFN_COMPOUND;
+
+		for (i = 0; i < (1 << order); i++) {
+			struct page *src_page;
+			struct page *dst_page;
+
+			src_page = pfn_to_page(page_to_pfn(spage) + i);
+			dst_page = pfn_to_page(page_to_pfn(dpage) + i);
+
+			xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
+			copy_highpage(dst_page, src_page);
+		}
+next:
+		addr += PAGE_SIZE << order;
+		src += 1 << order;
+		dst += 1 << order;
 	}
 	return 0;
 }
@@ -879,11 +1024,14 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
 	struct vm_area_struct *vma;
-	unsigned long src_pfns[32] = { 0 };
-	unsigned long dst_pfns[32] = { 0 };
 	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
+	unsigned long *src_pfns;
+	unsigned long *dst_pfns;
+
+	src_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
+	dst_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
 
 	start = cmd->addr;
 	end = start + size;
@@ -902,7 +1050,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 			ret = -EINVAL;
 			goto out;
 		}
-		next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT));
+		next = min(end, addr + (PTRS_PER_PTE << PAGE_SHIFT));
 		if (next > vma->vm_end)
 			next = vma->vm_end;
 
@@ -912,7 +1060,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 		args.start = addr;
 		args.end = next;
 		args.pgmap_owner = dmirror->mdevice;
-		args.flags = dmirror_select_device(dmirror);
+		args.flags = dmirror_select_device(dmirror) | MIGRATE_VMA_SELECT_COMPOUND;
 
 		ret = migrate_vma_setup(&args);
 		if (ret)
@@ -928,6 +1076,8 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 out:
 	mmap_read_unlock(mm);
 	mmput(mm);
+	kvfree(src_pfns);
+	kvfree(dst_pfns);
 
 	return ret;
 }
@@ -939,12 +1089,12 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
 	struct vm_area_struct *vma;
-	unsigned long src_pfns[32] = { 0 };
-	unsigned long dst_pfns[32] = { 0 };
 	struct dmirror_bounce bounce;
 	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
+	unsigned long *src_pfns = NULL;
+	unsigned long *dst_pfns = NULL;
 
 	start = cmd->addr;
 	end = start + size;
@@ -955,6 +1105,18 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	if (!mmget_not_zero(mm))
 		return -EINVAL;
 
+	ret = -ENOMEM;
+	src_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*src_pfns),
+			  GFP_KERNEL | __GFP_NOFAIL);
+	if (!src_pfns)
+		goto free_mem;
+
+	dst_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*dst_pfns),
+			  GFP_KERNEL | __GFP_NOFAIL);
+	if (!dst_pfns)
+		goto free_mem;
+
+	ret = 0;
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
 		vma = vma_lookup(mm, addr);
@@ -962,7 +1124,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 			ret = -EINVAL;
 			goto out;
 		}
-		next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT));
+		next = min(end, addr + (PTRS_PER_PTE << PAGE_SHIFT));
 		if (next > vma->vm_end)
 			next = vma->vm_end;
 
@@ -972,7 +1134,8 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 		args.start = addr;
 		args.end = next;
 		args.pgmap_owner = dmirror->mdevice;
-		args.flags = MIGRATE_VMA_SELECT_SYSTEM;
+		args.flags = MIGRATE_VMA_SELECT_SYSTEM |
+				MIGRATE_VMA_SELECT_COMPOUND;
 		ret = migrate_vma_setup(&args);
 		if (ret)
 			goto out;
@@ -992,7 +1155,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	 */
 	ret = dmirror_bounce_init(&bounce, start, size);
 	if (ret)
-		return ret;
+		goto free_mem;
 	mutex_lock(&dmirror->mutex);
 	ret = dmirror_do_read(dmirror, start, end, &bounce);
 	mutex_unlock(&dmirror->mutex);
@@ -1003,11 +1166,14 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	}
 	cmd->cpages = bounce.cpages;
 	dmirror_bounce_fini(&bounce);
-	return ret;
+	goto free_mem;
 
 out:
 	mmap_read_unlock(mm);
 	mmput(mm);
+free_mem:
+	kfree(src_pfns);
+	kfree(dst_pfns);
 	return ret;
 }
 
@@ -1200,6 +1366,7 @@ static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
 	unsigned long i;
 	unsigned long *src_pfns;
 	unsigned long *dst_pfns;
+	unsigned int order = 0;
 
 	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
 	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
@@ -1215,13 +1382,25 @@ static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
 		if (WARN_ON(!is_device_private_page(spage) &&
 			    !is_device_coherent_page(spage)))
 			continue;
+
+		order = folio_order(page_folio(spage));
 		spage = BACKING_PAGE(spage);
-		dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+		if (src_pfns[i] & MIGRATE_PFN_COMPOUND) {
+			dpage = folio_page(folio_alloc(GFP_HIGHUSER_MOVABLE,
+					      order), 0);
+		} else {
+			dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+			order = 0;
+		}
+
+		/* TODO Support splitting here */
 		lock_page(dpage);
-		copy_highpage(dpage, spage);
 		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
 		if (src_pfns[i] & MIGRATE_PFN_WRITE)
 			dst_pfns[i] |= MIGRATE_PFN_WRITE;
+		if (order)
+			dst_pfns[i] |= MIGRATE_PFN_COMPOUND;
+		folio_copy(page_folio(dpage), page_folio(spage));
 	}
 	migrate_device_pages(src_pfns, dst_pfns, npages);
 	migrate_device_finalize(src_pfns, dst_pfns, npages);
@@ -1234,7 +1413,12 @@ static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
 {
 	struct dmirror_device *mdevice = devmem->mdevice;
 	struct page *page;
+	struct folio *folio;
+
 
+	for (folio = mdevice->free_folios; folio; folio = folio_zone_device_data(folio))
+		if (dmirror_page_to_chunk(folio_page(folio, 0)) == devmem)
+			mdevice->free_folios = folio_zone_device_data(folio);
 	for (page = mdevice->free_pages; page; page = page->zone_device_data)
 		if (dmirror_page_to_chunk(page) == devmem)
 			mdevice->free_pages = page->zone_device_data;
@@ -1265,6 +1449,7 @@ static void dmirror_device_remove_chunks(struct dmirror_device *mdevice)
 		mdevice->devmem_count = 0;
 		mdevice->devmem_capacity = 0;
 		mdevice->free_pages = NULL;
+		mdevice->free_folios = NULL;
 		kfree(mdevice->devmem_chunks);
 		mdevice->devmem_chunks = NULL;
 	}
@@ -1378,18 +1563,30 @@ static void dmirror_devmem_free(struct page *page)
 {
 	struct page *rpage = BACKING_PAGE(page);
 	struct dmirror_device *mdevice;
+	struct folio *folio = page_folio(rpage);
+	unsigned int order = folio_order(folio);
 
-	if (rpage != page)
-		__free_page(rpage);
+	if (rpage != page) {
+		if (order)
+			__free_pages(rpage, order);
+		else
+			__free_page(rpage);
+		rpage = NULL;
+	}
 
 	mdevice = dmirror_page_to_device(page);
 	spin_lock(&mdevice->lock);
 
 	/* Return page to our allocator if not freeing the chunk */
 	if (!dmirror_page_to_chunk(page)->remove) {
-		mdevice->cfree++;
-		page->zone_device_data = mdevice->free_pages;
-		mdevice->free_pages = page;
+		mdevice->cfree += 1 << order;
+		if (order) {
+			page->zone_device_data = mdevice->free_folios;
+			mdevice->free_folios = page_folio(page);
+		} else {
+			page->zone_device_data = mdevice->free_pages;
+			mdevice->free_pages = page;
+		}
 	}
 	spin_unlock(&mdevice->lock);
 }
@@ -1397,36 +1594,52 @@ static void dmirror_devmem_free(struct page *page)
 static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 {
 	struct migrate_vma args = { 0 };
-	unsigned long src_pfns = 0;
-	unsigned long dst_pfns = 0;
 	struct page *rpage;
 	struct dmirror *dmirror;
-	vm_fault_t ret;
+	vm_fault_t ret = 0;
+	unsigned int order, nr;
 
 	/*
 	 * Normally, a device would use the page->zone_device_data to point to
 	 * the mirror but here we use it to hold the page for the simulated
 	 * device memory and that page holds the pointer to the mirror.
 	 */
-	rpage = vmf->page->zone_device_data;
+	rpage = folio_zone_device_data(page_folio(vmf->page));
 	dmirror = rpage->zone_device_data;
 
 	/* FIXME demonstrate how we can adjust migrate range */
+	order = folio_order(page_folio(vmf->page));
+	nr = 1 << order;
+
+	/*
+	 * Consider a per-cpu cache of src and dst pfns, but with
+	 * large number of cpus that might not scale well.
+	 */
+	args.start = ALIGN_DOWN(vmf->address, (PAGE_SIZE << order));
 	args.vma = vmf->vma;
-	args.start = vmf->address;
-	args.end = args.start + PAGE_SIZE;
-	args.src = &src_pfns;
-	args.dst = &dst_pfns;
+	args.end = args.start + (PAGE_SIZE << order);
+
+	nr = (args.end - args.start) >> PAGE_SHIFT;
+	args.src = kcalloc(nr, sizeof(unsigned long), GFP_KERNEL);
+	args.dst = kcalloc(nr, sizeof(unsigned long), GFP_KERNEL);
 	args.pgmap_owner = dmirror->mdevice;
 	args.flags = dmirror_select_device(dmirror);
 	args.fault_page = vmf->page;
 
+	if (!args.src || !args.dst) {
+		ret = VM_FAULT_OOM;
+		goto err;
+	}
+
+	if (order)
+		args.flags |= MIGRATE_VMA_SELECT_COMPOUND;
+
 	if (migrate_vma_setup(&args))
 		return VM_FAULT_SIGBUS;
 
 	ret = dmirror_devmem_fault_alloc_and_copy(&args, dmirror);
 	if (ret)
-		return ret;
+		goto err;
 	migrate_vma_pages(&args);
 	/*
 	 * No device finalize step is needed since
@@ -1434,7 +1647,10 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	 * invalidated the device page table.
 	 */
 	migrate_vma_finalize(&args);
-	return 0;
+err:
+	kfree(args.src);
+	kfree(args.dst);
+	return ret;
 }
 
 static const struct dev_pagemap_ops dmirror_devmem_ops = {
@@ -1465,7 +1681,7 @@ static int dmirror_device_init(struct dmirror_device *mdevice, int id)
 		return ret;
 
 	/* Build a list of free ZONE_DEVICE struct pages */
-	return dmirror_allocate_chunk(mdevice, NULL);
+	return dmirror_allocate_chunk(mdevice, NULL, false);
 }
 
 static void dmirror_device_remove(struct dmirror_device *mdevice)
-- 
2.50.1



^ permalink raw reply	[flat|nested] 57+ messages in thread

* [v6 09/15] mm/memremap: add driver callback support for folio splitting
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (7 preceding siblings ...)
  2025-09-16 12:21 ` [v6 08/15] lib/test_hmm: add zone device private THP test infrastructure Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 10/15] mm/migrate_device: add THP splitting during migration Balbir Singh
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

When a zone device page is split (via a huge pmd folio split), the
folio_split driver callback is invoked to let the device driver know that
the folio has been split into a smaller order.

Provide a default implementation, used when a driver does not supply this
callback, that copies the pgmap and mapping fields to the split folios.

Update the HMM test driver to handle the split.
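
As a usage sketch (hypothetical driver, not the test driver code below), a
driver that keeps per-folio state wires the callback into its
dev_pagemap_ops and fixes that state up for every tail folio produced by
the split; the core additionally calls the callback once with a NULL tail
for the now-smaller original folio.

#include <linux/memremap.h>

/* Hypothetical driver callback: propagate per-folio state on split. */
static void example_folio_split(struct folio *head, struct folio *tail)
{
	if (!tail)
		return;	/* final call for the (now smaller) head folio */

	/* What the default implementation would do for us: */
	tail->pgmap = head->pgmap;
	tail->page.mapping = head->page.mapping;

	/* Plus driver-private state, e.g. a backing-store pointer. */
	tail->page.zone_device_data = head->page.zone_device_data;
}

static const struct dev_pagemap_ops example_devmem_ops = {
	/* .page_free and .migrate_to_ram omitted for brevity */
	.folio_split	= example_folio_split,
};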

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/linux/memremap.h | 29 +++++++++++++++++++++++++++++
 include/linux/mm.h       |  1 +
 lib/test_hmm.c           | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 65 insertions(+)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 75987a8cfc6b..ba95c31a7251 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -100,6 +100,13 @@ struct dev_pagemap_ops {
 	 */
 	int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn,
 			      unsigned long nr_pages, int mf_flags);
+
+	/*
+	 * Used for private (un-addressable) device memory only.
+	 * This callback is used when a folio is split into
+	 * a smaller folio
+	 */
+	void (*folio_split)(struct folio *head, struct folio *tail);
 };
 
 #define PGMAP_ALTMAP_VALID	(1 << 0)
@@ -235,6 +242,23 @@ static inline void zone_device_page_init(struct page *page)
 	zone_device_folio_init(folio, 0);
 }
 
+static inline void zone_device_private_split_cb(struct folio *original_folio,
+						struct folio *new_folio)
+{
+	if (folio_is_device_private(original_folio)) {
+		if (!original_folio->pgmap->ops->folio_split) {
+			if (new_folio) {
+				new_folio->pgmap = original_folio->pgmap;
+				new_folio->page.mapping =
+					original_folio->page.mapping;
+			}
+		} else {
+			original_folio->pgmap->ops->folio_split(original_folio,
+								 new_folio);
+		}
+	}
+}
+
 #else
 static inline void *devm_memremap_pages(struct device *dev,
 		struct dev_pagemap *pgmap)
@@ -268,6 +292,11 @@ static inline unsigned long memremap_compat_align(void)
 {
 	return PAGE_SIZE;
 }
+
+static inline void zone_device_private_split_cb(struct folio *original_folio,
+						struct folio *new_folio)
+{
+}
 #endif /* CONFIG_ZONE_DEVICE */
 
 static inline void put_dev_pagemap(struct dev_pagemap *pgmap)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d004fb7d805d..be3e6fb4d0db 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1265,6 +1265,7 @@ static inline struct folio *virt_to_folio(const void *x)
 void __folio_put(struct folio *folio);
 
 void split_page(struct page *page, unsigned int order);
+void prep_compound_page(struct page *page, unsigned int order);
 void folio_copy(struct folio *dst, struct folio *src);
 int folio_mc_copy(struct folio *dst, struct folio *src);
 
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 50e175edc58a..41092c065c2d 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1653,9 +1653,44 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
+static void dmirror_devmem_folio_split(struct folio *head, struct folio *tail)
+{
+	struct page *rpage = BACKING_PAGE(folio_page(head, 0));
+	struct page *rpage_tail;
+	struct folio *rfolio;
+	unsigned long offset = 0;
+
+	if (!rpage) {
+		tail->page.zone_device_data = NULL;
+		return;
+	}
+
+	rfolio = page_folio(rpage);
+
+	if (tail == NULL) {
+		folio_reset_order(rfolio);
+		rfolio->mapping = NULL;
+		folio_set_count(rfolio, 1);
+		return;
+	}
+
+	offset = folio_pfn(tail) - folio_pfn(head);
+
+	rpage_tail = folio_page(rfolio, offset);
+	tail->page.zone_device_data = rpage_tail;
+	rpage_tail->zone_device_data = rpage->zone_device_data;
+	clear_compound_head(rpage_tail);
+	rpage_tail->mapping = NULL;
+
+	folio_page(tail, 0)->mapping = folio_page(head, 0)->mapping;
+	tail->pgmap = head->pgmap;
+	folio_set_count(page_folio(rpage_tail), 1);
+}
+
 static const struct dev_pagemap_ops dmirror_devmem_ops = {
 	.page_free	= dmirror_devmem_free,
 	.migrate_to_ram	= dmirror_devmem_fault,
+	.folio_split	= dmirror_devmem_folio_split,
 };
 
 static int dmirror_device_init(struct dmirror_device *mdevice, int id)
-- 
2.50.1



^ permalink raw reply	[flat|nested] 57+ messages in thread

* [v6 10/15] mm/migrate_device: add THP splitting during migration
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (8 preceding siblings ...)
  2025-09-16 12:21 ` [v6 09/15] mm/memremap: add driver callback support for folio splitting Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 11/15] lib/test_hmm: add large page allocation failure testing Balbir Singh
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Implement migrate_vma_split_pages() to handle THP splitting during the
migration process when the destination cannot allocate compound pages.

This addresses the common scenario where migrate_vma_setup() succeeds with
MIGRATE_PFN_COMPOUND pages, but the destination device cannot allocate
large pages during the migration phase.

Key changes:
- migrate_vma_split_pages(): Split already-isolated pages during migration
- Enhanced folio_split() and __split_unmapped_folio() with an unmapped
  parameter to avoid redundant unmap/remap operations

This provides a fallback mechanism to ensure migration succeeds even when
large page allocation fails at the destination.
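
From a driver's point of view the fallback is transparent: if a large
destination page cannot be allocated, the driver fills the dst array with
base-page entries (no MIGRATE_PFN_COMPOUND) and the core splits the source
THP during migrate_vma_pages(). A condensed, hypothetical sketch of that
driver-side decision follows; example_alloc_large() and
example_alloc_page() are placeholders for driver-specific allocators.

#include <linux/migrate.h>
#include <linux/huge_mm.h>

/* Placeholders for driver-specific device memory allocators. */
static struct page *example_alloc_large(void);
static struct page *example_alloc_page(void);

/* Hypothetical dst-side allocation between setup and pages/finalize. */
static void example_fill_dst(struct migrate_vma *args)
{
	unsigned long i, j;

	for (i = 0; i < args->npages; i++) {
		struct page *dpage;

		if (!(args->src[i] & MIGRATE_PFN_MIGRATE))
			continue;

		if (args->src[i] & MIGRATE_PFN_COMPOUND) {
			dpage = example_alloc_large();
			if (dpage) {
				args->dst[i] = migrate_pfn(page_to_pfn(dpage)) |
					       MIGRATE_PFN_COMPOUND;
			} else {
				/*
				 * Fall back to base pages; the core splits
				 * the source THP during migrate_vma_pages().
				 */
				for (j = 0; j < HPAGE_PMD_NR; j++) {
					dpage = example_alloc_page();
					if (dpage)
						args->dst[i + j] =
							migrate_pfn(page_to_pfn(dpage));
				}
			}
			i += HPAGE_PMD_NR - 1;
			continue;
		}

		dpage = example_alloc_page();
		if (dpage)
			args->dst[i] = migrate_pfn(page_to_pfn(dpage));
	}
}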

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/linux/huge_mm.h | 11 +++++--
 lib/test_hmm.c          |  9 ++++++
 mm/huge_memory.c        | 46 ++++++++++++++-------------
 mm/migrate_device.c     | 69 ++++++++++++++++++++++++++++++++++-------
 4 files changed, 101 insertions(+), 34 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2d669be7f1c8..a166be872628 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -365,8 +365,8 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
 		vm_flags_t vm_flags);
 
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
-int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order);
+int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order, bool unmapped);
 int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
 bool uniform_split_supported(struct folio *folio, unsigned int new_order,
@@ -375,6 +375,13 @@ bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
 		bool warns);
 int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
 		struct list_head *list);
+
+static inline int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
+{
+	return __split_huge_page_to_list_to_order(page, list, new_order, false);
+}
+
 /*
  * try_folio_split - try to split a @folio at @page using non uniform split.
  * @folio: folio to be split
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 41092c065c2d..6455707df902 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1611,6 +1611,15 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	order = folio_order(page_folio(vmf->page));
 	nr = 1 << order;
 
+	/*
+	 * When folios are partially mapped, we can't rely on the folio
+	 * order of vmf->page as the folio might not be fully split yet
+	 */
+	if (vmf->pte) {
+		order = 0;
+		nr = 1;
+	}
+
 	/*
 	 * Consider a per-cpu cache of src and dst pfns, but with
 	 * large number of cpus that might not scale well.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 90a1939455dd..5a587149e34a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3456,15 +3456,6 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 		new_folio->mapping = folio->mapping;
 		new_folio->index = folio->index + i;
 
-		/*
-		 * page->private should not be set in tail pages. Fix up and warn once
-		 * if private is unexpectedly set.
-		 */
-		if (unlikely(new_folio->private)) {
-			VM_WARN_ON_ONCE_PAGE(true, new_head);
-			new_folio->private = NULL;
-		}
-
 		if (folio_test_swapcache(folio))
 			new_folio->swap.val = folio->swap.val + i;
 
@@ -3693,6 +3684,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
  * @lock_at: a page within @folio to be left locked to caller
  * @list: after-split folios will be put on it if non NULL
  * @uniform_split: perform uniform split or not (non-uniform split)
+ * @unmapped: The pages are already unmapped, they are migration entries.
  *
  * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
  * It is in charge of checking whether the split is supported or not and
@@ -3708,7 +3700,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
  */
 static int __folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct page *lock_at,
-		struct list_head *list, bool uniform_split)
+		struct list_head *list, bool uniform_split, bool unmapped)
 {
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
 	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
@@ -3758,13 +3750,15 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		 * is taken to serialise against parallel split or collapse
 		 * operations.
 		 */
-		anon_vma = folio_get_anon_vma(folio);
-		if (!anon_vma) {
-			ret = -EBUSY;
-			goto out;
+		if (!unmapped) {
+			anon_vma = folio_get_anon_vma(folio);
+			if (!anon_vma) {
+				ret = -EBUSY;
+				goto out;
+			}
+			anon_vma_lock_write(anon_vma);
 		}
 		mapping = NULL;
-		anon_vma_lock_write(anon_vma);
 	} else {
 		unsigned int min_order;
 		gfp_t gfp;
@@ -3831,7 +3825,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		goto out_unlock;
 	}
 
-	unmap_folio(folio);
+	if (!unmapped)
+		unmap_folio(folio);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
@@ -3918,10 +3913,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 			next = folio_next(new_folio);
 
+			zone_device_private_split_cb(folio, new_folio);
+
 			expected_refs = folio_expected_ref_count(new_folio) + 1;
 			folio_ref_unfreeze(new_folio, expected_refs);
 
-			lru_add_split_folio(folio, new_folio, lruvec, list);
+			if (!unmapped)
+				lru_add_split_folio(folio, new_folio, lruvec, list);
 
 			/*
 			 * Anonymous folio with swap cache.
@@ -3952,6 +3950,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 			__filemap_remove_folio(new_folio, NULL);
 			folio_put_refs(new_folio, nr_pages);
 		}
+
+		zone_device_private_split_cb(folio, NULL);
 		/*
 		 * Unfreeze @folio only after all page cache entries, which
 		 * used to point to it, have been updated with new folios.
@@ -3975,6 +3975,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 	local_irq_enable();
 
+	if (unmapped)
+		return ret;
+
 	if (nr_shmem_dropped)
 		shmem_uncharge(mapping->host, nr_shmem_dropped);
 
@@ -4065,12 +4068,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
  * Returns -EINVAL when trying to split to an order that is incompatible
  * with the folio. Splitting to order 0 is compatible with all folios.
  */
-int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-				     unsigned int new_order)
+int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+				     unsigned int new_order, bool unmapped)
 {
 	struct folio *folio = page_folio(page);
 
-	return __folio_split(folio, new_order, &folio->page, page, list, true);
+	return __folio_split(folio, new_order, &folio->page, page, list, true,
+				unmapped);
 }
 
 /*
@@ -4099,7 +4103,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct list_head *list)
 {
 	return __folio_split(folio, new_order, split_at, &folio->page, list,
-			false);
+			false, false);
 }
 
 int min_order_for_split(struct folio *folio)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 1663ce553184..9f6a18269ff6 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -898,6 +898,29 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
 		src[i] &= ~MIGRATE_PFN_MIGRATE;
 	return 0;
 }
+
+static int migrate_vma_split_pages(struct migrate_vma *migrate,
+					unsigned long idx, unsigned long addr,
+					struct folio *folio)
+{
+	unsigned long i;
+	unsigned long pfn;
+	unsigned long flags;
+	int ret = 0;
+
+	folio_get(folio);
+	split_huge_pmd_address(migrate->vma, addr, true);
+	ret = __split_huge_page_to_list_to_order(folio_page(folio, 0), NULL,
+							0, true);
+	if (ret)
+		return ret;
+	migrate->src[idx] &= ~MIGRATE_PFN_COMPOUND;
+	flags = migrate->src[idx] & ((1UL << MIGRATE_PFN_SHIFT) - 1);
+	pfn = migrate->src[idx] >> MIGRATE_PFN_SHIFT;
+	for (i = 1; i < HPAGE_PMD_NR; i++)
+		migrate->src[i+idx] = migrate_pfn(pfn + i) | flags;
+	return ret;
+}
 #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
 static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
 					 unsigned long addr,
@@ -907,6 +930,13 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
 {
 	return 0;
 }
+
+static int migrate_vma_split_pages(struct migrate_vma *migrate,
+					unsigned long idx, unsigned long addr,
+					struct folio *folio)
+{
+	return 0;
+}
 #endif
 
 static unsigned long migrate_vma_nr_pages(unsigned long *src)
@@ -1068,8 +1098,9 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 				struct migrate_vma *migrate)
 {
 	struct mmu_notifier_range range;
-	unsigned long i;
+	unsigned long i, j;
 	bool notified = false;
+	unsigned long addr;
 
 	for (i = 0; i < npages; ) {
 		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
@@ -1111,12 +1142,16 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 				(!(dst_pfns[i] & MIGRATE_PFN_COMPOUND))) {
 				nr = migrate_vma_nr_pages(&src_pfns[i]);
 				src_pfns[i] &= ~MIGRATE_PFN_COMPOUND;
-				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-				goto next;
+			} else {
+				nr = 1;
 			}
 
-			migrate_vma_insert_page(migrate, addr, &dst_pfns[i],
-						&src_pfns[i]);
+			for (j = 0; j < nr && i + j < npages; j++) {
+				src_pfns[i+j] |= MIGRATE_PFN_MIGRATE;
+				migrate_vma_insert_page(migrate,
+					addr + j * PAGE_SIZE,
+					&dst_pfns[i+j], &src_pfns[i+j]);
+			}
 			goto next;
 		}
 
@@ -1138,7 +1173,14 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 							 MIGRATE_PFN_COMPOUND);
 					goto next;
 				}
-				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+				nr = 1 << folio_order(folio);
+				addr = migrate->start + i * PAGE_SIZE;
+				if (migrate_vma_split_pages(migrate, i, addr,
+								folio)) {
+					src_pfns[i] &= ~(MIGRATE_PFN_MIGRATE |
+							 MIGRATE_PFN_COMPOUND);
+					goto next;
+				}
 			} else if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
 				(dst_pfns[i] & MIGRATE_PFN_COMPOUND) &&
 				!(src_pfns[i] & MIGRATE_PFN_COMPOUND)) {
@@ -1174,11 +1216,16 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 
 		if (migrate && migrate->fault_page == page)
 			extra_cnt = 1;
-		r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
-		if (r)
-			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-		else
-			folio_migrate_flags(newfolio, folio);
+		for (j = 0; j < nr && i + j < npages; j++) {
+			folio = page_folio(migrate_pfn_to_page(src_pfns[i+j]));
+			newfolio = page_folio(migrate_pfn_to_page(dst_pfns[i+j]));
+
+			r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
+			if (r)
+				src_pfns[i+j] &= ~MIGRATE_PFN_MIGRATE;
+			else
+				folio_migrate_flags(newfolio, folio);
+		}
 next:
 		i += nr;
 	}
-- 
2.50.1




* [v6 11/15] lib/test_hmm: add large page allocation failure testing
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (9 preceding siblings ...)
  2025-09-16 12:21 ` [v6 10/15] mm/migrate_device: add THP splitting during migration Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 12/15] selftests/mm/hmm-tests: new tests for zone device THP migration Balbir Singh
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Add HMM_DMIRROR_FLAG_FAIL_ALLOC flag to simulate large page allocation
failures, enabling testing of split migration code paths.

This test flag allows validation of the fallback behavior when the
destination device cannot allocate compound pages, which is useful for
exercising the split migration functionality.
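
For reference, a minimal user-space sketch of arming the flag (assumptions:
the /dev/hmm_dmirror0 node created by the driver and the hmm_dmirror_cmd
layout from lib/test_hmm_uapi.h; the driver copies cmd.npages into
dmirror->flags, so the flag value travels in that field):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include "test_hmm_uapi.h"

/* Arm HMM_DMIRROR_FLAG_FAIL_ALLOC so the next large allocation fails. */
static int dmirror_fail_next_large_alloc(int fd)
{
	struct hmm_dmirror_cmd cmd = { 0 };

	cmd.npages = HMM_DMIRROR_FLAG_FAIL_ALLOC;
	return ioctl(fd, HMM_DMIRROR_FLAGS, &cmd);
}

int main(void)
{
	int fd = open("/dev/hmm_dmirror0", O_RDWR);

	if (fd < 0 || dmirror_fail_next_large_alloc(fd))
		perror("hmm_dmirror");
	return 0;
}

The flag is self-clearing: the driver drops it after the first large
allocation attempt, so each ioctl arms exactly one failure.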

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 lib/test_hmm.c      | 61 ++++++++++++++++++++++++++++++---------------
 lib/test_hmm_uapi.h |  3 +++
 2 files changed, 44 insertions(+), 20 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 6455707df902..bb9324b9b04c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -92,6 +92,7 @@ struct dmirror {
 	struct xarray			pt;
 	struct mmu_interval_notifier	notifier;
 	struct mutex			mutex;
+	__u64			flags;
 };
 
 /*
@@ -699,7 +700,12 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 		     page_to_pfn(spage)))
 			goto next;
 
-		dpage = dmirror_devmem_alloc_page(dmirror, is_large);
+		if (dmirror->flags & HMM_DMIRROR_FLAG_FAIL_ALLOC) {
+			dmirror->flags &= ~HMM_DMIRROR_FLAG_FAIL_ALLOC;
+			dpage = NULL;
+		} else
+			dpage = dmirror_devmem_alloc_page(dmirror, is_large);
+
 		if (!dpage) {
 			struct folio *folio;
 			unsigned long i;
@@ -959,44 +965,55 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 
 		spage = BACKING_PAGE(spage);
 		order = folio_order(page_folio(spage));
-
 		if (order)
+			*dst = MIGRATE_PFN_COMPOUND;
+		if (*src & MIGRATE_PFN_WRITE)
+			*dst |= MIGRATE_PFN_WRITE;
+
+		if (dmirror->flags & HMM_DMIRROR_FLAG_FAIL_ALLOC) {
+			dmirror->flags &= ~HMM_DMIRROR_FLAG_FAIL_ALLOC;
+			*dst &= ~MIGRATE_PFN_COMPOUND;
+			dpage = NULL;
+		} else if (order) {
 			dpage = folio_page(vma_alloc_folio(GFP_HIGHUSER_MOVABLE,
 						order, args->vma, addr), 0);
-		else
-			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
-
-		/* Try with smaller pages if large allocation fails */
-		if (!dpage && order) {
+		} else {
 			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
-			if (!dpage)
-				return VM_FAULT_OOM;
-			order = 0;
 		}
 
+		if (!dpage && !order)
+			return VM_FAULT_OOM;
+
 		pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n",
 				page_to_pfn(spage), page_to_pfn(dpage));
-		lock_page(dpage);
-		xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
-		copy_highpage(dpage, spage);
-		*dst = migrate_pfn(page_to_pfn(dpage));
-		if (*src & MIGRATE_PFN_WRITE)
-			*dst |= MIGRATE_PFN_WRITE;
-		if (order)
-			*dst |= MIGRATE_PFN_COMPOUND;
+
+		if (dpage) {
+			lock_page(dpage);
+			*dst |= migrate_pfn(page_to_pfn(dpage));
+		}
 
 		for (i = 0; i < (1 << order); i++) {
 			struct page *src_page;
 			struct page *dst_page;
 
+			/* Try with smaller pages if large allocation fails */
+			if (!dpage && order) {
+				dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+				lock_page(dpage);
+				dst[i] = migrate_pfn(page_to_pfn(dpage));
+				dst_page = pfn_to_page(page_to_pfn(dpage));
+				dpage = NULL; /* For the next iteration */
+			} else {
+				dst_page = pfn_to_page(page_to_pfn(dpage) + i);
+			}
+
 			src_page = pfn_to_page(page_to_pfn(spage) + i);
-			dst_page = pfn_to_page(page_to_pfn(dpage) + i);
 
 			xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
+			addr += PAGE_SIZE;
 			copy_highpage(dst_page, src_page);
 		}
 next:
-		addr += PAGE_SIZE << order;
 		src += 1 << order;
 		dst += 1 << order;
 	}
@@ -1514,6 +1531,10 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 		dmirror_device_remove_chunks(dmirror->mdevice);
 		ret = 0;
 		break;
+	case HMM_DMIRROR_FLAGS:
+		dmirror->flags = cmd.npages;
+		ret = 0;
+		break;
 
 	default:
 		return -EINVAL;
diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index 8c818a2cf4f6..f94c6d457338 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -37,6 +37,9 @@ struct hmm_dmirror_cmd {
 #define HMM_DMIRROR_EXCLUSIVE		_IOWR('H', 0x05, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_CHECK_EXCLUSIVE	_IOWR('H', 0x06, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_RELEASE		_IOWR('H', 0x07, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_FLAGS		_IOWR('H', 0x08, struct hmm_dmirror_cmd)
+
+#define HMM_DMIRROR_FLAG_FAIL_ALLOC	(1ULL << 0)
 
 /*
  * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT.
-- 
2.50.1




* [v6 12/15] selftests/mm/hmm-tests: new tests for zone device THP migration
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (10 preceding siblings ...)
  2025-09-16 12:21 ` [v6 11/15] lib/test_hmm: add large page allocation failure testing Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 13/15] selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests Balbir Singh
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Add new tests for migrating anon THP pages, including anon_huge,
anon_huge_zero and error cases involving forced splitting of pages during
migration.
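
All of these tests rely on the same buffer setup idiom; a distilled sketch
(names are illustrative, error handling trimmed): over-allocate twice the
2MB size so a THP-aligned window is guaranteed, then madvise() that window.

#include <stdint.h>
#include <sys/mman.h>

#define TWOMEG		(1UL << 21)
#define ALIGN_UP(p, a)	(((uintptr_t)(p) + (a) - 1) & ~((uintptr_t)(a) - 1))

/* Map 2 * TWOMEG so a TWOMEG-aligned, PMD-mappable window always fits. */
static void *map_thp_candidate(void)
{
	void *ptr = mmap(NULL, 2 * TWOMEG, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	void *aligned;

	if (ptr == MAP_FAILED)
		return NULL;
	aligned = (void *)ALIGN_UP(ptr, TWOMEG);
	if (madvise(aligned, TWOMEG, MADV_HUGEPAGE))
		return NULL;
	return aligned;	/* first access faults in a (possibly huge) folio */
}

int main(void)
{
	return map_thp_candidate() ? 0 : 1;
}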

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 tools/testing/selftests/mm/hmm-tests.c | 410 +++++++++++++++++++++++++
 1 file changed, 410 insertions(+)

diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
index 15aadaf24a66..339a90183930 100644
--- a/tools/testing/selftests/mm/hmm-tests.c
+++ b/tools/testing/selftests/mm/hmm-tests.c
@@ -2055,4 +2055,414 @@ TEST_F(hmm, hmm_cow_in_device)
 
 	hmm_buffer_free(buffer);
 }
+
+/*
+ * Migrate private anonymous huge empty page.
+ */
+TEST_F(hmm, migrate_anon_huge_empty)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge zero page.
+ */
+TEST_F(hmm, migrate_anon_huge_zero)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+	int val;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Initialize a read-only zero huge page. */
+	val = *(int *)buffer->ptr;
+	ASSERT_EQ(val, 0);
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) {
+		ASSERT_EQ(ptr[i], 0);
+		/* If it asserts once, it probably will 500,000 times */
+		if (ptr[i] != 0)
+			break;
+	}
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge page and free.
+ */
+TEST_F(hmm, migrate_anon_huge_free)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Try freeing it. */
+	ret = madvise(map, size, MADV_FREE);
+	ASSERT_EQ(ret, 0);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge page and fault back to sysmem.
+ */
+TEST_F(hmm, migrate_anon_huge_fault)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge page with allocation errors.
+ */
+TEST_F(hmm, migrate_anon_huge_err)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(2 * size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, 2 * size);
+
+	old_ptr = mmap(NULL, 2 * size, PROT_READ | PROT_WRITE,
+			MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device but force a THP allocation error. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i) {
+		ASSERT_EQ(ptr[i], i);
+		if (ptr[i] != i)
+			break;
+	}
+
+	/* Try faulting back a single (PAGE_SIZE) page. */
+	ptr = buffer->ptr;
+	ASSERT_EQ(ptr[2048], 2048);
+
+	/* unmap and remap the region to reset things. */
+	ret = munmap(old_ptr, 2 * size);
+	ASSERT_EQ(ret, 0);
+	old_ptr = mmap(NULL, 2 * size, PROT_READ | PROT_WRITE,
+			MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate THP to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/*
+	 * Force an allocation error when faulting back a THP resident in the
+	 * device.
+	 */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+
+	ret = hmm_migrate_dev_to_sys(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ptr = buffer->ptr;
+	ASSERT_EQ(ptr[2048], 2048);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge zero page with allocation errors.
+ */
+TEST_F(hmm, migrate_anon_huge_zero_err)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(2 * size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, 2 * size);
+
+	old_ptr = mmap(NULL, 2 * size, PROT_READ,
+			MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Migrate memory to device but force a THP allocation error. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	/* Try faulting back a single (PAGE_SIZE) page. */
+	ptr = buffer->ptr;
+	ASSERT_EQ(ptr[2048], 0);
+
+	/* unmap and remap the region to reset things. */
+	ret = munmap(old_ptr, 2 * size);
+	ASSERT_EQ(ret, 0);
+	old_ptr = mmap(NULL, 2 * size, PROT_READ,
+			MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory (zero THP page). */
+	ret = ptr[0];
+	ASSERT_EQ(ret, 0);
+
+	/* Migrate memory to device but force a THP allocation error. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Fault the device memory back and check it. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
 TEST_HARNESS_MAIN
-- 
2.50.1




* [v6 13/15] selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (11 preceding siblings ...)
  2025-09-16 12:21 ` [v6 12/15] selftests/mm/hmm-tests: new tests for zone device THP migration Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 14/15] selftests/mm/hmm-tests: new throughput tests including THP Balbir Singh
  2025-09-16 12:21 ` [v6 15/15] gpu/drm/nouveau: enable THP support for GPU memory migration Balbir Singh
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm; +Cc: damon, dri-devel, Matthew Brost, Balbir Singh

From: Matthew Brost <matthew.brost@intel.com>

Add a partial unmap test case which munmaps memory while it is resident
in device memory.

Add tests exercising mremap on faulted-in memory (CPU and GPU) at
various offsets and verify correctness.

Update anon_write_child to read device memory after fork(), verifying
that this flow works in the kernel.

Both THP and non-THP cases are updated.
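
For readers less familiar with the mremap() variants these tests exercise,
a standalone sketch of just the remap mechanics (no HMM involved; the
MREMAP_DONTUNMAP value is guarded because older libc headers may lack it):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MREMAP_DONTUNMAP
#define MREMAP_DONTUNMAP 4
#endif

/*
 * MREMAP_FIXED moves the mapping to the requested address (replacing
 * whatever was mapped there), and MREMAP_DONTUNMAP additionally leaves an
 * empty mapping behind at the old address instead of tearing it down.
 */
int main(void)
{
	size_t size = 2UL << 20;	/* one PMD-sized region */
	char *old = mmap(NULL, 4 * size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	void *moved;

	if (old == MAP_FAILED)
		return 1;

	moved = mremap(old, size, size,
		       MREMAP_MAYMOVE | MREMAP_FIXED | MREMAP_DONTUNMAP,
		       old + 2 * size);
	if (moved == MAP_FAILED)
		perror("mremap");
	return moved == MAP_FAILED;
}

The new tests run this move either before or after migrating the region to
device memory, at several destination offsets, and then fault the pages
back to verify that the contents survived.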

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 tools/testing/selftests/mm/hmm-tests.c | 312 ++++++++++++++++++++-----
 1 file changed, 252 insertions(+), 60 deletions(-)

diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
index 339a90183930..dedc1049bd4d 100644
--- a/tools/testing/selftests/mm/hmm-tests.c
+++ b/tools/testing/selftests/mm/hmm-tests.c
@@ -50,6 +50,8 @@ enum {
 	HMM_COHERENCE_DEVICE_TWO,
 };
 
+#define ONEKB		(1 << 10)
+#define ONEMEG		(1 << 20)
 #define TWOMEG		(1 << 21)
 #define HMM_BUFFER_SIZE (1024 << 12)
 #define HMM_PATH_MAX    64
@@ -525,6 +527,8 @@ TEST_F(hmm, anon_write_prot)
 /*
  * Check that a device writing an anonymous private mapping
  * will copy-on-write if a child process inherits the mapping.
+ *
+ * Also verifies that device memory can be read by the child after fork().
  */
 TEST_F(hmm, anon_write_child)
 {
@@ -532,72 +536,101 @@ TEST_F(hmm, anon_write_child)
 	unsigned long npages;
 	unsigned long size;
 	unsigned long i;
+	void *old_ptr;
+	void *map;
 	int *ptr;
 	pid_t pid;
 	int child_fd;
-	int ret;
-
-	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
-	ASSERT_NE(npages, 0);
-	size = npages << self->page_shift;
-
-	buffer = malloc(sizeof(*buffer));
-	ASSERT_NE(buffer, NULL);
-
-	buffer->fd = -1;
-	buffer->size = size;
-	buffer->mirror = malloc(size);
-	ASSERT_NE(buffer->mirror, NULL);
-
-	buffer->ptr = mmap(NULL, size,
-			   PROT_READ | PROT_WRITE,
-			   MAP_PRIVATE | MAP_ANONYMOUS,
-			   buffer->fd, 0);
-	ASSERT_NE(buffer->ptr, MAP_FAILED);
-
-	/* Initialize buffer->ptr so we can tell if it is written. */
-	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
-		ptr[i] = i;
+	int ret, use_thp, migrate;
+
+	for (migrate = 0; migrate < 2; ++migrate) {
+		for (use_thp = 0; use_thp < 2; ++use_thp) {
+			npages = ALIGN(use_thp ? TWOMEG : HMM_BUFFER_SIZE,
+				       self->page_size) >> self->page_shift;
+			ASSERT_NE(npages, 0);
+			size = npages << self->page_shift;
+
+			buffer = malloc(sizeof(*buffer));
+			ASSERT_NE(buffer, NULL);
+
+			buffer->fd = -1;
+			buffer->size = size * 2;
+			buffer->mirror = malloc(size);
+			ASSERT_NE(buffer->mirror, NULL);
+
+			buffer->ptr = mmap(NULL, size * 2,
+					   PROT_READ | PROT_WRITE,
+					   MAP_PRIVATE | MAP_ANONYMOUS,
+					   buffer->fd, 0);
+			ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+			old_ptr = buffer->ptr;
+			if (use_thp) {
+				map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+				ret = madvise(map, size, MADV_HUGEPAGE);
+				ASSERT_EQ(ret, 0);
+				buffer->ptr = map;
+			}
+
+			/* Initialize buffer->ptr so we can tell if it is written. */
+			for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+				ptr[i] = i;
+
+			/* Initialize data that the device will write to buffer->ptr. */
+			for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+				ptr[i] = -i;
+
+			if (migrate) {
+				ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+				ASSERT_EQ(ret, 0);
+				ASSERT_EQ(buffer->cpages, npages);
+
+			}
+
+			pid = fork();
+			if (pid == -1)
+				ASSERT_EQ(pid, 0);
+			if (pid != 0) {
+				waitpid(pid, &ret, 0);
+				ASSERT_EQ(WIFEXITED(ret), 1);
+
+				/* Check that the parent's buffer did not change. */
+				for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+					ASSERT_EQ(ptr[i], i);
+
+				buffer->ptr = old_ptr;
+				hmm_buffer_free(buffer);
+				continue;
+			}
+
+			/* Check that we see the parent's values. */
+			for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+				ASSERT_EQ(ptr[i], i);
+			if (!migrate) {
+				for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+					ASSERT_EQ(ptr[i], -i);
+			}
+
+			/* The child process needs its own mirror to its own mm. */
+			child_fd = hmm_open(0);
+			ASSERT_GE(child_fd, 0);
+
+			/* Simulate a device writing system memory. */
+			ret = hmm_dmirror_cmd(child_fd, HMM_DMIRROR_WRITE, buffer, npages);
+			ASSERT_EQ(ret, 0);
+			ASSERT_EQ(buffer->cpages, npages);
+			ASSERT_EQ(buffer->faults, 1);
 
-	/* Initialize data that the device will write to buffer->ptr. */
-	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
-		ptr[i] = -i;
+			/* Check what the device wrote. */
+			if (!migrate) {
+				for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+					ASSERT_EQ(ptr[i], -i);
+			}
 
-	pid = fork();
-	if (pid == -1)
-		ASSERT_EQ(pid, 0);
-	if (pid != 0) {
-		waitpid(pid, &ret, 0);
-		ASSERT_EQ(WIFEXITED(ret), 1);
-
-		/* Check that the parent's buffer did not change. */
-		for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
-			ASSERT_EQ(ptr[i], i);
-		return;
+			close(child_fd);
+			exit(0);
+		}
 	}
-
-	/* Check that we see the parent's values. */
-	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
-		ASSERT_EQ(ptr[i], i);
-	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
-		ASSERT_EQ(ptr[i], -i);
-
-	/* The child process needs its own mirror to its own mm. */
-	child_fd = hmm_open(0);
-	ASSERT_GE(child_fd, 0);
-
-	/* Simulate a device writing system memory. */
-	ret = hmm_dmirror_cmd(child_fd, HMM_DMIRROR_WRITE, buffer, npages);
-	ASSERT_EQ(ret, 0);
-	ASSERT_EQ(buffer->cpages, npages);
-	ASSERT_EQ(buffer->faults, 1);
-
-	/* Check what the device wrote. */
-	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
-		ASSERT_EQ(ptr[i], -i);
-
-	close(child_fd);
-	exit(0);
 }
 
 /*
@@ -2289,6 +2322,165 @@ TEST_F(hmm, migrate_anon_huge_fault)
 	hmm_buffer_free(buffer);
 }
 
+/*
+ * Migrate memory and fault back to sysmem after partially unmapping.
+ */
+TEST_F(hmm, migrate_partial_unmap_fault)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size = TWOMEG;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret, j, use_thp;
+	int offsets[] = { 0, 512 * ONEKB, ONEMEG };
+
+	for (use_thp = 0; use_thp < 2; ++use_thp) {
+		for (j = 0; j < ARRAY_SIZE(offsets); ++j) {
+			buffer = malloc(sizeof(*buffer));
+			ASSERT_NE(buffer, NULL);
+
+			buffer->fd = -1;
+			buffer->size = 2 * size;
+			buffer->mirror = malloc(size);
+			ASSERT_NE(buffer->mirror, NULL);
+			memset(buffer->mirror, 0xFF, size);
+
+			buffer->ptr = mmap(NULL, 2 * size,
+					   PROT_READ | PROT_WRITE,
+					   MAP_PRIVATE | MAP_ANONYMOUS,
+					   buffer->fd, 0);
+			ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+			npages = size >> self->page_shift;
+			map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+			if (use_thp)
+				ret = madvise(map, size, MADV_HUGEPAGE);
+			else
+				ret = madvise(map, size, MADV_NOHUGEPAGE);
+			ASSERT_EQ(ret, 0);
+			old_ptr = buffer->ptr;
+			buffer->ptr = map;
+
+			/* Initialize buffer in system memory. */
+			for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+				ptr[i] = i;
+
+			/* Migrate memory to device. */
+			ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+			ASSERT_EQ(ret, 0);
+			ASSERT_EQ(buffer->cpages, npages);
+
+			/* Check what the device read. */
+			for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+				ASSERT_EQ(ptr[i], i);
+
+			munmap(buffer->ptr + offsets[j], ONEMEG);
+
+			/* Fault pages back to system memory and check them. */
+			for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+				if (i * sizeof(int) < offsets[j] ||
+				    i * sizeof(int) >= offsets[j] + ONEMEG)
+					ASSERT_EQ(ptr[i], i);
+
+			buffer->ptr = old_ptr;
+			hmm_buffer_free(buffer);
+		}
+	}
+}
+
+TEST_F(hmm, migrate_remap_fault)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size = TWOMEG;
+	unsigned long i;
+	void *old_ptr, *new_ptr = NULL;
+	void *map;
+	int *ptr;
+	int ret, j, use_thp, dont_unmap, before;
+	int offsets[] = { 0, 512 * ONEKB, ONEMEG };
+
+	for (before = 0; before < 2; ++before) {
+		for (dont_unmap = 0; dont_unmap < 2; ++dont_unmap) {
+			for (use_thp = 0; use_thp < 2; ++use_thp) {
+				for (j = 0; j < ARRAY_SIZE(offsets); ++j) {
+					int flags = MREMAP_MAYMOVE | MREMAP_FIXED;
+
+					if (dont_unmap)
+						flags |= MREMAP_DONTUNMAP;
+
+					buffer = malloc(sizeof(*buffer));
+					ASSERT_NE(buffer, NULL);
+
+					buffer->fd = -1;
+					buffer->size = 8 * size;
+					buffer->mirror = malloc(size);
+					ASSERT_NE(buffer->mirror, NULL);
+					memset(buffer->mirror, 0xFF, size);
+
+					buffer->ptr = mmap(NULL, buffer->size,
+							   PROT_READ | PROT_WRITE,
+							   MAP_PRIVATE | MAP_ANONYMOUS,
+							   buffer->fd, 0);
+					ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+					npages = size >> self->page_shift;
+					map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+					if (use_thp)
+						ret = madvise(map, size, MADV_HUGEPAGE);
+					else
+						ret = madvise(map, size, MADV_NOHUGEPAGE);
+					ASSERT_EQ(ret, 0);
+					old_ptr = buffer->ptr;
+					munmap(map + size, size * 2);
+					buffer->ptr = map;
+
+					/* Initialize buffer in system memory. */
+					for (i = 0, ptr = buffer->ptr;
+					     i < size / sizeof(*ptr); ++i)
+						ptr[i] = i;
+
+					if (before) {
+						new_ptr = mremap((void *)map, size, size, flags,
+								 map + size + offsets[j]);
+						ASSERT_NE(new_ptr, MAP_FAILED);
+						buffer->ptr = new_ptr;
+					}
+
+					/* Migrate memory to device. */
+					ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+					ASSERT_EQ(ret, 0);
+					ASSERT_EQ(buffer->cpages, npages);
+
+					/* Check what the device read. */
+					for (i = 0, ptr = buffer->mirror;
+					     i < size / sizeof(*ptr); ++i)
+						ASSERT_EQ(ptr[i], i);
+
+					if (!before) {
+						new_ptr = mremap((void *)map, size, size, flags,
+								 map + size + offsets[j]);
+						ASSERT_NE(new_ptr, MAP_FAILED);
+						buffer->ptr = new_ptr;
+					}
+
+					/* Fault pages back to system memory and check them. */
+					for (i = 0, ptr = buffer->ptr;
+					     i < size / sizeof(*ptr); ++i)
+						ASSERT_EQ(ptr[i], i);
+
+					munmap(new_ptr, size);
+					buffer->ptr = old_ptr;
+					hmm_buffer_free(buffer);
+				}
+			}
+		}
+	}
+}
+
 /*
  * Migrate private anonymous huge page with allocation errors.
  */
-- 
2.50.1




* [v6 14/15] selftests/mm/hmm-tests: new throughput tests including THP
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (12 preceding siblings ...)
  2025-09-16 12:21 ` [v6 13/15] selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  2025-09-16 12:21 ` [v6 15/15] gpu/drm/nouveau: enable THP support for GPU memory migration Balbir Singh
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Add new benchmark-style tests that measure transfer bandwidth for zone
device memory operations.
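
The reported throughput is derived directly from the averaged migration
time; for example (hypothetical numbers), a 256MB buffer migrated in an
average of 50 ms per pass is reported as 0.25 GB / 0.050 s = 5 GB/s.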

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 tools/testing/selftests/mm/hmm-tests.c | 197 ++++++++++++++++++++++++-
 1 file changed, 196 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
index dedc1049bd4d..5a1525f72daa 100644
--- a/tools/testing/selftests/mm/hmm-tests.c
+++ b/tools/testing/selftests/mm/hmm-tests.c
@@ -25,6 +25,7 @@
 #include <sys/stat.h>
 #include <sys/mman.h>
 #include <sys/ioctl.h>
+#include <sys/time.h>
 
 
 /*
@@ -209,8 +210,10 @@ static void hmm_buffer_free(struct hmm_buffer *buffer)
 	if (buffer == NULL)
 		return;
 
-	if (buffer->ptr)
+	if (buffer->ptr) {
 		munmap(buffer->ptr, buffer->size);
+		buffer->ptr = NULL;
+	}
 	free(buffer->mirror);
 	free(buffer);
 }
@@ -2657,4 +2660,196 @@ TEST_F(hmm, migrate_anon_huge_zero_err)
 	buffer->ptr = old_ptr;
 	hmm_buffer_free(buffer);
 }
+
+struct benchmark_results {
+	double sys_to_dev_time;
+	double dev_to_sys_time;
+	double throughput_s2d;
+	double throughput_d2s;
+};
+
+static double get_time_ms(void)
+{
+	struct timeval tv;
+
+	gettimeofday(&tv, NULL);
+	return (tv.tv_sec * 1000.0) + (tv.tv_usec / 1000.0);
+}
+
+static inline struct hmm_buffer *hmm_buffer_alloc(unsigned long size)
+{
+	struct hmm_buffer *buffer;
+
+	buffer = malloc(sizeof(*buffer));
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	memset(buffer->mirror, 0xFF, size);
+	return buffer;
+}
+
+static void print_benchmark_results(const char *test_name, size_t buffer_size,
+				     struct benchmark_results *thp,
+				     struct benchmark_results *regular)
+{
+	double s2d_improvement = ((regular->sys_to_dev_time - thp->sys_to_dev_time) /
+				 regular->sys_to_dev_time) * 100.0;
+	double d2s_improvement = ((regular->dev_to_sys_time - thp->dev_to_sys_time) /
+				 regular->dev_to_sys_time) * 100.0;
+	double throughput_s2d_improvement = ((thp->throughput_s2d - regular->throughput_s2d) /
+					    regular->throughput_s2d) * 100.0;
+	double throughput_d2s_improvement = ((thp->throughput_d2s - regular->throughput_d2s) /
+					    regular->throughput_d2s) * 100.0;
+
+	printf("\n=== %s (%.1f MB) ===\n", test_name, buffer_size / (1024.0 * 1024.0));
+	printf("                     | With THP        | Without THP     | Improvement\n");
+	printf("---------------------------------------------------------------------\n");
+	printf("Sys->Dev Migration   | %.3f ms        | %.3f ms        | %.1f%%\n",
+	       thp->sys_to_dev_time, regular->sys_to_dev_time, s2d_improvement);
+	printf("Dev->Sys Migration   | %.3f ms        | %.3f ms        | %.1f%%\n",
+	       thp->dev_to_sys_time, regular->dev_to_sys_time, d2s_improvement);
+	printf("S->D Throughput      | %.2f GB/s      | %.2f GB/s      | %.1f%%\n",
+	       thp->throughput_s2d, regular->throughput_s2d, throughput_s2d_improvement);
+	printf("D->S Throughput      | %.2f GB/s      | %.2f GB/s      | %.1f%%\n",
+	       thp->throughput_d2s, regular->throughput_d2s, throughput_d2s_improvement);
+}
+
+/*
+ * Run a single migration benchmark
+ * fd: file descriptor for hmm device
+ * use_thp: whether to use THP
+ * buffer_size: size of buffer to allocate
+ * iterations: number of iterations
+ * results: where to store results
+ */
+static inline int run_migration_benchmark(int fd, int use_thp, size_t buffer_size,
+					   int iterations, struct benchmark_results *results)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages = buffer_size / sysconf(_SC_PAGESIZE);
+	double start, end;
+	double s2d_total = 0, d2s_total = 0;
+	int ret, i;
+	int *ptr;
+
+	buffer = hmm_buffer_alloc(buffer_size);
+
+	/* Map memory */
+	buffer->ptr = mmap(NULL, buffer_size, PROT_READ | PROT_WRITE,
+			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+
+	if (!buffer->ptr)
+		return -1;
+
+	/* Apply THP hint if requested */
+	if (use_thp)
+		ret = madvise(buffer->ptr, buffer_size, MADV_HUGEPAGE);
+	else
+		ret = madvise(buffer->ptr, buffer_size, MADV_NOHUGEPAGE);
+
+	if (ret)
+		return ret;
+
+	/* Initialize memory to make sure pages are allocated */
+	ptr = (int *)buffer->ptr;
+	for (i = 0; i < buffer_size / sizeof(int); i++)
+		ptr[i] = i & 0xFF;
+
+	/* Warmup iteration */
+	ret = hmm_migrate_sys_to_dev(fd, buffer, npages);
+	if (ret)
+		return ret;
+
+	ret = hmm_migrate_dev_to_sys(fd, buffer, npages);
+	if (ret)
+		return ret;
+
+	/* Benchmark iterations */
+	for (i = 0; i < iterations; i++) {
+		/* System to device migration */
+		start = get_time_ms();
+
+		ret = hmm_migrate_sys_to_dev(fd, buffer, npages);
+		if (ret)
+			return ret;
+
+		end = get_time_ms();
+		s2d_total += (end - start);
+
+		/* Device to system migration */
+		start = get_time_ms();
+
+		ret = hmm_migrate_dev_to_sys(fd, buffer, npages);
+		if (ret)
+			return ret;
+
+		end = get_time_ms();
+		d2s_total += (end - start);
+	}
+
+	/* Calculate average times and throughput */
+	results->sys_to_dev_time = s2d_total / iterations;
+	results->dev_to_sys_time = d2s_total / iterations;
+	results->throughput_s2d = (buffer_size / (1024.0 * 1024.0 * 1024.0)) /
+				 (results->sys_to_dev_time / 1000.0);
+	results->throughput_d2s = (buffer_size / (1024.0 * 1024.0 * 1024.0)) /
+				 (results->dev_to_sys_time / 1000.0);
+
+	/* Cleanup */
+	hmm_buffer_free(buffer);
+	return 0;
+}
+
+/*
+ * Benchmark THP migration with different buffer sizes
+ */
+TEST_F_TIMEOUT(hmm, benchmark_thp_migration, 120)
+{
+	struct benchmark_results thp_results, regular_results;
+	size_t thp_size = 2 * 1024 * 1024; /* 2MB - typical THP size */
+	int iterations = 5;
+
+	printf("\nHMM THP Migration Benchmark\n");
+	printf("---------------------------\n");
+	printf("System page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
+
+	/* Test different buffer sizes */
+	size_t test_sizes[] = {
+		thp_size / 4,      /* 512KB - smaller than THP */
+		thp_size / 2,      /* 1MB - half THP */
+		thp_size,          /* 2MB - single THP */
+		thp_size * 2,      /* 4MB - two THPs */
+		thp_size * 4,      /* 8MB - four THPs */
+		thp_size * 8,       /* 16MB - eight THPs */
+		thp_size * 128,       /* 256MB - one twenty eight THPs */
+	};
+
+	static const char *const test_names[] = {
+		"Small Buffer (512KB)",
+		"Half THP Size (1MB)",
+		"Single THP Size (2MB)",
+		"Two THP Size (4MB)",
+		"Four THP Size (8MB)",
+		"Eight THP Size (16MB)",
+		"One twenty eight THP Size (256MB)"
+	};
+
+	int num_tests = ARRAY_SIZE(test_sizes);
+
+	/* Run all tests */
+	for (int i = 0; i < num_tests; i++) {
+		/* Test with THP */
+		ASSERT_EQ(run_migration_benchmark(self->fd, 1, test_sizes[i],
+					iterations, &thp_results), 0);
+
+		/* Test without THP */
+		ASSERT_EQ(run_migration_benchmark(self->fd, 0, test_sizes[i],
+					iterations, &regular_results), 0);
+
+		/* Print results */
+		print_benchmark_results(test_names[i], test_sizes[i],
+					&thp_results, &regular_results);
+	}
+}
 TEST_HARNESS_MAIN
-- 
2.50.1




* [v6 15/15] gpu/drm/nouveau: enable THP support for GPU memory migration
  2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
                   ` (13 preceding siblings ...)
  2025-09-16 12:21 ` [v6 14/15] selftests/mm/hmm-tests: new throughput tests including THP Balbir Singh
@ 2025-09-16 12:21 ` Balbir Singh
  14 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-16 12:21 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: damon, dri-devel, Balbir Singh, David Hildenbrand, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

Enable MIGRATE_VMA_SELECT_COMPOUND support in the nouveau driver to take
advantage of THP zone device migration capabilities.

Update migration and eviction code paths to handle compound page sizes
appropriately, improving memory bandwidth utilization and reducing
migration overhead for large GPU memory allocations.
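
At the driver level the opt-in boils down to two things: pass
MIGRATE_VMA_SELECT_COMPOUND to migrate_vma_setup() and mark compound
source/destination entries with MIGRATE_PFN_COMPOUND. A minimal sketch of
that shape (illustrative only; it assumes CONFIG_TRANSPARENT_HUGEPAGE and a
caller-prepared, locked PMD-order device page, and omits the copy and the
fallback handling that the nouveau changes below implement):

#include <linux/migrate.h>
#include <linux/mm.h>

static int demo_migrate_one_thp(struct vm_area_struct *vma, unsigned long start,
				void *pgmap_owner, unsigned long *src,
				unsigned long *dst, struct page *dst_huge_page)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= start + HPAGE_PMD_SIZE,
		.src		= src,	/* HPAGE_PMD_NR entries */
		.dst		= dst,	/* HPAGE_PMD_NR entries */
		.pgmap_owner	= pgmap_owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM |
				  MIGRATE_VMA_SELECT_COMPOUND,
	};
	int ret = migrate_vma_setup(&args);

	if (ret || !args.cpages)
		return ret ? ret : -EBUSY;

	if (src[0] & MIGRATE_PFN_COMPOUND)
		/* A single entry describes the whole PMD-sized folio. */
		dst[0] = migrate_pfn(page_to_pfn(dst_huge_page)) |
			 MIGRATE_PFN_COMPOUND;

	/* The device-side copy of the data would run here. */

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
	return 0;
}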

Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 304 ++++++++++++++++++-------
 drivers/gpu/drm/nouveau/nouveau_svm.c  |   6 +-
 drivers/gpu/drm/nouveau/nouveau_svm.h  |   3 +-
 3 files changed, 230 insertions(+), 83 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index ca4932a150e3..3bcfd2e5ee4c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -50,6 +50,7 @@
  */
 #define DMEM_CHUNK_SIZE (2UL << 20)
 #define DMEM_CHUNK_NPAGES (DMEM_CHUNK_SIZE >> PAGE_SHIFT)
+#define NR_CHUNKS (128)
 
 enum nouveau_aper {
 	NOUVEAU_APER_VIRT,
@@ -83,9 +84,15 @@ struct nouveau_dmem {
 	struct list_head chunks;
 	struct mutex mutex;
 	struct page *free_pages;
+	struct folio *free_folios;
 	spinlock_t lock;
 };
 
+struct nouveau_dmem_dma_info {
+	dma_addr_t dma_addr;
+	size_t size;
+};
+
 static struct nouveau_dmem_chunk *nouveau_page_to_chunk(struct page *page)
 {
 	return container_of(page_pgmap(page), struct nouveau_dmem_chunk,
@@ -112,10 +119,16 @@ static void nouveau_dmem_page_free(struct page *page)
 {
 	struct nouveau_dmem_chunk *chunk = nouveau_page_to_chunk(page);
 	struct nouveau_dmem *dmem = chunk->drm->dmem;
+	struct folio *folio = page_folio(page);
 
 	spin_lock(&dmem->lock);
-	page->zone_device_data = dmem->free_pages;
-	dmem->free_pages = page;
+	if (folio_order(folio)) {
+		page->zone_device_data = dmem->free_folios;
+		dmem->free_folios = folio;
+	} else {
+		page->zone_device_data = dmem->free_pages;
+		dmem->free_pages = page;
+	}
 
 	WARN_ON(!chunk->callocated);
 	chunk->callocated--;
@@ -139,20 +152,28 @@ static void nouveau_dmem_fence_done(struct nouveau_fence **fence)
 	}
 }
 
-static int nouveau_dmem_copy_one(struct nouveau_drm *drm, struct page *spage,
-				struct page *dpage, dma_addr_t *dma_addr)
+static int nouveau_dmem_copy_folio(struct nouveau_drm *drm,
+				   struct folio *sfolio, struct folio *dfolio,
+				   struct nouveau_dmem_dma_info *dma_info)
 {
 	struct device *dev = drm->dev->dev;
+	struct page *dpage = folio_page(dfolio, 0);
+	struct page *spage = folio_page(sfolio, 0);
 
-	lock_page(dpage);
+	folio_lock(dfolio);
 
-	*dma_addr = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
-	if (dma_mapping_error(dev, *dma_addr))
+	dma_info->dma_addr = dma_map_page(dev, dpage, 0, page_size(dpage),
+					DMA_BIDIRECTIONAL);
+	dma_info->size = page_size(dpage);
+	if (dma_mapping_error(dev, dma_info->dma_addr))
 		return -EIO;
 
-	if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_HOST, *dma_addr,
-					 NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage))) {
-		dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+	if (drm->dmem->migrate.copy_func(drm, folio_nr_pages(sfolio),
+					 NOUVEAU_APER_HOST, dma_info->dma_addr,
+					 NOUVEAU_APER_VRAM,
+					 nouveau_dmem_page_addr(spage))) {
+		dma_unmap_page(dev, dma_info->dma_addr, page_size(dpage),
+					DMA_BIDIRECTIONAL);
 		return -EIO;
 	}
 
@@ -165,21 +186,47 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 	struct nouveau_dmem *dmem = drm->dmem;
 	struct nouveau_fence *fence;
 	struct nouveau_svmm *svmm;
-	struct page *spage, *dpage;
-	unsigned long src = 0, dst = 0;
-	dma_addr_t dma_addr = 0;
+	struct page *dpage;
 	vm_fault_t ret = 0;
 	struct migrate_vma args = {
 		.vma		= vmf->vma,
-		.start		= vmf->address,
-		.end		= vmf->address + PAGE_SIZE,
-		.src		= &src,
-		.dst		= &dst,
 		.pgmap_owner	= drm->dev,
 		.fault_page	= vmf->page,
-		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
+		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
+				  MIGRATE_VMA_SELECT_COMPOUND,
+		.src = NULL,
+		.dst = NULL,
 	};
+	unsigned int order, nr;
+	struct folio *sfolio, *dfolio;
+	struct nouveau_dmem_dma_info dma_info;
+
+	sfolio = page_folio(vmf->page);
+	order = folio_order(sfolio);
+	nr = 1 << order;
+
+	/*
+	 * Handle partial unmap faults, where the folio is large, but
+	 * the pmd is split.
+	 */
+	if (vmf->pte) {
+		order = 0;
+		nr = 1;
+	}
+
+	if (order)
+		args.flags |= MIGRATE_VMA_SELECT_COMPOUND;
 
+	args.start = ALIGN_DOWN(vmf->address, (PAGE_SIZE << order));
+	args.vma = vmf->vma;
+	args.end = args.start + (PAGE_SIZE << order);
+	args.src = kcalloc(nr, sizeof(*args.src), GFP_KERNEL);
+	args.dst = kcalloc(nr, sizeof(*args.dst), GFP_KERNEL);
+
+	if (!args.src || !args.dst) {
+		ret = VM_FAULT_OOM;
+		goto err;
+	}
 	/*
 	 * FIXME what we really want is to find some heuristic to migrate more
 	 * than just one page on CPU fault. When such fault happens it is very
@@ -190,20 +237,26 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 	if (!args.cpages)
 		return 0;
 
-	spage = migrate_pfn_to_page(src);
-	if (!spage || !(src & MIGRATE_PFN_MIGRATE))
-		goto done;
-
-	dpage = alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO, vmf->vma, vmf->address);
-	if (!dpage)
+	if (order)
+		dpage = folio_page(vma_alloc_folio(GFP_HIGHUSER | __GFP_ZERO,
+					order, vmf->vma, vmf->address), 0);
+	else
+		dpage = alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO, vmf->vma,
+					vmf->address);
+	if (!dpage) {
+		ret = VM_FAULT_OOM;
 		goto done;
+	}
 
-	dst = migrate_pfn(page_to_pfn(dpage));
+	args.dst[0] = migrate_pfn(page_to_pfn(dpage));
+	if (order)
+		args.dst[0] |= MIGRATE_PFN_COMPOUND;
+	dfolio = page_folio(dpage);
 
-	svmm = spage->zone_device_data;
+	svmm = folio_zone_device_data(sfolio);
 	mutex_lock(&svmm->mutex);
 	nouveau_svmm_invalidate(svmm, args.start, args.end);
-	ret = nouveau_dmem_copy_one(drm, spage, dpage, &dma_addr);
+	ret = nouveau_dmem_copy_folio(drm, sfolio, dfolio, &dma_info);
 	mutex_unlock(&svmm->mutex);
 	if (ret) {
 		ret = VM_FAULT_SIGBUS;
@@ -213,25 +266,40 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 	nouveau_fence_new(&fence, dmem->migrate.chan);
 	migrate_vma_pages(&args);
 	nouveau_dmem_fence_done(&fence);
-	dma_unmap_page(drm->dev->dev, dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+	dma_unmap_page(drm->dev->dev, dma_info.dma_addr, PAGE_SIZE,
+				DMA_BIDIRECTIONAL);
 done:
 	migrate_vma_finalize(&args);
+err:
+	kfree(args.src);
+	kfree(args.dst);
 	return ret;
 }
 
+static void nouveau_dmem_folio_split(struct folio *head, struct folio *tail)
+{
+	if (tail == NULL)
+		return;
+	tail->pgmap = head->pgmap;
+	tail->mapping = head->mapping;
+	folio_set_zone_device_data(tail, folio_zone_device_data(head));
+}
+
 static const struct dev_pagemap_ops nouveau_dmem_pagemap_ops = {
 	.page_free		= nouveau_dmem_page_free,
 	.migrate_to_ram		= nouveau_dmem_migrate_to_ram,
+	.folio_split		= nouveau_dmem_folio_split,
 };
 
 static int
-nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
+nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage,
+			 bool is_large)
 {
 	struct nouveau_dmem_chunk *chunk;
 	struct resource *res;
 	struct page *page;
 	void *ptr;
-	unsigned long i, pfn_first;
+	unsigned long i, pfn_first, pfn;
 	int ret;
 
 	chunk = kzalloc(sizeof(*chunk), GFP_KERNEL);
@@ -241,7 +309,7 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
 	}
 
 	/* Allocate unused physical address space for device private pages. */
-	res = request_free_mem_region(&iomem_resource, DMEM_CHUNK_SIZE,
+	res = request_free_mem_region(&iomem_resource, DMEM_CHUNK_SIZE * NR_CHUNKS,
 				      "nouveau_dmem");
 	if (IS_ERR(res)) {
 		ret = PTR_ERR(res);
@@ -274,16 +342,40 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
 	pfn_first = chunk->pagemap.range.start >> PAGE_SHIFT;
 	page = pfn_to_page(pfn_first);
 	spin_lock(&drm->dmem->lock);
-	for (i = 0; i < DMEM_CHUNK_NPAGES - 1; ++i, ++page) {
-		page->zone_device_data = drm->dmem->free_pages;
-		drm->dmem->free_pages = page;
+
+	pfn = pfn_first;
+	for (i = 0; i < NR_CHUNKS; i++) {
+		int j;
+
+		if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) || !is_large) {
+			for (j = 0; j < DMEM_CHUNK_NPAGES - 1; j++, pfn++) {
+				page = pfn_to_page(pfn);
+				page->zone_device_data = drm->dmem->free_pages;
+				drm->dmem->free_pages = page;
+			}
+		} else {
+			page = pfn_to_page(pfn);
+			page->zone_device_data = drm->dmem->free_folios;
+			drm->dmem->free_folios = page_folio(page);
+			pfn += DMEM_CHUNK_NPAGES;
+		}
 	}
-	*ppage = page;
+
+	/* Move to next page */
+	if (is_large) {
+		*ppage = &drm->dmem->free_folios->page;
+		drm->dmem->free_folios = (*ppage)->zone_device_data;
+	} else {
+		*ppage = drm->dmem->free_pages;
+		drm->dmem->free_pages = (*ppage)->zone_device_data;
+	}
+
 	chunk->callocated++;
 	spin_unlock(&drm->dmem->lock);
 
-	NV_INFO(drm, "DMEM: registered %ldMB of device memory\n",
-		DMEM_CHUNK_SIZE >> 20);
+	NV_INFO(drm, "DMEM: registered %ldMB of %sdevice memory %lx %lx\n",
+		NR_CHUNKS * DMEM_CHUNK_SIZE >> 20, is_large ? "THP " : "", pfn_first,
+		nouveau_dmem_page_addr(page));
 
 	return 0;
 
@@ -298,27 +390,41 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
 }
 
 static struct page *
-nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm)
+nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm, bool is_large)
 {
 	struct nouveau_dmem_chunk *chunk;
 	struct page *page = NULL;
+	struct folio *folio = NULL;
 	int ret;
+	unsigned int order = 0;
 
 	spin_lock(&drm->dmem->lock);
-	if (drm->dmem->free_pages) {
+	if (is_large && drm->dmem->free_folios) {
+		folio = drm->dmem->free_folios;
+		page = &folio->page;
+		drm->dmem->free_folios = page->zone_device_data;
+		chunk = nouveau_page_to_chunk(&folio->page);
+		chunk->callocated++;
+		spin_unlock(&drm->dmem->lock);
+		order = ilog2(DMEM_CHUNK_NPAGES);
+	} else if (!is_large && drm->dmem->free_pages) {
 		page = drm->dmem->free_pages;
 		drm->dmem->free_pages = page->zone_device_data;
 		chunk = nouveau_page_to_chunk(page);
 		chunk->callocated++;
 		spin_unlock(&drm->dmem->lock);
+		folio = page_folio(page);
 	} else {
 		spin_unlock(&drm->dmem->lock);
-		ret = nouveau_dmem_chunk_alloc(drm, &page);
+		ret = nouveau_dmem_chunk_alloc(drm, &page, is_large);
 		if (ret)
 			return NULL;
+		folio = page_folio(page);
+		if (is_large)
+			order = ilog2(DMEM_CHUNK_NPAGES);
 	}
 
-	zone_device_page_init(page);
+	zone_device_folio_init(folio, order);
 	return page;
 }
 
@@ -369,12 +475,12 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 {
 	unsigned long i, npages = range_len(&chunk->pagemap.range) >> PAGE_SHIFT;
 	unsigned long *src_pfns, *dst_pfns;
-	dma_addr_t *dma_addrs;
+	struct nouveau_dmem_dma_info *dma_info;
 	struct nouveau_fence *fence;
 
 	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
 	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
-	dma_addrs = kvcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);
+	dma_info = kvcalloc(npages, sizeof(*dma_info), GFP_KERNEL | __GFP_NOFAIL);
 
 	migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
 			npages);
@@ -382,17 +488,28 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 	for (i = 0; i < npages; i++) {
 		if (src_pfns[i] & MIGRATE_PFN_MIGRATE) {
 			struct page *dpage;
+			struct folio *folio = page_folio(
+				migrate_pfn_to_page(src_pfns[i]));
+			unsigned int order = folio_order(folio);
+
+			if (src_pfns[i] & MIGRATE_PFN_COMPOUND) {
+				dpage = folio_page(
+						folio_alloc(
+						GFP_HIGHUSER_MOVABLE, order), 0);
+			} else {
+				/*
+				 * _GFP_NOFAIL because the GPU is going away and there
+				 * is nothing sensible we can do if we can't copy the
+				 * data back.
+				 */
+				dpage = alloc_page(GFP_HIGHUSER | __GFP_NOFAIL);
+			}
 
-			/*
-			 * _GFP_NOFAIL because the GPU is going away and there
-			 * is nothing sensible we can do if we can't copy the
-			 * data back.
-			 */
-			dpage = alloc_page(GFP_HIGHUSER | __GFP_NOFAIL);
 			dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
-			nouveau_dmem_copy_one(chunk->drm,
-					migrate_pfn_to_page(src_pfns[i]), dpage,
-					&dma_addrs[i]);
+			nouveau_dmem_copy_folio(chunk->drm,
+				page_folio(migrate_pfn_to_page(src_pfns[i])),
+				page_folio(dpage),
+				&dma_info[i]);
 		}
 	}
 
@@ -403,8 +520,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 	kvfree(src_pfns);
 	kvfree(dst_pfns);
 	for (i = 0; i < npages; i++)
-		dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
-	kvfree(dma_addrs);
+		dma_unmap_page(chunk->drm->dev->dev, dma_info[i].dma_addr,
+				dma_info[i].size, DMA_BIDIRECTIONAL);
+	kvfree(dma_info);
 }
 
 void
@@ -607,31 +725,36 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 
 static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 		struct nouveau_svmm *svmm, unsigned long src,
-		dma_addr_t *dma_addr, u64 *pfn)
+		struct nouveau_dmem_dma_info *dma_info, u64 *pfn)
 {
 	struct device *dev = drm->dev->dev;
 	struct page *dpage, *spage;
 	unsigned long paddr;
+	bool is_large = false;
+	unsigned long mpfn;
 
 	spage = migrate_pfn_to_page(src);
 	if (!(src & MIGRATE_PFN_MIGRATE))
 		goto out;
 
-	dpage = nouveau_dmem_page_alloc_locked(drm);
+	is_large = src & MIGRATE_PFN_COMPOUND;
+	dpage = nouveau_dmem_page_alloc_locked(drm, is_large);
 	if (!dpage)
 		goto out;
 
 	paddr = nouveau_dmem_page_addr(dpage);
 	if (spage) {
-		*dma_addr = dma_map_page(dev, spage, 0, page_size(spage),
+		dma_info->dma_addr = dma_map_page(dev, spage, 0, page_size(spage),
 					 DMA_BIDIRECTIONAL);
-		if (dma_mapping_error(dev, *dma_addr))
+		dma_info->size = page_size(spage);
+		if (dma_mapping_error(dev, dma_info->dma_addr))
 			goto out_free_page;
-		if (drm->dmem->migrate.copy_func(drm, 1,
-			NOUVEAU_APER_VRAM, paddr, NOUVEAU_APER_HOST, *dma_addr))
+		if (drm->dmem->migrate.copy_func(drm, folio_nr_pages(page_folio(spage)),
+			NOUVEAU_APER_VRAM, paddr, NOUVEAU_APER_HOST,
+			dma_info->dma_addr))
 			goto out_dma_unmap;
 	} else {
-		*dma_addr = DMA_MAPPING_ERROR;
+		dma_info->dma_addr = DMA_MAPPING_ERROR;
 		if (drm->dmem->migrate.clear_func(drm, page_size(dpage),
 			NOUVEAU_APER_VRAM, paddr))
 			goto out_free_page;
@@ -642,10 +765,13 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 		((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
 	if (src & MIGRATE_PFN_WRITE)
 		*pfn |= NVIF_VMM_PFNMAP_V0_W;
-	return migrate_pfn(page_to_pfn(dpage));
+	mpfn = migrate_pfn(page_to_pfn(dpage));
+	if (folio_order(page_folio(dpage)))
+		mpfn |= MIGRATE_PFN_COMPOUND;
+	return mpfn;
 
 out_dma_unmap:
-	dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+	dma_unmap_page(dev, dma_info->dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
 out_free_page:
 	nouveau_dmem_page_free_locked(drm, dpage);
 out:
@@ -655,27 +781,38 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 
 static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
 		struct nouveau_svmm *svmm, struct migrate_vma *args,
-		dma_addr_t *dma_addrs, u64 *pfns)
+		struct nouveau_dmem_dma_info *dma_info, u64 *pfns)
 {
 	struct nouveau_fence *fence;
 	unsigned long addr = args->start, nr_dma = 0, i;
+	unsigned long order = 0;
+
+	for (i = 0; addr < args->end; ) {
+		struct folio *folio;
 
-	for (i = 0; addr < args->end; i++) {
 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, svmm,
-				args->src[i], dma_addrs + nr_dma, pfns + i);
-		if (!dma_mapping_error(drm->dev->dev, dma_addrs[nr_dma]))
+				args->src[i], dma_info + nr_dma, pfns + i);
+		if (!args->dst[i]) {
+			i++;
+			addr += PAGE_SIZE;
+			continue;
+		}
+		if (!dma_mapping_error(drm->dev->dev, dma_info[nr_dma].dma_addr))
 			nr_dma++;
-		addr += PAGE_SIZE;
+		folio = page_folio(migrate_pfn_to_page(args->dst[i]));
+		order = folio_order(folio);
+		i += 1 << order;
+		addr += (1 << order) * PAGE_SIZE;
 	}
 
 	nouveau_fence_new(&fence, drm->dmem->migrate.chan);
 	migrate_vma_pages(args);
 	nouveau_dmem_fence_done(&fence);
-	nouveau_pfns_map(svmm, args->vma->vm_mm, args->start, pfns, i);
+	nouveau_pfns_map(svmm, args->vma->vm_mm, args->start, pfns, i, order);
 
 	while (nr_dma--) {
-		dma_unmap_page(drm->dev->dev, dma_addrs[nr_dma], PAGE_SIZE,
-				DMA_BIDIRECTIONAL);
+		dma_unmap_page(drm->dev->dev, dma_info[nr_dma].dma_addr,
+				dma_info[nr_dma].size, DMA_BIDIRECTIONAL);
 	}
 	migrate_vma_finalize(args);
 }
@@ -688,20 +825,27 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 			 unsigned long end)
 {
 	unsigned long npages = (end - start) >> PAGE_SHIFT;
-	unsigned long max = min(SG_MAX_SINGLE_ALLOC, npages);
-	dma_addr_t *dma_addrs;
+	unsigned long max = npages;
 	struct migrate_vma args = {
 		.vma		= vma,
 		.start		= start,
 		.pgmap_owner	= drm->dev,
-		.flags		= MIGRATE_VMA_SELECT_SYSTEM,
+		.flags		= MIGRATE_VMA_SELECT_SYSTEM
+				  | MIGRATE_VMA_SELECT_COMPOUND,
 	};
 	unsigned long i;
 	u64 *pfns;
 	int ret = -ENOMEM;
+	struct nouveau_dmem_dma_info *dma_info;
 
-	if (drm->dmem == NULL)
-		return -ENODEV;
+	if (drm->dmem == NULL) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		if (max > (unsigned long)HPAGE_PMD_NR)
+			max = (unsigned long)HPAGE_PMD_NR;
 
 	args.src = kcalloc(max, sizeof(*args.src), GFP_KERNEL);
 	if (!args.src)
@@ -710,8 +854,8 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 	if (!args.dst)
 		goto out_free_src;
 
-	dma_addrs = kmalloc_array(max, sizeof(*dma_addrs), GFP_KERNEL);
-	if (!dma_addrs)
+	dma_info = kmalloc_array(max, sizeof(*dma_info), GFP_KERNEL);
+	if (!dma_info)
 		goto out_free_dst;
 
 	pfns = nouveau_pfns_alloc(max);
@@ -729,7 +873,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 			goto out_free_pfns;
 
 		if (args.cpages)
-			nouveau_dmem_migrate_chunk(drm, svmm, &args, dma_addrs,
+			nouveau_dmem_migrate_chunk(drm, svmm, &args, dma_info,
 						   pfns);
 		args.start = args.end;
 	}
@@ -738,7 +882,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 out_free_pfns:
 	nouveau_pfns_free(pfns);
 out_free_dma:
-	kfree(dma_addrs);
+	kfree(dma_info);
 out_free_dst:
 	kfree(args.dst);
 out_free_src:
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 6fa387da0637..b8a3378154d5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -921,12 +921,14 @@ nouveau_pfns_free(u64 *pfns)
 
 void
 nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
-		 unsigned long addr, u64 *pfns, unsigned long npages)
+		 unsigned long addr, u64 *pfns, unsigned long npages,
+		 unsigned int page_shift)
 {
 	struct nouveau_pfnmap_args *args = nouveau_pfns_to_args(pfns);
 
 	args->p.addr = addr;
-	args->p.size = npages << PAGE_SHIFT;
+	args->p.size = npages << page_shift;
+	args->p.page = page_shift;
 
 	mutex_lock(&svmm->mutex);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.h b/drivers/gpu/drm/nouveau/nouveau_svm.h
index e7d63d7f0c2d..3fd78662f17e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.h
@@ -33,7 +33,8 @@ void nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit);
 u64 *nouveau_pfns_alloc(unsigned long npages);
 void nouveau_pfns_free(u64 *pfns);
 void nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
-		      unsigned long addr, u64 *pfns, unsigned long npages);
+		      unsigned long addr, u64 *pfns, unsigned long npages,
+		      unsigned int page_shift);
 #else /* IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM) */
 static inline void nouveau_svm_init(struct nouveau_drm *drm) {}
 static inline void nouveau_svm_fini(struct nouveau_drm *drm) {}
-- 
2.50.1



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-16 12:21 ` [v6 01/15] mm/zone_device: support large zone device private folios Balbir Singh
@ 2025-09-18  2:49   ` Zi Yan
  2025-09-19  5:01     ` Balbir Singh
  2025-09-24 10:55     ` David Hildenbrand
  0 siblings, 2 replies; 57+ messages in thread
From: Zi Yan @ 2025-09-18  2:49 UTC (permalink / raw)
  To: Balbir Singh, David Hildenbrand, Alistair Popple
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 16 Sep 2025, at 8:21, Balbir Singh wrote:

> Add routines to support allocation of large order zone device folios
> and helper functions for zone device folios, to check if a folio is
> device private and helpers for setting zone device data.
>
> When large folios are used, the existing page_free() callback in
> pgmap is called when the folio is freed, this is true for both
> PAGE_SIZE and higher order pages.
>
> Zone device private large folios do not support deferred split and
> scan like normal THP folios.
>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>  include/linux/memremap.h | 10 +++++++++-
>  mm/memremap.c            | 34 +++++++++++++++++++++-------------
>  mm/rmap.c                |  6 +++++-
>  3 files changed, 35 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> index e5951ba12a28..9c20327c2be5 100644
> --- a/include/linux/memremap.h
> +++ b/include/linux/memremap.h
> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>  }
>
>  #ifdef CONFIG_ZONE_DEVICE
> -void zone_device_page_init(struct page *page);
> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>  void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>  void memunmap_pages(struct dev_pagemap *pgmap);
>  void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>  bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>
>  unsigned long memremap_compat_align(void);
> +
> +static inline void zone_device_page_init(struct page *page)
> +{
> +	struct folio *folio = page_folio(page);
> +
> +	zone_device_folio_init(folio, 0);

I assume it is for legacy code, where only non-compound page exists?

It seems that you assume @page is always order-0, but there is no check
for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
above it would be useful to detect misuse.
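
For illustration, the suggested placement would be something like this
(untested sketch, reusing the wrapper body from the patch):

	struct folio *folio = page_folio(page);

	VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio);
	zone_device_folio_init(folio, 0);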

> +}
> +
>  #else
>  static inline void *devm_memremap_pages(struct device *dev,
>  		struct dev_pagemap *pgmap)
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 46cb1b0b6f72..a8481ebf94cc 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>  void free_zone_device_folio(struct folio *folio)
>  {
>  	struct dev_pagemap *pgmap = folio->pgmap;
> +	unsigned long nr = folio_nr_pages(folio);
> +	int i;
>
>  	if (WARN_ON_ONCE(!pgmap))
>  		return;
>
>  	mem_cgroup_uncharge(folio);
>
> -	/*
> -	 * Note: we don't expect anonymous compound pages yet. Once supported
> -	 * and we could PTE-map them similar to THP, we'd have to clear
> -	 * PG_anon_exclusive on all tail pages.
> -	 */
>  	if (folio_test_anon(folio)) {
> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> -		__ClearPageAnonExclusive(folio_page(folio, 0));
> +		for (i = 0; i < nr; i++)
> +			__ClearPageAnonExclusive(folio_page(folio, i));
> +	} else {
> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>  	}
>
>  	/*
> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>  	case MEMORY_DEVICE_COHERENT:
>  		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>  			break;
> -		pgmap->ops->page_free(folio_page(folio, 0));
> -		put_dev_pagemap(pgmap);
> +		pgmap->ops->page_free(&folio->page);
> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>  		break;
>
>  	case MEMORY_DEVICE_GENERIC:
> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>  	}
>  }
>
> -void zone_device_page_init(struct page *page)
> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>  {
> +	struct page *page = folio_page(folio, 0);

It is strange to see a folio is converted back to page in
a function called zone_device_folio_init().

> +
> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> +
>  	/*
>  	 * Drivers shouldn't be allocating pages after calling
>  	 * memunmap_pages().
>  	 */
> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
> -	set_page_count(page, 1);
> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> +	folio_set_count(folio, 1);
>  	lock_page(page);
> +
> +	if (order > 1) {
> +		prep_compound_page(page, order);
> +		folio_set_large_rmappable(folio);
> +	}

OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
is called.

I feel that your zone_device_page_init() and zone_device_folio_init()
implementations are inverse. They should follow the same pattern
as __alloc_pages_noprof() and __folio_alloc_noprof(), where
zone_device_page_init() does the actual initialization and
zone_device_folio_init() just convert a page to folio.

Something like:

void zone_device_page_init(struct page *page, unsigned int order)
{
	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);

	/*
	 * Drivers shouldn't be allocating pages after calling
	 * memunmap_pages().
	 */

    WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
	
	/*
	 * anonymous folio does not support order-1, high order file-backed folio
	 * is not supported at all.
	 */
	VM_WARN_ON_ONCE(order == 1);

	if (order > 1)
		prep_compound_page(page, order);

	/* page has to be compound head here */
	set_page_count(page, 1);
	lock_page(page);
}

void zone_device_folio_init(struct folio *folio, unsigned int order)
{
	struct page *page = folio_page(folio, 0);

	zone_device_page_init(page, order);
	page_rmappable_folio(page);
}

Or

struct folio *zone_device_folio_init(struct page *page, unsigned int order)
{
	zone_device_page_init(page, order);
	return page_rmappable_folio(page);
}


Then, it comes to free_zone_device_folio() above,
I feel that pgmap->ops->page_free() should take an additional order
parameter to free a compound page like free_frozen_pages().
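
Roughly this, just as a sketch of the idea (untested):

	void (*page_free)(struct page *page, unsigned int order);

so that free_zone_device_folio() could pass folio_order(folio) down and the
driver could free the whole compound page in one call.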


This is my impression after reading the patch and zone device page code.

Alistair and David can correct me if this is wrong, since I am new to
zone device page code.
	
>  }
> -EXPORT_SYMBOL_GPL(zone_device_page_init);
> +EXPORT_SYMBOL_GPL(zone_device_folio_init);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 34333ae3bd80..9a2aabfaea6f 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1769,9 +1769,13 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>  	 * the folio is unmapped and at least one page is still mapped.
>  	 *
>  	 * Check partially_mapped first to ensure it is a large folio.
> +	 *
> +	 * Device private folios do not support deferred splitting and
> +	 * shrinker based scanning of the folios to free.
>  	 */
>  	if (partially_mapped && folio_test_anon(folio) &&
> -	    !folio_test_partially_mapped(folio))
> +	    !folio_test_partially_mapped(folio) &&
> +	    !folio_is_device_private(folio))
>  		deferred_split_folio(folio, true);
>
>  	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
> -- 
> 2.50.1


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations
  2025-09-16 12:21 ` [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations Balbir Singh
@ 2025-09-18 18:45   ` Zi Yan
  2025-09-19  4:51     ` Balbir Singh
  2025-09-25  0:25   ` Alistair Popple
  1 sibling, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-18 18:45 UTC (permalink / raw)
  To: Balbir Singh
  Cc: linux-kernel, linux-mm, damon, dri-devel, Matthew Brost,
	David Hildenbrand, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Francois Dugast

On 16 Sep 2025, at 8:21, Balbir Singh wrote:

> Extend core huge page management functions to handle device-private THP
> entries.  This enables proper handling of large device-private folios in
> fundamental MM operations.
>
> The following functions have been updated:
>
> - copy_huge_pmd(): Handle device-private entries during fork/clone
> - zap_huge_pmd(): Properly free device-private THP during munmap
> - change_huge_pmd(): Support protection changes on device-private THP
> - __pte_offset_map(): Add device-private entry awareness
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>  include/linux/swapops.h | 32 +++++++++++++++++++++++
>  mm/huge_memory.c        | 56 ++++++++++++++++++++++++++++++++++-------
>  mm/pgtable-generic.c    |  2 +-
>  3 files changed, 80 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
> index 64ea151a7ae3..2687928a8146 100644
> --- a/include/linux/swapops.h
> +++ b/include/linux/swapops.h
> @@ -594,10 +594,42 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
>  }
>  #endif  /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
>
> +#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_ENABLE_THP_MIGRATION)
> +
> +/**
> + * is_pmd_device_private_entry() - Check if PMD contains a device private swap entry
> + * @pmd: The PMD to check
> + *
> + * Returns true if the PMD contains a swap entry that represents a device private
> + * page mapping. This is used for zone device private pages that have been
> + * swapped out but still need special handling during various memory management
> + * operations.
> + *
> + * Return: 1 if PMD contains device private entry, 0 otherwise
> + */
> +static inline int is_pmd_device_private_entry(pmd_t pmd)
> +{
> +	return is_swap_pmd(pmd) && is_device_private_entry(pmd_to_swp_entry(pmd));
> +}
> +
> +#else /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
> +
> +static inline int is_pmd_device_private_entry(pmd_t pmd)
> +{
> +	return 0;
> +}
> +
> +#endif /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
> +
>  static inline int non_swap_entry(swp_entry_t entry)
>  {
>  	return swp_type(entry) >= MAX_SWAPFILES;
>  }
>
> +static inline int is_pmd_non_present_folio_entry(pmd_t pmd)
> +{
> +	return is_pmd_migration_entry(pmd) || is_pmd_device_private_entry(pmd);
> +}
> +

non_present seems too vague. Maybe just open code it.
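
E.g. keeping the check open coded at the call sites, something like (sketch):

	VM_WARN_ON(!is_pmd_migration_entry(pmd) && !is_pmd_device_private_entry(pmd));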


>  #endif /* CONFIG_MMU */
>  #endif /* _LINUX_SWAPOPS_H */
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5acca24bbabb..a5e4c2aef191 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1703,17 +1703,45 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  	if (unlikely(is_swap_pmd(pmd))) {
>  		swp_entry_t entry = pmd_to_swp_entry(pmd);
>
> -		VM_BUG_ON(!is_pmd_migration_entry(pmd));
> -		if (!is_readable_migration_entry(entry)) {
> -			entry = make_readable_migration_entry(
> -							swp_offset(entry));
> +		VM_WARN_ON(!is_pmd_non_present_folio_entry(pmd));
> +
> +		if (is_writable_migration_entry(entry) ||
> +		    is_readable_exclusive_migration_entry(entry)) {
> +			entry = make_readable_migration_entry(swp_offset(entry));
>  			pmd = swp_entry_to_pmd(entry);
>  			if (pmd_swp_soft_dirty(*src_pmd))
>  				pmd = pmd_swp_mksoft_dirty(pmd);
>  			if (pmd_swp_uffd_wp(*src_pmd))
>  				pmd = pmd_swp_mkuffd_wp(pmd);
>  			set_pmd_at(src_mm, addr, src_pmd, pmd);
> +		} else if (is_device_private_entry(entry)) {
> +			/*
> +			 * For device private entries, since there are no
> +			 * read exclusive entries, writable = !readable
> +			 */
> +			if (is_writable_device_private_entry(entry)) {
> +				entry = make_readable_device_private_entry(swp_offset(entry));
> +				pmd = swp_entry_to_pmd(entry);
> +
> +				if (pmd_swp_soft_dirty(*src_pmd))
> +					pmd = pmd_swp_mksoft_dirty(pmd);
> +				if (pmd_swp_uffd_wp(*src_pmd))
> +					pmd = pmd_swp_mkuffd_wp(pmd);
> +				set_pmd_at(src_mm, addr, src_pmd, pmd);
> +			}
> +
> +			src_folio = pfn_swap_entry_folio(entry);
> +			VM_WARN_ON(!folio_test_large(src_folio));
> +
> +			folio_get(src_folio);
> +			/*
> +			 * folio_try_dup_anon_rmap_pmd does not fail for
> +			 * device private entries.
> +			 */
> +			folio_try_dup_anon_rmap_pmd(src_folio, &src_folio->page,
> +							dst_vma, src_vma);

folio_get() and folio_try_dup_anon_rmap_pmd() are needed, because
contrary to the migration entry case, this folio exists as
a device private one.

>  		}
> +
>  		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>  		mm_inc_nr_ptes(dst_mm);
>  		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
> @@ -2211,15 +2239,16 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  			folio_remove_rmap_pmd(folio, page, vma);
>  			WARN_ON_ONCE(folio_mapcount(folio) < 0);
>  			VM_BUG_ON_PAGE(!PageHead(page), page);
> -		} else if (thp_migration_supported()) {
> +		} else if (is_pmd_non_present_folio_entry(orig_pmd)) {
>  			swp_entry_t entry;
>
> -			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));

It implies thp_migration_supported() is true here. We could have
VM_WARN_ON_ONCE(!thp_migration_supported()), but that might be too much.

>  			entry = pmd_to_swp_entry(orig_pmd);
>  			folio = pfn_swap_entry_folio(entry);
>  			flush_needed = 0;
> -		} else
> -			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
> +
> +			if (!thp_migration_supported())
> +				WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
> +		}
>
>  		if (folio_test_anon(folio)) {
>  			zap_deposited_table(tlb->mm, pmd);
> @@ -2239,6 +2268,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  				folio_mark_accessed(folio);
>  		}
>
> +		if (folio_is_device_private(folio)) {
> +			folio_remove_rmap_pmd(folio, &folio->page, vma);
> +			WARN_ON_ONCE(folio_mapcount(folio) < 0);
> +			folio_put(folio);
> +		}
> +
>  		spin_unlock(ptl);
>  		if (flush_needed)
>  			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
> @@ -2367,7 +2402,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  		struct folio *folio = pfn_swap_entry_folio(entry);
>  		pmd_t newpmd;
>
> -		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
> +		VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd));
>  		if (is_writable_migration_entry(entry)) {
>  			/*
>  			 * A protection check is difficult so
> @@ -2380,6 +2415,9 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  			newpmd = swp_entry_to_pmd(entry);
>  			if (pmd_swp_soft_dirty(*pmd))
>  				newpmd = pmd_swp_mksoft_dirty(newpmd);
> +		} else if (is_writable_device_private_entry(entry)) {
> +			entry = make_readable_device_private_entry(swp_offset(entry));
> +			newpmd = swp_entry_to_pmd(entry);
>  		} else {
>  			newpmd = *pmd;
>  		}
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index 567e2d084071..0c847cdf4fd3 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -290,7 +290,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>
>  	if (pmdvalp)
>  		*pmdvalp = pmdval;
> -	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
> +	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
>  		goto nomap;
>  	if (unlikely(pmd_trans_huge(pmdval)))
>  		goto nomap;
> -- 
> 2.50.1

Otherwise, LGTM. Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations
  2025-09-18 18:45   ` Zi Yan
@ 2025-09-19  4:51     ` Balbir Singh
  2025-09-23  8:37       ` David Hildenbrand
  0 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-19  4:51 UTC (permalink / raw)
  To: Zi Yan
  Cc: linux-kernel, linux-mm, damon, dri-devel, Matthew Brost,
	David Hildenbrand, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Francois Dugast

On 9/19/25 04:45, Zi Yan wrote:
> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> 
>> Extend core huge page management functions to handle device-private THP
>> entries.  This enables proper handling of large device-private folios in
>> fundamental MM operations.
>>
>> The following functions have been updated:
>>
>> - copy_huge_pmd(): Handle device-private entries during fork/clone
>> - zap_huge_pmd(): Properly free device-private THP during munmap
>> - change_huge_pmd(): Support protection changes on device-private THP
>> - __pte_offset_map(): Add device-private entry awareness
>>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>  include/linux/swapops.h | 32 +++++++++++++++++++++++
>>  mm/huge_memory.c        | 56 ++++++++++++++++++++++++++++++++++-------
>>  mm/pgtable-generic.c    |  2 +-
>>  3 files changed, 80 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
>> index 64ea151a7ae3..2687928a8146 100644
>> --- a/include/linux/swapops.h
>> +++ b/include/linux/swapops.h
>> @@ -594,10 +594,42 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
>>  }
>>  #endif  /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
>>
>> +#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_ENABLE_THP_MIGRATION)
>> +
>> +/**
>> + * is_pmd_device_private_entry() - Check if PMD contains a device private swap entry
>> + * @pmd: The PMD to check
>> + *
>> + * Returns true if the PMD contains a swap entry that represents a device private
>> + * page mapping. This is used for zone device private pages that have been
>> + * swapped out but still need special handling during various memory management
>> + * operations.
>> + *
>> + * Return: 1 if PMD contains device private entry, 0 otherwise
>> + */
>> +static inline int is_pmd_device_private_entry(pmd_t pmd)
>> +{
>> +	return is_swap_pmd(pmd) && is_device_private_entry(pmd_to_swp_entry(pmd));
>> +}
>> +
>> +#else /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
>> +
>> +static inline int is_pmd_device_private_entry(pmd_t pmd)
>> +{
>> +	return 0;
>> +}
>> +
>> +#endif /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
>> +
>>  static inline int non_swap_entry(swp_entry_t entry)
>>  {
>>  	return swp_type(entry) >= MAX_SWAPFILES;
>>  }
>>
>> +static inline int is_pmd_non_present_folio_entry(pmd_t pmd)
>> +{
>> +	return is_pmd_migration_entry(pmd) || is_pmd_device_private_entry(pmd);
>> +}
>> +
> 
> non_present seems too vague. Maybe just open code it.

This was David's suggestion from the previous posting; there is is_swap_pfn_entry(),
but it's much larger than we would like for our use case.

> 
> 
>>  #endif /* CONFIG_MMU */
>>  #endif /* _LINUX_SWAPOPS_H */
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 5acca24bbabb..a5e4c2aef191 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1703,17 +1703,45 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>  	if (unlikely(is_swap_pmd(pmd))) {
>>  		swp_entry_t entry = pmd_to_swp_entry(pmd);
>>
>> -		VM_BUG_ON(!is_pmd_migration_entry(pmd));
>> -		if (!is_readable_migration_entry(entry)) {
>> -			entry = make_readable_migration_entry(
>> -							swp_offset(entry));
>> +		VM_WARN_ON(!is_pmd_non_present_folio_entry(pmd));
>> +
>> +		if (is_writable_migration_entry(entry) ||
>> +		    is_readable_exclusive_migration_entry(entry)) {
>> +			entry = make_readable_migration_entry(swp_offset(entry));
>>  			pmd = swp_entry_to_pmd(entry);
>>  			if (pmd_swp_soft_dirty(*src_pmd))
>>  				pmd = pmd_swp_mksoft_dirty(pmd);
>>  			if (pmd_swp_uffd_wp(*src_pmd))
>>  				pmd = pmd_swp_mkuffd_wp(pmd);
>>  			set_pmd_at(src_mm, addr, src_pmd, pmd);
>> +		} else if (is_device_private_entry(entry)) {
>> +			/*
>> +			 * For device private entries, since there are no
>> +			 * read exclusive entries, writable = !readable
>> +			 */
>> +			if (is_writable_device_private_entry(entry)) {
>> +				entry = make_readable_device_private_entry(swp_offset(entry));
>> +				pmd = swp_entry_to_pmd(entry);
>> +
>> +				if (pmd_swp_soft_dirty(*src_pmd))
>> +					pmd = pmd_swp_mksoft_dirty(pmd);
>> +				if (pmd_swp_uffd_wp(*src_pmd))
>> +					pmd = pmd_swp_mkuffd_wp(pmd);
>> +				set_pmd_at(src_mm, addr, src_pmd, pmd);
>> +			}
>> +
>> +			src_folio = pfn_swap_entry_folio(entry);
>> +			VM_WARN_ON(!folio_test_large(src_folio));
>> +
>> +			folio_get(src_folio);
>> +			/*
>> +			 * folio_try_dup_anon_rmap_pmd does not fail for
>> +			 * device private entries.
>> +			 */
>> +			folio_try_dup_anon_rmap_pmd(src_folio, &src_folio->page,
>> +							dst_vma, src_vma);
> 
> folio_get() and folio_try_dup_anon_rmap_pmd() are needed, because
> contrary to the migration entry case, this folio exists as
> a device private one.
> 

Is that a question?

>>  		}
>> +
>>  		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>>  		mm_inc_nr_ptes(dst_mm);
>>  		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
>> @@ -2211,15 +2239,16 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  			folio_remove_rmap_pmd(folio, page, vma);
>>  			WARN_ON_ONCE(folio_mapcount(folio) < 0);
>>  			VM_BUG_ON_PAGE(!PageHead(page), page);
>> -		} else if (thp_migration_supported()) {
>> +		} else if (is_pmd_non_present_folio_entry(orig_pmd)) {
>>  			swp_entry_t entry;
>>
>> -			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
> 
> It implies thp_migration_supported() is true here. We could have
> VM_WARN_ON_ONCE(!thp_migration_supported()), but that might be too much.
> 

Yes, since we've validated that this is a pmd migration or device
private entry.

>>  			entry = pmd_to_swp_entry(orig_pmd);
>>  			folio = pfn_swap_entry_folio(entry);
>>  			flush_needed = 0;
>> -		} else
>> -			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
>> +
>> +			if (!thp_migration_supported())
>> +				WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
>> +		}
>>
>>  		if (folio_test_anon(folio)) {
>>  			zap_deposited_table(tlb->mm, pmd);
>> @@ -2239,6 +2268,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  				folio_mark_accessed(folio);
>>  		}
>>
>> +		if (folio_is_device_private(folio)) {
>> +			folio_remove_rmap_pmd(folio, &folio->page, vma);
>> +			WARN_ON_ONCE(folio_mapcount(folio) < 0);
>> +			folio_put(folio);
>> +		}
>> +
>>  		spin_unlock(ptl);
>>  		if (flush_needed)
>>  			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
>> @@ -2367,7 +2402,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  		struct folio *folio = pfn_swap_entry_folio(entry);
>>  		pmd_t newpmd;
>>
>> -		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
>> +		VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd));
>>  		if (is_writable_migration_entry(entry)) {
>>  			/*
>>  			 * A protection check is difficult so
>> @@ -2380,6 +2415,9 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>  			newpmd = swp_entry_to_pmd(entry);
>>  			if (pmd_swp_soft_dirty(*pmd))
>>  				newpmd = pmd_swp_mksoft_dirty(newpmd);
>> +		} else if (is_writable_device_private_entry(entry)) {
>> +			entry = make_readable_device_private_entry(swp_offset(entry));
>> +			newpmd = swp_entry_to_pmd(entry);
>>  		} else {
>>  			newpmd = *pmd;
>>  		}
>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>> index 567e2d084071..0c847cdf4fd3 100644
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
>> @@ -290,7 +290,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>>
>>  	if (pmdvalp)
>>  		*pmdvalp = pmdval;
>> -	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
>> +	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
>>  		goto nomap;
>>  	if (unlikely(pmd_trans_huge(pmdval)))
>>  		goto nomap;
>> -- 
>> 2.50.1
> 
> Otherwise, LGTM. Acked-by: Zi Yan <ziy@nvidia.com>
> 

Thanks Zi!
Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-18  2:49   ` Zi Yan
@ 2025-09-19  5:01     ` Balbir Singh
  2025-09-19 13:26       ` Zi Yan
  2025-09-24 10:55     ` David Hildenbrand
  1 sibling, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-19  5:01 UTC (permalink / raw)
  To: Zi Yan, David Hildenbrand, Alistair Popple
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/18/25 12:49, Zi Yan wrote:
> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> 
>> Add routines to support allocation of large order zone device folios
>> and helper functions for zone device folios, to check if a folio is
>> device private and helpers for setting zone device data.
>>
>> When large folios are used, the existing page_free() callback in
>> pgmap is called when the folio is freed, this is true for both
>> PAGE_SIZE and higher order pages.
>>
>> Zone device private large folios do not support deferred split and
>> scan like normal THP folios.
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>  include/linux/memremap.h | 10 +++++++++-
>>  mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>  mm/rmap.c                |  6 +++++-
>>  3 files changed, 35 insertions(+), 15 deletions(-)
>>
>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>> index e5951ba12a28..9c20327c2be5 100644
>> --- a/include/linux/memremap.h
>> +++ b/include/linux/memremap.h
>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>  }
>>
>>  #ifdef CONFIG_ZONE_DEVICE
>> -void zone_device_page_init(struct page *page);
>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>  void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>  void memunmap_pages(struct dev_pagemap *pgmap);
>>  void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>  bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>
>>  unsigned long memremap_compat_align(void);
>> +
>> +static inline void zone_device_page_init(struct page *page)
>> +{
>> +	struct folio *folio = page_folio(page);
>> +
>> +	zone_device_folio_init(folio, 0);
> 
> I assume it is for legacy code, where only non-compound page exists?
> 
> It seems that you assume @page is always order-0, but there is no check
> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
> above it would be useful to detect misuse.
> 
>> +}
>> +
>>  #else
>>  static inline void *devm_memremap_pages(struct device *dev,
>>  		struct dev_pagemap *pgmap)
>> diff --git a/mm/memremap.c b/mm/memremap.c
>> index 46cb1b0b6f72..a8481ebf94cc 100644
>> --- a/mm/memremap.c
>> +++ b/mm/memremap.c
>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>  void free_zone_device_folio(struct folio *folio)
>>  {
>>  	struct dev_pagemap *pgmap = folio->pgmap;
>> +	unsigned long nr = folio_nr_pages(folio);
>> +	int i;
>>
>>  	if (WARN_ON_ONCE(!pgmap))
>>  		return;
>>
>>  	mem_cgroup_uncharge(folio);
>>
>> -	/*
>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>> -	 * PG_anon_exclusive on all tail pages.
>> -	 */
>>  	if (folio_test_anon(folio)) {
>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>> +		for (i = 0; i < nr; i++)
>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>> +	} else {
>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>  	}
>>
>>  	/*
>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>  	case MEMORY_DEVICE_COHERENT:
>>  		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>  			break;
>> -		pgmap->ops->page_free(folio_page(folio, 0));
>> -		put_dev_pagemap(pgmap);
>> +		pgmap->ops->page_free(&folio->page);
>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>  		break;
>>
>>  	case MEMORY_DEVICE_GENERIC:
>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>  	}
>>  }
>>
>> -void zone_device_page_init(struct page *page)
>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>  {
>> +	struct page *page = folio_page(folio, 0);
> 
> It is strange to see a folio is converted back to page in
> a function called zone_device_folio_init().
> 
>> +
>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>> +
>>  	/*
>>  	 * Drivers shouldn't be allocating pages after calling
>>  	 * memunmap_pages().
>>  	 */
>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>> -	set_page_count(page, 1);
>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>> +	folio_set_count(folio, 1);
>>  	lock_page(page);
>> +
>> +	if (order > 1) {
>> +		prep_compound_page(page, order);
>> +		folio_set_large_rmappable(folio);
>> +	}
> 
> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
> is called.
> 
> I feel that your zone_device_page_init() and zone_device_folio_init()
> implementations are inverse. They should follow the same pattern
> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
> zone_device_page_init() does the actual initialization and
> zone_device_folio_init() just convert a page to folio.
> 
> Something like:
> 
> void zone_device_page_init(struct page *page, unsigned int order)
> {
> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> 
> 	/*
> 	 * Drivers shouldn't be allocating pages after calling
> 	 * memunmap_pages().
> 	 */
> 
>     WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> 	
> 	/*
> 	 * anonymous folio does not support order-1, high order file-backed folio
> 	 * is not supported at all.
> 	 */
> 	VM_WARN_ON_ONCE(order == 1);
> 
> 	if (order > 1)
> 		prep_compound_page(page, order);
> 
> 	/* page has to be compound head here */
> 	set_page_count(page, 1);
> 	lock_page(page);
> }
> 
> void zone_device_folio_init(struct folio *folio, unsigned int order)
> {
> 	struct page *page = folio_page(folio, 0);
> 
> 	zone_device_page_init(page, order);
> 	page_rmappable_folio(page);
> }
> 
> Or
> 
> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
> {
> 	zone_device_page_init(page, order);
> 	return page_rmappable_folio(page);
> }
> 
> 
> Then, it comes to free_zone_device_folio() above,
> I feel that pgmap->ops->page_free() should take an additional order
> parameter to free a compound page like free_frozen_pages().
> 
> 
> This is my impression after reading the patch and zone device page code.
> 
> Alistair and David can correct me if this is wrong, since I am new to
> zone device page code.
> 	

Thanks, I did not want to change zone_device_page_init() for several
drivers (outside my test scope) that already assume it has an order size of 0.

Balbir Singh


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-19  5:01     ` Balbir Singh
@ 2025-09-19 13:26       ` Zi Yan
  2025-09-23  3:47         ` Balbir Singh
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-19 13:26 UTC (permalink / raw)
  To: Balbir Singh, Alistair Popple
  Cc: David Hildenbrand, linux-kernel, linux-mm, damon, dri-devel,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 19 Sep 2025, at 1:01, Balbir Singh wrote:

> On 9/18/25 12:49, Zi Yan wrote:
>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>
>>> Add routines to support allocation of large order zone device folios
>>> and helper functions for zone device folios, to check if a folio is
>>> device private and helpers for setting zone device data.
>>>
>>> When large folios are used, the existing page_free() callback in
>>> pgmap is called when the folio is freed, this is true for both
>>> PAGE_SIZE and higher order pages.
>>>
>>> Zone device private large folios do not support deferred split and
>>> scan like normal THP folios.
>>>
>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>> Cc: Byungchul Park <byungchul@sk.com>
>>> Cc: Gregory Price <gourry@gourry.net>
>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>> Cc: Alistair Popple <apopple@nvidia.com>
>>> Cc: Oscar Salvador <osalvador@suse.de>
>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>> Cc: Nico Pache <npache@redhat.com>
>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>> Cc: Dev Jain <dev.jain@arm.com>
>>> Cc: Barry Song <baohua@kernel.org>
>>> Cc: Lyude Paul <lyude@redhat.com>
>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>> Cc: David Airlie <airlied@gmail.com>
>>> Cc: Simona Vetter <simona@ffwll.ch>
>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>> ---
>>>  include/linux/memremap.h | 10 +++++++++-
>>>  mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>  mm/rmap.c                |  6 +++++-
>>>  3 files changed, 35 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>> index e5951ba12a28..9c20327c2be5 100644
>>> --- a/include/linux/memremap.h
>>> +++ b/include/linux/memremap.h
>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>  }
>>>
>>>  #ifdef CONFIG_ZONE_DEVICE
>>> -void zone_device_page_init(struct page *page);
>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>  void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>  void memunmap_pages(struct dev_pagemap *pgmap);
>>>  void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>  bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>
>>>  unsigned long memremap_compat_align(void);
>>> +
>>> +static inline void zone_device_page_init(struct page *page)
>>> +{
>>> +	struct folio *folio = page_folio(page);
>>> +
>>> +	zone_device_folio_init(folio, 0);
>>
>> I assume it is for legacy code, where only non-compound page exists?
>>
>> It seems that you assume @page is always order-0, but there is no check
>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>> above it would be useful to detect misuse.
>>
>>> +}
>>> +
>>>  #else
>>>  static inline void *devm_memremap_pages(struct device *dev,
>>>  		struct dev_pagemap *pgmap)
>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>> --- a/mm/memremap.c
>>> +++ b/mm/memremap.c
>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>  void free_zone_device_folio(struct folio *folio)
>>>  {
>>>  	struct dev_pagemap *pgmap = folio->pgmap;
>>> +	unsigned long nr = folio_nr_pages(folio);
>>> +	int i;
>>>
>>>  	if (WARN_ON_ONCE(!pgmap))
>>>  		return;
>>>
>>>  	mem_cgroup_uncharge(folio);
>>>
>>> -	/*
>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>> -	 * PG_anon_exclusive on all tail pages.
>>> -	 */
>>>  	if (folio_test_anon(folio)) {
>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>> +		for (i = 0; i < nr; i++)
>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>> +	} else {
>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>  	}
>>>
>>>  	/*
>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>  	case MEMORY_DEVICE_COHERENT:
>>>  		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>  			break;
>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>> -		put_dev_pagemap(pgmap);
>>> +		pgmap->ops->page_free(&folio->page);
>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>  		break;
>>>
>>>  	case MEMORY_DEVICE_GENERIC:
>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>  	}
>>>  }
>>>
>>> -void zone_device_page_init(struct page *page)
>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>  {
>>> +	struct page *page = folio_page(folio, 0);
>>
>> It is strange to see a folio is converted back to page in
>> a function called zone_device_folio_init().
>>
>>> +
>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>> +
>>>  	/*
>>>  	 * Drivers shouldn't be allocating pages after calling
>>>  	 * memunmap_pages().
>>>  	 */
>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>> -	set_page_count(page, 1);
>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>> +	folio_set_count(folio, 1);
>>>  	lock_page(page);
>>> +
>>> +	if (order > 1) {
>>> +		prep_compound_page(page, order);
>>> +		folio_set_large_rmappable(folio);
>>> +	}
>>
>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>> is called.
>>
>> I feel that your zone_device_page_init() and zone_device_folio_init()
>> implementations are inverse. They should follow the same pattern
>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>> zone_device_page_init() does the actual initialization and
>> zone_device_folio_init() just convert a page to folio.
>>
>> Something like:
>>
>> void zone_device_page_init(struct page *page, unsigned int order)
>> {
>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>
>> 	/*
>> 	 * Drivers shouldn't be allocating pages after calling
>> 	 * memunmap_pages().
>> 	 */
>>
>>     WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>> 	
>> 	/*
>> 	 * anonymous folio does not support order-1, high order file-backed folio
>> 	 * is not supported at all.
>> 	 */
>> 	VM_WARN_ON_ONCE(order == 1);
>>
>> 	if (order > 1)
>> 		prep_compound_page(page, order);
>>
>> 	/* page has to be compound head here */
>> 	set_page_count(page, 1);
>> 	lock_page(page);
>> }
>>
>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>> {
>> 	struct page *page = folio_page(folio, 0);
>>
>> 	zone_device_page_init(page, order);
>> 	page_rmappable_folio(page);
>> }
>>
>> Or
>>
>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>> {
>> 	zone_device_page_init(page, order);
>> 	return page_rmappable_folio(page);
>> }
>>
>>
>> Then, it comes to free_zone_device_folio() above,
>> I feel that pgmap->ops->page_free() should take an additional order
>> parameter to free a compound page like free_frozen_pages().
>>
>>
>> This is my impression after reading the patch and zone device page code.
>>
>> Alistair and David can correct me if this is wrong, since I am new to
>> zone device page code.
>> 	
>
> Thanks, I did not want to change zone_device_page_init() for several
> drivers (outside my test scope) that already assume it has an order size of 0.

But my proposed zone_device_page_init() should still work for order-0
pages. You would just need to change the call sites to pass 0 as the new
order parameter.


One strange thing I found in the original zone_device_page_init() is
the use of page_pgmap() in
WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order)).
page_pgmap() calls page_folio() on the given page to access the pgmap field,
and that field is only available in struct folio. The code initializes a
struct page, but in the middle it suddenly finds that the page is actually a
folio, and then treats it as a page afterwards. I wonder if this can be done
better.
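
For reference, the pattern in question is roughly this (simplified sketch,
not the exact source):

	static inline struct dev_pagemap *page_pgmap(const struct page *page)
	{
		return page_folio(page)->pgmap;
	}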

This might be a question to Alistair, since he made the change.

Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 03/15] mm/rmap: extend rmap and migration support device-private entries
  2025-09-16 12:21 ` [v6 03/15] mm/rmap: extend rmap and migration support device-private entries Balbir Singh
@ 2025-09-22 20:13   ` Zi Yan
  2025-09-23  3:39     ` Balbir Singh
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-22 20:13 UTC (permalink / raw)
  To: Balbir Singh
  Cc: linux-kernel, linux-mm, damon, dri-devel, SeongJae Park,
	David Hildenbrand, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 16 Sep 2025, at 8:21, Balbir Singh wrote:

> Add device-private THP support to reverse mapping infrastructure, enabling
> proper handling during migration and walk operations.
>
> The key changes are:
> - add_migration_pmd()/remove_migration_pmd(): Handle device-private
>   entries during folio migration and splitting
> - page_vma_mapped_walk(): Recognize device-private THP entries during
>   VMA traversal operations
>
> This change supports folio splitting and migration operations on
> device-private entries.
>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Reviewed-by: SeongJae Park <sj@kernel.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>  mm/damon/ops-common.c | 20 +++++++++++++++++---
>  mm/huge_memory.c      | 16 +++++++++++++++-
>  mm/page_idle.c        |  7 +++++--
>  mm/page_vma_mapped.c  |  7 +++++++
>  mm/rmap.c             | 21 +++++++++++++++++----
>  5 files changed, 61 insertions(+), 10 deletions(-)
>
> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
> index 998c5180a603..eda4de553611 100644
> --- a/mm/damon/ops-common.c
> +++ b/mm/damon/ops-common.c
> @@ -75,12 +75,24 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
>  void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -	struct folio *folio = damon_get_folio(pmd_pfn(pmdp_get(pmd)));
> +	pmd_t pmdval = pmdp_get(pmd);
> +	struct folio *folio;
> +	bool young = false;
> +	unsigned long pfn;
> +
> +	if (likely(pmd_present(pmdval)))
> +		pfn = pmd_pfn(pmdval);
> +	else
> +		pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
>
> +	folio = damon_get_folio(pfn);
>  	if (!folio)
>  		return;
>
> -	if (pmdp_clear_young_notify(vma, addr, pmd))
> +	if (likely(pmd_present(pmdval)))
> +		young |= pmdp_clear_young_notify(vma, addr, pmd);
> +	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);

This should be HPAGE_PMD_SIZE (it is guarded in CONFIG_TRANSPARENT_HUGEPAGE,
so HPAGE_PMD_SIZE will not trigger a build bug like the one below).
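
i.e. something like (untested):

	young |= mmu_notifier_clear_young(vma->vm_mm, addr,
					  addr + HPAGE_PMD_SIZE);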

> +	if (young)
>  		folio_set_young(folio);
>
>  	folio_set_idle(folio);
> @@ -203,7 +215,9 @@ static bool damon_folio_young_one(struct folio *folio,
>  				mmu_notifier_test_young(vma->vm_mm, addr);
>  		} else {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -			*accessed = pmd_young(pmdp_get(pvmw.pmd)) ||
> +			pmd_t pmd = pmdp_get(pvmw.pmd);
> +
> +			*accessed = (pmd_present(pmd) && pmd_young(pmd)) ||
>  				!folio_test_idle(folio) ||
>  				mmu_notifier_test_young(vma->vm_mm, addr);
>  #else
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a5e4c2aef191..78166db72f4d 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -4637,7 +4637,10 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>  		return 0;
>
>  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
> -	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
> +	if (unlikely(!pmd_present(*pvmw->pmd)))
> +		pmdval = pmdp_huge_get_and_clear(vma->vm_mm, address, pvmw->pmd);
> +	else
> +		pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
>
>  	/* See folio_try_share_anon_rmap_pmd(): invalidate PMD first. */
>  	anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(page);
> @@ -4687,6 +4690,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
>  	entry = pmd_to_swp_entry(*pvmw->pmd);
>  	folio_get(folio);
>  	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
> +
> +	if (folio_is_device_private(folio)) {
> +		if (pmd_write(pmde))
> +			entry = make_writable_device_private_entry(
> +							page_to_pfn(new));
> +		else
> +			entry = make_readable_device_private_entry(
> +							page_to_pfn(new));
> +		pmde = swp_entry_to_pmd(entry);
> +	}
> +
>  	if (pmd_swp_soft_dirty(*pvmw->pmd))
>  		pmde = pmd_mksoft_dirty(pmde);
>  	if (is_writable_migration_entry(entry))
> diff --git a/mm/page_idle.c b/mm/page_idle.c
> index a82b340dc204..3bf0fbe05cc2 100644
> --- a/mm/page_idle.c
> +++ b/mm/page_idle.c
> @@ -71,8 +71,11 @@ static bool page_idle_clear_pte_refs_one(struct folio *folio,
>  				referenced |= ptep_test_and_clear_young(vma, addr, pvmw.pte);
>  			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
>  		} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> -			if (pmdp_clear_young_notify(vma, addr, pvmw.pmd))
> -				referenced = true;
> +			pmd_t pmdval = pmdp_get(pvmw.pmd);
> +
> +			if (likely(pmd_present(pmdval)))
> +				referenced |= pmdp_clear_young_notify(vma, addr, pvmw.pmd);
> +			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);

This should be HPAGE_PMD_SIZE (or PMD_SIZE, since the code is not compiled
out when CONFIG_TRANSPARENT_HUGEPAGE is not selected and HPAGE_PMD_SIZE
will cause a build bug when CONFIG_PGTABLE_HAS_HUGE_LEAVES is not selected).
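
i.e. something like (untested):

	referenced |= mmu_notifier_clear_young(vma->vm_mm, addr,
					       addr + PMD_SIZE);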

>  		} else {
>  			/* unexpected pmd-mapped page? */
>  			WARN_ON_ONCE(1);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index e981a1a292d2..159953c590cc 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -277,6 +277,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  			 * cannot return prematurely, while zap_huge_pmd() has
>  			 * cleared *pmd but not decremented compound_mapcount().
>  			 */
> +			swp_entry_t entry = pmd_to_swp_entry(pmde);
> +
> +			if (is_device_private_entry(entry)) {
> +				pvmw->ptl = pmd_lock(mm, pvmw->pmd);
> +				return true;
> +			}
> +
>  			if ((pvmw->flags & PVMW_SYNC) &&
>  			    thp_vma_suitable_order(vma, pvmw->address,
>  						   PMD_ORDER) &&
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 9a2aabfaea6f..080fc4048431 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1063,9 +1063,11 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>  		} else {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  			pmd_t *pmd = pvmw->pmd;
> -			pmd_t entry;
> +			pmd_t entry = pmdp_get(pmd);
>
> -			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))

It is better to add a similar comment as the one above !pte_present().
Something like:
PFN swap PMDs, such as ...
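
or, spelled out a bit more (just one possible wording, purely illustrative):

	/*
	 * PFN swap PMDs, such as device-private entries, are never
	 * writable or dirty from the CPU's point of view, so there is
	 * nothing to clean here.
	 */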


> +			if (!pmd_present(entry))
> +				continue;
> +			if (!pmd_dirty(entry) && !pmd_write(entry))
>  				continue;
>
>  			flush_cache_range(vma, address,
> @@ -2330,6 +2332,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  	while (page_vma_mapped_walk(&pvmw)) {
>  		/* PMD-mapped THP migration entry */
>  		if (!pvmw.pte) {
> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> +			unsigned long pfn;
> +			pmd_t pmdval;
> +#endif
> +

This looks ugly. IIRC, we now can put variable definition in the middle.
Maybe for this case, these two can be moved to the below ifdef region.
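
e.g. something like (illustrative only):

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
			pmd_t pmdval = pmdp_get(pvmw.pmd);
			unsigned long pfn;

			if (likely(pmd_present(pmdval)))
				pfn = pmd_pfn(pmdval);
			else
				pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));

			subpage = folio_page(folio, pfn - folio_pfn(folio));
			/* VM_BUG_ON_FOLIO() etc. continue unchanged */

so the two variables stay local to the only region that uses them.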

>  			if (flags & TTU_SPLIT_HUGE_PMD) {
>  				split_huge_pmd_locked(vma, pvmw.address,
>  						      pvmw.pmd, true);
> @@ -2338,8 +2345,14 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  				break;
>  			}
>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> -			subpage = folio_page(folio,
> -				pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
> +			pmdval = pmdp_get(pvmw.pmd);
> +			if (likely(pmd_present(pmdval)))
> +				pfn = pmd_pfn(pmdval);
> +			else
> +				pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
> +
> +			subpage = folio_page(folio, pfn - folio_pfn(folio));
> +
>  			VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
>  					!folio_test_pmd_mappable(folio), folio);
>
> -- 
> 2.50.1

Otherwise, LGTM. Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-16 12:21 ` [v6 04/15] mm/huge_memory: implement device-private THP splitting Balbir Singh
@ 2025-09-22 21:09   ` Zi Yan
  2025-09-23  1:50     ` Balbir Singh
  2025-09-25 10:01   ` David Hildenbrand
  1 sibling, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-22 21:09 UTC (permalink / raw)
  To: Balbir Singh, David Hildenbrand
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Oscar Salvador, Lorenzo Stoakes, Baolin Wang, Liam R. Howlett,
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
	Mika Penttilä,
	Matthew Brost, Francois Dugast

On 16 Sep 2025, at 8:21, Balbir Singh wrote:

> Add support for splitting device-private THP folios, enabling fallback
> to smaller page sizes when large page allocation or migration fails.
>
> Key changes:
> - split_huge_pmd(): Handle device-private PMD entries during splitting
> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
>   don't support shared zero page semantics
>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>  mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
>  1 file changed, 98 insertions(+), 40 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 78166db72f4d..5291ee155a02 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  	struct page *page;
>  	pgtable_t pgtable;
>  	pmd_t old_pmd, _pmd;
> -	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
> -	bool anon_exclusive = false, dirty = false;
> +	bool soft_dirty, uffd_wp = false, young = false, write = false;
> +	bool anon_exclusive = false, dirty = false, present = false;
>  	unsigned long addr;
>  	pte_t *pte;
>  	int i;
> +	swp_entry_t swp_entry;
>
>  	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
>  	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
>  	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
> -	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
> +
> +	VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd) && !pmd_trans_huge(*pmd));
>
>  	count_vm_event(THP_SPLIT_PMD);
>
> @@ -2929,20 +2931,47 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		return __split_huge_zero_page_pmd(vma, haddr, pmd);
>  	}
>
> -	pmd_migration = is_pmd_migration_entry(*pmd);
> -	if (unlikely(pmd_migration)) {
> -		swp_entry_t entry;
>
> +	present = pmd_present(*pmd);
> +	if (is_pmd_migration_entry(*pmd)) {
>  		old_pmd = *pmd;
> -		entry = pmd_to_swp_entry(old_pmd);
> -		page = pfn_swap_entry_to_page(entry);
> -		write = is_writable_migration_entry(entry);
> +		swp_entry = pmd_to_swp_entry(old_pmd);
> +		page = pfn_swap_entry_to_page(swp_entry);
> +		folio = page_folio(page);
> +
> +		soft_dirty = pmd_swp_soft_dirty(old_pmd);
> +		uffd_wp = pmd_swp_uffd_wp(old_pmd);
> +
> +		write = is_writable_migration_entry(swp_entry);
>  		if (PageAnon(page))
> -			anon_exclusive = is_readable_exclusive_migration_entry(entry);
> -		young = is_migration_entry_young(entry);
> -		dirty = is_migration_entry_dirty(entry);
> +			anon_exclusive = is_readable_exclusive_migration_entry(swp_entry);
> +		young = is_migration_entry_young(swp_entry);
> +		dirty = is_migration_entry_dirty(swp_entry);
> +	} else if (is_pmd_device_private_entry(*pmd)) {
> +		old_pmd = *pmd;
> +		swp_entry = pmd_to_swp_entry(old_pmd);
> +		page = pfn_swap_entry_to_page(swp_entry);
> +		folio = page_folio(page);
> +
>  		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
> +
> +		write = is_writable_device_private_entry(swp_entry);
> +		anon_exclusive = PageAnonExclusive(page);
> +
> +		if (freeze && anon_exclusive &&
> +		    folio_try_share_anon_rmap_pmd(folio, page))
> +			freeze = false;

Why is it OK to change the freeze request? OK, it is replicating
the code for present PMD folios. Either add a comment to point
to the explanation in the comment below, or move
“if (is_pmd_device_private_entry(*pmd))“ branch in the else below
to deduplicate this code.

> +		if (!freeze) {
> +			rmap_t rmap_flags = RMAP_NONE;
> +
> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
> +			if (anon_exclusive)
> +				rmap_flags |= RMAP_EXCLUSIVE;
> +
> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
> +						 vma, haddr, rmap_flags);
> +		}
>  	} else {
>  		/*
>  		 * Up to this point the pmd is present and huge and userland has
> @@ -3026,32 +3055,57 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  	 * Note that NUMA hinting access restrictions are not transferred to
>  	 * avoid any possibility of altering permissions across VMAs.
>  	 */
> -	if (freeze || pmd_migration) {
> -		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
> -			pte_t entry;
> -			swp_entry_t swp_entry;
> -
> -			if (write)
> -				swp_entry = make_writable_migration_entry(
> -							page_to_pfn(page + i));
> -			else if (anon_exclusive)
> -				swp_entry = make_readable_exclusive_migration_entry(
> -							page_to_pfn(page + i));
> -			else
> -				swp_entry = make_readable_migration_entry(
> -							page_to_pfn(page + i));
> -			if (young)
> -				swp_entry = make_migration_entry_young(swp_entry);
> -			if (dirty)
> -				swp_entry = make_migration_entry_dirty(swp_entry);
> -			entry = swp_entry_to_pte(swp_entry);
> -			if (soft_dirty)
> -				entry = pte_swp_mksoft_dirty(entry);
> -			if (uffd_wp)
> -				entry = pte_swp_mkuffd_wp(entry);
> +	if (freeze || !present) {
> +		pte_t entry;
>
> -			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
> -			set_pte_at(mm, addr, pte + i, entry);
> +		if (freeze || is_migration_entry(swp_entry)) {
>
<snip>
> +		} else {
<snip>
>  		}
>  	} else {
>  		pte_t entry;

David already pointed this out in v5. It can be done such as:

if (freeze || pmd_migration) {
...
} else if (is_pmd_device_private_entry(old_pmd)) {
...
} else {
/* for present, non freeze case */
}

> @@ -3076,7 +3130,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  	}
>  	pte_unmap(pte);
>
> -	if (!pmd_migration)
> +	if (!is_pmd_migration_entry(*pmd))
>  		folio_remove_rmap_pmd(folio, page, vma);
>  	if (freeze)
>  		put_page(page);
> @@ -3089,7 +3143,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>  			   pmd_t *pmd, bool freeze)
>  {
>  	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
> -	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
> +	if (pmd_trans_huge(*pmd) || is_pmd_non_present_folio_entry(*pmd))
>  		__split_huge_pmd_locked(vma, pmd, address, freeze);
>  }
>
> @@ -3268,6 +3322,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>  	VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
>  	lockdep_assert_held(&lruvec->lru_lock);
>
> +	if (folio_is_device_private(folio))
> +		return;
> +
>  	if (list) {
>  		/* page reclaim is reclaiming a huge page */
>  		VM_WARN_ON(folio_test_lru(folio));
> @@ -3885,8 +3942,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>  	if (nr_shmem_dropped)
>  		shmem_uncharge(mapping->host, nr_shmem_dropped);
>
> -	if (!ret && is_anon)
> +	if (!ret && is_anon && !folio_is_device_private(folio))
>  		remap_flags = RMP_USE_SHARED_ZEROPAGE;
> +

You should remove this and add

if (folio_is_device_private(folio))
	return false;

in try_to_map_unused_to_zeropage(). Otherwise, no one would know
device private folios need to be excluded from mapping unused to
zero page.

>  	remap_page(folio, 1 << order, remap_flags);
>
>  	/*
> -- 
> 2.50.1


Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-22 21:09   ` Zi Yan
@ 2025-09-23  1:50     ` Balbir Singh
  2025-09-23  2:09       ` Zi Yan
  0 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-23  1:50 UTC (permalink / raw)
  To: Zi Yan, David Hildenbrand
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Oscar Salvador, Lorenzo Stoakes, Baolin Wang, Liam R. Howlett,
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
	Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/23/25 07:09, Zi Yan wrote:
> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> 
>> Add support for splitting device-private THP folios, enabling fallback
>> to smaller page sizes when large page allocation or migration fails.
>>
>> Key changes:
>> - split_huge_pmd(): Handle device-private PMD entries during splitting
>> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
>> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
>>   don't support shared zero page semantics
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>  mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
>>  1 file changed, 98 insertions(+), 40 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 78166db72f4d..5291ee155a02 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>  	struct page *page;
>>  	pgtable_t pgtable;
>>  	pmd_t old_pmd, _pmd;
>> -	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
>> -	bool anon_exclusive = false, dirty = false;
>> +	bool soft_dirty, uffd_wp = false, young = false, write = false;
>> +	bool anon_exclusive = false, dirty = false, present = false;
>>  	unsigned long addr;
>>  	pte_t *pte;
>>  	int i;
>> +	swp_entry_t swp_entry;
>>
>>  	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
>>  	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
>>  	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
>> -	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
>> +
>> +	VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd) && !pmd_trans_huge(*pmd));
>>
>>  	count_vm_event(THP_SPLIT_PMD);
>>
>> @@ -2929,20 +2931,47 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>  		return __split_huge_zero_page_pmd(vma, haddr, pmd);
>>  	}
>>
>> -	pmd_migration = is_pmd_migration_entry(*pmd);
>> -	if (unlikely(pmd_migration)) {
>> -		swp_entry_t entry;
>>
>> +	present = pmd_present(*pmd);
>> +	if (is_pmd_migration_entry(*pmd)) {
>>  		old_pmd = *pmd;
>> -		entry = pmd_to_swp_entry(old_pmd);
>> -		page = pfn_swap_entry_to_page(entry);
>> -		write = is_writable_migration_entry(entry);
>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>> +		page = pfn_swap_entry_to_page(swp_entry);
>> +		folio = page_folio(page);
>> +
>> +		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>> +		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>> +
>> +		write = is_writable_migration_entry(swp_entry);
>>  		if (PageAnon(page))
>> -			anon_exclusive = is_readable_exclusive_migration_entry(entry);
>> -		young = is_migration_entry_young(entry);
>> -		dirty = is_migration_entry_dirty(entry);
>> +			anon_exclusive = is_readable_exclusive_migration_entry(swp_entry);
>> +		young = is_migration_entry_young(swp_entry);
>> +		dirty = is_migration_entry_dirty(swp_entry);
>> +	} else if (is_pmd_device_private_entry(*pmd)) {
>> +		old_pmd = *pmd;
>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>> +		page = pfn_swap_entry_to_page(swp_entry);
>> +		folio = page_folio(page);
>> +
>>  		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>> +
>> +		write = is_writable_device_private_entry(swp_entry);
>> +		anon_exclusive = PageAnonExclusive(page);
>> +
>> +		if (freeze && anon_exclusive &&
>> +		    folio_try_share_anon_rmap_pmd(folio, page))
>> +			freeze = false;
> 
> Why is it OK to change the freeze request? OK, it is replicating
> the code for present PMD folios. Either add a comment to point
> to the explanation in the comment below, or move
> “if (is_pmd_device_private_entry(*pmd))“ branch in the else below
> to deduplicate this code.

Similar to the code for present pages, ideally folio_try_share_anon_rmap_pmd()
should never fail.

> 
>> +		if (!freeze) {
>> +			rmap_t rmap_flags = RMAP_NONE;
>> +
>> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
>> +			if (anon_exclusive)
>> +				rmap_flags |= RMAP_EXCLUSIVE;
>> +
>> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
>> +						 vma, haddr, rmap_flags);
>> +		}
>>  	} else {
>>  		/*
>>  		 * Up to this point the pmd is present and huge and userland has
>> @@ -3026,32 +3055,57 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>  	 * Note that NUMA hinting access restrictions are not transferred to
>>  	 * avoid any possibility of altering permissions across VMAs.
>>  	 */
>> -	if (freeze || pmd_migration) {
>> -		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
>> -			pte_t entry;
>> -			swp_entry_t swp_entry;
>> -
>> -			if (write)
>> -				swp_entry = make_writable_migration_entry(
>> -							page_to_pfn(page + i));
>> -			else if (anon_exclusive)
>> -				swp_entry = make_readable_exclusive_migration_entry(
>> -							page_to_pfn(page + i));
>> -			else
>> -				swp_entry = make_readable_migration_entry(
>> -							page_to_pfn(page + i));
>> -			if (young)
>> -				swp_entry = make_migration_entry_young(swp_entry);
>> -			if (dirty)
>> -				swp_entry = make_migration_entry_dirty(swp_entry);
>> -			entry = swp_entry_to_pte(swp_entry);
>> -			if (soft_dirty)
>> -				entry = pte_swp_mksoft_dirty(entry);
>> -			if (uffd_wp)
>> -				entry = pte_swp_mkuffd_wp(entry);
>> +	if (freeze || !present) {
>> +		pte_t entry;
>>
>> -			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
>> -			set_pte_at(mm, addr, pte + i, entry);
>> +		if (freeze || is_migration_entry(swp_entry)) {
>>
> <snip>
>> +		} else {
> <snip>
>>  		}
>>  	} else {
>>  		pte_t entry;
> 
> David already pointed this out in v5. It can be done such as:
> 
> if (freeze || pmd_migration) {
> ...
> } else if (is_pmd_device_private_entry(old_pmd)) {
> ...

No.. freeze can be true for device private entries as well

> } else {
> /* for present, non freeze case */
> }
> 
>> @@ -3076,7 +3130,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>  	}
>>  	pte_unmap(pte);
>>
>> -	if (!pmd_migration)
>> +	if (!is_pmd_migration_entry(*pmd))
>>  		folio_remove_rmap_pmd(folio, page, vma);
>>  	if (freeze)
>>  		put_page(page);
>> @@ -3089,7 +3143,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>>  			   pmd_t *pmd, bool freeze)
>>  {
>>  	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
>> -	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
>> +	if (pmd_trans_huge(*pmd) || is_pmd_non_present_folio_entry(*pmd))
>>  		__split_huge_pmd_locked(vma, pmd, address, freeze);
>>  }
>>
>> @@ -3268,6 +3322,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>>  	VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
>>  	lockdep_assert_held(&lruvec->lru_lock);
>>
>> +	if (folio_is_device_private(folio))
>> +		return;
>> +
>>  	if (list) {
>>  		/* page reclaim is reclaiming a huge page */
>>  		VM_WARN_ON(folio_test_lru(folio));
>> @@ -3885,8 +3942,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>  	if (nr_shmem_dropped)
>>  		shmem_uncharge(mapping->host, nr_shmem_dropped);
>>
>> -	if (!ret && is_anon)
>> +	if (!ret && is_anon && !folio_is_device_private(folio))
>>  		remap_flags = RMP_USE_SHARED_ZEROPAGE;
>> +
> 
> You should remove this and add
> 
> if (folio_is_device_private(folio))
> 	return false;
> 
> in try_to_map_unused_to_zeropage(). Otherwise, no one would know
> device private folios need to be excluded from mapping unused to
> zero page.
> 

I had that up to v2 and then David asked me to remove it. FYI, this
is the only call site for RMP_USE_SHARED_ZEROPAGE.

>>  	remap_page(folio, 1 << order, remap_flags);
>>
>>  	/*
>> -- 
>> 2.50.1
> 
> 

Thanks for the review
Balbir



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-23  1:50     ` Balbir Singh
@ 2025-09-23  2:09       ` Zi Yan
  2025-09-23  4:04         ` Balbir Singh
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-23  2:09 UTC (permalink / raw)
  To: Balbir Singh
  Cc: David Hildenbrand, linux-kernel, linux-mm, damon, dri-devel,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 22 Sep 2025, at 21:50, Balbir Singh wrote:

> On 9/23/25 07:09, Zi Yan wrote:
>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>
>>> Add support for splitting device-private THP folios, enabling fallback
>>> to smaller page sizes when large page allocation or migration fails.
>>>
>>> Key changes:
>>> - split_huge_pmd(): Handle device-private PMD entries during splitting
>>> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
>>> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
>>>   don't support shared zero page semantics
>>>
>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>> Cc: Byungchul Park <byungchul@sk.com>
>>> Cc: Gregory Price <gourry@gourry.net>
>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>> Cc: Alistair Popple <apopple@nvidia.com>
>>> Cc: Oscar Salvador <osalvador@suse.de>
>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>> Cc: Nico Pache <npache@redhat.com>
>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>> Cc: Dev Jain <dev.jain@arm.com>
>>> Cc: Barry Song <baohua@kernel.org>
>>> Cc: Lyude Paul <lyude@redhat.com>
>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>> Cc: David Airlie <airlied@gmail.com>
>>> Cc: Simona Vetter <simona@ffwll.ch>
>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>> ---
>>>  mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
>>>  1 file changed, 98 insertions(+), 40 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 78166db72f4d..5291ee155a02 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>  	struct page *page;
>>>  	pgtable_t pgtable;
>>>  	pmd_t old_pmd, _pmd;
>>> -	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
>>> -	bool anon_exclusive = false, dirty = false;
>>> +	bool soft_dirty, uffd_wp = false, young = false, write = false;
>>> +	bool anon_exclusive = false, dirty = false, present = false;
>>>  	unsigned long addr;
>>>  	pte_t *pte;
>>>  	int i;
>>> +	swp_entry_t swp_entry;
>>>
>>>  	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
>>>  	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
>>>  	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
>>> -	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
>>> +
>>> +	VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd) && !pmd_trans_huge(*pmd));
>>>
>>>  	count_vm_event(THP_SPLIT_PMD);
>>>
>>> @@ -2929,20 +2931,47 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>  		return __split_huge_zero_page_pmd(vma, haddr, pmd);
>>>  	}
>>>
>>> -	pmd_migration = is_pmd_migration_entry(*pmd);
>>> -	if (unlikely(pmd_migration)) {
>>> -		swp_entry_t entry;
>>>
>>> +	present = pmd_present(*pmd);
>>> +	if (is_pmd_migration_entry(*pmd)) {
>>>  		old_pmd = *pmd;
>>> -		entry = pmd_to_swp_entry(old_pmd);
>>> -		page = pfn_swap_entry_to_page(entry);
>>> -		write = is_writable_migration_entry(entry);
>>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>>> +		page = pfn_swap_entry_to_page(swp_entry);
>>> +		folio = page_folio(page);
>>> +
>>> +		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>> +		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>> +
>>> +		write = is_writable_migration_entry(swp_entry);
>>>  		if (PageAnon(page))
>>> -			anon_exclusive = is_readable_exclusive_migration_entry(entry);
>>> -		young = is_migration_entry_young(entry);
>>> -		dirty = is_migration_entry_dirty(entry);
>>> +			anon_exclusive = is_readable_exclusive_migration_entry(swp_entry);
>>> +		young = is_migration_entry_young(swp_entry);
>>> +		dirty = is_migration_entry_dirty(swp_entry);
>>> +	} else if (is_pmd_device_private_entry(*pmd)) {
>>> +		old_pmd = *pmd;
>>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>>> +		page = pfn_swap_entry_to_page(swp_entry);
>>> +		folio = page_folio(page);
>>> +
>>>  		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>> +
>>> +		write = is_writable_device_private_entry(swp_entry);
>>> +		anon_exclusive = PageAnonExclusive(page);
>>> +
>>> +		if (freeze && anon_exclusive &&
>>> +		    folio_try_share_anon_rmap_pmd(folio, page))
>>> +			freeze = false;
>>
>> Why is it OK to change the freeze request? OK, it is replicating
>> the code for present PMD folios. Either add a comment to point
>> to the explanation in the comment below, or move
>> “if (is_pmd_device_private_entry(*pmd))“ branch in the else below
>> to deduplicate this code.
>
> Similar to the code for present pages, ideally folio_try_share_anon_rmap_pmd()
> should never fail.

anon_exclusive = PageAnonExclusive(page);
if (freeze && anon_exclusive &&
    folio_try_share_anon_rmap_pmd(folio, page))
        freeze = false;
if (!freeze) {
        rmap_t rmap_flags = RMAP_NONE;

        folio_ref_add(folio, HPAGE_PMD_NR - 1);
        if (anon_exclusive)
                rmap_flags |= RMAP_EXCLUSIVE;
        folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
                                    vma, haddr, rmap_flags);
}

are the same for both the device private and the present cases. Can it
be deduplicated by doing something like the below?

if (is_pmd_migration_entry(*pmd)) {
...
} else {
	if (is_pmd_device_private_entry(*pmd)) {
		...
	} else if (pmd_present()) {
		...
	}

	/* the above code */
}

If not, at least add a comment in the device private copy of the code
pointing to the present copy's comment.

>
>>
>>> +		if (!freeze) {
>>> +			rmap_t rmap_flags = RMAP_NONE;
>>> +
>>> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
>>> +			if (anon_exclusive)
>>> +				rmap_flags |= RMAP_EXCLUSIVE;
>>> +
>>> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
>>> +						 vma, haddr, rmap_flags);
>>> +		}
>>>  	} else {
>>>  		/*
>>>  		 * Up to this point the pmd is present and huge and userland has
>>> @@ -3026,32 +3055,57 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>  	 * Note that NUMA hinting access restrictions are not transferred to
>>>  	 * avoid any possibility of altering permissions across VMAs.
>>>  	 */
>>> -	if (freeze || pmd_migration) {
>>> -		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
>>> -			pte_t entry;
>>> -			swp_entry_t swp_entry;
>>> -
>>> -			if (write)
>>> -				swp_entry = make_writable_migration_entry(
>>> -							page_to_pfn(page + i));
>>> -			else if (anon_exclusive)
>>> -				swp_entry = make_readable_exclusive_migration_entry(
>>> -							page_to_pfn(page + i));
>>> -			else
>>> -				swp_entry = make_readable_migration_entry(
>>> -							page_to_pfn(page + i));
>>> -			if (young)
>>> -				swp_entry = make_migration_entry_young(swp_entry);
>>> -			if (dirty)
>>> -				swp_entry = make_migration_entry_dirty(swp_entry);
>>> -			entry = swp_entry_to_pte(swp_entry);
>>> -			if (soft_dirty)
>>> -				entry = pte_swp_mksoft_dirty(entry);
>>> -			if (uffd_wp)
>>> -				entry = pte_swp_mkuffd_wp(entry);
>>> +	if (freeze || !present) {
>>> +		pte_t entry;
>>>
>>> -			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
>>> -			set_pte_at(mm, addr, pte + i, entry);
>>> +		if (freeze || is_migration_entry(swp_entry)) {
>>>
>> <snip>
>>> +		} else {
>> <snip>
>>>  		}
>>>  	} else {
>>>  		pte_t entry;
>>
>> David already pointed this out in v5. It can be done such as:
>>
>> if (freeze || pmd_migration) {
>> ...
>> } else if (is_pmd_device_private_entry(old_pmd)) {
>> ...
>
> No.. freeze can be true for device private entries as well

When freeze is true, a migration entry is installed in place of the
device private entry, since the "if (freeze || pmd_migration)"
branch is taken. This proposal is the same as your code. What is
the difference?

>
>> } else {
>> /* for present, non freeze case */
>> }
>>
>>> @@ -3076,7 +3130,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>  	}
>>>  	pte_unmap(pte);
>>>
>>> -	if (!pmd_migration)
>>> +	if (!is_pmd_migration_entry(*pmd))
>>>  		folio_remove_rmap_pmd(folio, page, vma);
>>>  	if (freeze)
>>>  		put_page(page);
>>> @@ -3089,7 +3143,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>>>  			   pmd_t *pmd, bool freeze)
>>>  {
>>>  	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
>>> -	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
>>> +	if (pmd_trans_huge(*pmd) || is_pmd_non_present_folio_entry(*pmd))
>>>  		__split_huge_pmd_locked(vma, pmd, address, freeze);
>>>  }
>>>
>>> @@ -3268,6 +3322,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>>>  	VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
>>>  	lockdep_assert_held(&lruvec->lru_lock);
>>>
>>> +	if (folio_is_device_private(folio))
>>> +		return;
>>> +
>>>  	if (list) {
>>>  		/* page reclaim is reclaiming a huge page */
>>>  		VM_WARN_ON(folio_test_lru(folio));
>>> @@ -3885,8 +3942,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>  	if (nr_shmem_dropped)
>>>  		shmem_uncharge(mapping->host, nr_shmem_dropped);
>>>
>>> -	if (!ret && is_anon)
>>> +	if (!ret && is_anon && !folio_is_device_private(folio))
>>>  		remap_flags = RMP_USE_SHARED_ZEROPAGE;
>>> +
>>
>> You should remove this and add
>>
>> if (folio_is_device_private(folio))
>> 	return false;
>>
>> in try_to_map_unused_to_zeropage(). Otherwise, no one would know
>> device private folios need to be excluded from mapping unused to
>> zero page.
>>
>
> I had that up to v2 and then David asked me to remove it. FYI, this
> is the only call site for RMP_USE_SHARED_ZEROPAGE.

Can you provide a link?

Even if this is the only call site, there is no guarantee that
there will be no others in the future. I am not sure why we want the
caller to handle this special case. Who is going to tell the next user
of RMP_USE_SHARED_ZEROPAGE, or the next caller of
try_to_map_unused_to_zeropage(), that device private folios are
incompatible with them?

>
>>>  	remap_page(folio, 1 << order, remap_flags);
>>>
>>>  	/*
>>> -- 
>>> 2.50.1
>>
>>
>
> Thanks for the review
> Balbir


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 05/15] mm/migrate_device: handle partially mapped folios during collection
  2025-09-16 12:21 ` [v6 05/15] mm/migrate_device: handle partially mapped folios during collection Balbir Singh
@ 2025-09-23  2:23   ` Zi Yan
  2025-09-23  3:44     ` Balbir Singh
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-23  2:23 UTC (permalink / raw)
  To: Balbir Singh
  Cc: linux-kernel, linux-mm, damon, dri-devel, David Hildenbrand,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 16 Sep 2025, at 8:21, Balbir Singh wrote:

> Extend migrate_vma_collect_pmd() to handle partially mapped large folios
> that require splitting before migration can proceed.
>
> During PTE walk in the collection phase, if a large folio is only
> partially mapped in the migration range, it must be split to ensure the
> folio is correctly migrated.
>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>  mm/migrate_device.c | 82 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 82 insertions(+)
>
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index abd9f6850db6..70c0601f70ea 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
>  	return 0;
>  }
>
> +/**
> + * migrate_vma_split_folio() - Helper function to split a THP folio
> + * @folio: the folio to split
> + * @fault_page: struct page associated with the fault if any
> + *
> + * Returns 0 on success
> + */
> +static int migrate_vma_split_folio(struct folio *folio,
> +				   struct page *fault_page)
> +{
> +	int ret;
> +	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
> +	struct folio *new_fault_folio = NULL;
> +
> +	if (folio != fault_folio) {
> +		folio_get(folio);
> +		folio_lock(folio);
> +	}
> +
> +	ret = split_folio(folio);
> +	if (ret) {
> +		if (folio != fault_folio) {
> +			folio_unlock(folio);
> +			folio_put(folio);
> +		}
> +		return ret;
> +	}
> +
> +	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
> +
> +	/*
> +	 * Ensure the lock is held on the correct
> +	 * folio after the split
> +	 */
> +	if (!new_fault_folio) {
> +		folio_unlock(folio);
> +		folio_put(folio);
> +	} else if (folio != new_fault_folio) {
> +		folio_get(new_fault_folio);
> +		folio_lock(new_fault_folio);
> +		folio_unlock(folio);
> +		folio_put(folio);
> +	}
> +
> +	return 0;
> +}
> +
>  static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  				   unsigned long start,
>  				   unsigned long end,
> @@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  			 * page table entry. Other special swap entries are not
>  			 * migratable, and we ignore regular swapped page.
>  			 */
> +			struct folio *folio;
> +
>  			entry = pte_to_swp_entry(pte);
>  			if (!is_device_private_entry(entry))
>  				goto next;
> @@ -147,6 +196,23 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  			    pgmap->owner != migrate->pgmap_owner)
>  				goto next;
>
> +			folio = page_folio(page);
> +			if (folio_test_large(folio)) {
> +				int ret;
> +
> +				pte_unmap_unlock(ptep, ptl);
> +				ret = migrate_vma_split_folio(folio,
> +							  migrate->fault_page);
> +
> +				if (ret) {
> +					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> +					goto next;
> +				}
> +
> +				addr = start;
> +				goto again;
> +			}

This does not look right to me.

The folio here is device private, but migrate_vma_split_folio()
calls split_folio(), which cannot handle device private folios yet.
Your change to split_folio() is in Patch 10 and should be moved
before this patch.

> +
>  			mpfn = migrate_pfn(page_to_pfn(page)) |
>  					MIGRATE_PFN_MIGRATE;
>  			if (is_writable_device_private_entry(entry))
> @@ -171,6 +237,22 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  					pgmap->owner != migrate->pgmap_owner)
>  					goto next;
>  			}
> +			folio = page ? page_folio(page) : NULL;
> +			if (folio && folio_test_large(folio)) {
> +				int ret;
> +
> +				pte_unmap_unlock(ptep, ptl);
> +				ret = migrate_vma_split_folio(folio,
> +							  migrate->fault_page);
> +
> +				if (ret) {
> +					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> +					goto next;
> +				}
> +
> +				addr = start;
> +				goto again;
> +			}
>  			mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
>  			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
>  		}
> -- 
> 2.50.1


--
Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 03/15] mm/rmap: extend rmap and migration support device-private entries
  2025-09-22 20:13   ` Zi Yan
@ 2025-09-23  3:39     ` Balbir Singh
  2025-09-24 10:46       ` David Hildenbrand
  0 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-23  3:39 UTC (permalink / raw)
  To: Zi Yan
  Cc: linux-kernel, linux-mm, damon, dri-devel, SeongJae Park,
	David Hildenbrand, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/23/25 06:13, Zi Yan wrote:
> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> 
>> Add device-private THP support to reverse mapping infrastructure, enabling
>> proper handling during migration and walk operations.
>>
>> The key changes are:
>> - add_migration_pmd()/remove_migration_pmd(): Handle device-private
>>   entries during folio migration and splitting
>> - page_vma_mapped_walk(): Recognize device-private THP entries during
>>   VMA traversal operations
>>
>> This change supports folio splitting and migration operations on
>> device-private entries.
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Reviewed-by: SeongJae Park <sj@kernel.org>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>  mm/damon/ops-common.c | 20 +++++++++++++++++---
>>  mm/huge_memory.c      | 16 +++++++++++++++-
>>  mm/page_idle.c        |  7 +++++--
>>  mm/page_vma_mapped.c  |  7 +++++++
>>  mm/rmap.c             | 21 +++++++++++++++++----
>>  5 files changed, 61 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
>> index 998c5180a603..eda4de553611 100644
>> --- a/mm/damon/ops-common.c
>> +++ b/mm/damon/ops-common.c
>> @@ -75,12 +75,24 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
>>  void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
>>  {
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -	struct folio *folio = damon_get_folio(pmd_pfn(pmdp_get(pmd)));
>> +	pmd_t pmdval = pmdp_get(pmd);
>> +	struct folio *folio;
>> +	bool young = false;
>> +	unsigned long pfn;
>> +
>> +	if (likely(pmd_present(pmdval)))
>> +		pfn = pmd_pfn(pmdval);
>> +	else
>> +		pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
>>
>> +	folio = damon_get_folio(pfn);
>>  	if (!folio)
>>  		return;
>>
>> -	if (pmdp_clear_young_notify(vma, addr, pmd))
>> +	if (likely(pmd_present(pmdval)))
>> +		young |= pmdp_clear_young_notify(vma, addr, pmd);
>> +	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
> 
> This should be HPAGE_PMD_SIZE (it is guarded in CONFIG_TRANSPARENT_HUGEPAGE,
> so HPAGE_PMD_SIZE will not trigger a build bug like the one below).
> 
>> +	if (young)
>>  		folio_set_young(folio);
>>
>>  	folio_set_idle(folio);
>> @@ -203,7 +215,9 @@ static bool damon_folio_young_one(struct folio *folio,
>>  				mmu_notifier_test_young(vma->vm_mm, addr);
>>  		} else {
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -			*accessed = pmd_young(pmdp_get(pvmw.pmd)) ||
>> +			pmd_t pmd = pmdp_get(pvmw.pmd);
>> +
>> +			*accessed = (pmd_present(pmd) && pmd_young(pmd)) ||
>>  				!folio_test_idle(folio) ||
>>  				mmu_notifier_test_young(vma->vm_mm, addr);
>>  #else
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index a5e4c2aef191..78166db72f4d 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -4637,7 +4637,10 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>>  		return 0;
>>
>>  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>> -	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
>> +	if (unlikely(!pmd_present(*pvmw->pmd)))
>> +		pmdval = pmdp_huge_get_and_clear(vma->vm_mm, address, pvmw->pmd);
>> +	else
>> +		pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
>>
>>  	/* See folio_try_share_anon_rmap_pmd(): invalidate PMD first. */
>>  	anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(page);
>> @@ -4687,6 +4690,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
>>  	entry = pmd_to_swp_entry(*pvmw->pmd);
>>  	folio_get(folio);
>>  	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
>> +
>> +	if (folio_is_device_private(folio)) {
>> +		if (pmd_write(pmde))
>> +			entry = make_writable_device_private_entry(
>> +							page_to_pfn(new));
>> +		else
>> +			entry = make_readable_device_private_entry(
>> +							page_to_pfn(new));
>> +		pmde = swp_entry_to_pmd(entry);
>> +	}
>> +
>>  	if (pmd_swp_soft_dirty(*pvmw->pmd))
>>  		pmde = pmd_mksoft_dirty(pmde);
>>  	if (is_writable_migration_entry(entry))
>> diff --git a/mm/page_idle.c b/mm/page_idle.c
>> index a82b340dc204..3bf0fbe05cc2 100644
>> --- a/mm/page_idle.c
>> +++ b/mm/page_idle.c
>> @@ -71,8 +71,11 @@ static bool page_idle_clear_pte_refs_one(struct folio *folio,
>>  				referenced |= ptep_test_and_clear_young(vma, addr, pvmw.pte);
>>  			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
>>  		} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
>> -			if (pmdp_clear_young_notify(vma, addr, pvmw.pmd))
>> -				referenced = true;
>> +			pmd_t pmdval = pmdp_get(pvmw.pmd);
>> +
>> +			if (likely(pmd_present(pmdval)))
>> +				referenced |= pmdp_clear_young_notify(vma, addr, pvmw.pmd);
>> +			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
> 
> This should be HPAGE_PMD_SIZE (or PMD_SIZE, since the code is not compiled
> out when CONFIG_TRANSPARENT_HUGEPAGE is not selected and HPAGE_PMD_SIZE
> will cause a build bug when CONFIG_PGTABLE_HAS_HUGE_LEAVES is not selected).

I'll protect it accordingly, thanks!

> 
>>  		} else {
>>  			/* unexpected pmd-mapped page? */
>>  			WARN_ON_ONCE(1);
>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
>> index e981a1a292d2..159953c590cc 100644
>> --- a/mm/page_vma_mapped.c
>> +++ b/mm/page_vma_mapped.c
>> @@ -277,6 +277,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>>  			 * cannot return prematurely, while zap_huge_pmd() has
>>  			 * cleared *pmd but not decremented compound_mapcount().
>>  			 */
>> +			swp_entry_t entry = pmd_to_swp_entry(pmde);
>> +
>> +			if (is_device_private_entry(entry)) {
>> +				pvmw->ptl = pmd_lock(mm, pvmw->pmd);
>> +				return true;
>> +			}
>> +
>>  			if ((pvmw->flags & PVMW_SYNC) &&
>>  			    thp_vma_suitable_order(vma, pvmw->address,
>>  						   PMD_ORDER) &&
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 9a2aabfaea6f..080fc4048431 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1063,9 +1063,11 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>>  		} else {
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>  			pmd_t *pmd = pvmw->pmd;
>> -			pmd_t entry;
>> +			pmd_t entry = pmdp_get(pmd);
>>
>> -			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
> 
> It is better to add a similar comment as the one above !pte_present().
> Something like:
> PFN swap PMDs, such as ...
> 
> 

Sure, I can do that and either repeat the comment or just say to look at the comment above !pte_present() :)

>> +			if (!pmd_present(entry))
>> +				continue;
>> +			if (!pmd_dirty(entry) && !pmd_write(entry))
>>  				continue;
>>
>>  			flush_cache_range(vma, address,
>> @@ -2330,6 +2332,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>>  	while (page_vma_mapped_walk(&pvmw)) {
>>  		/* PMD-mapped THP migration entry */
>>  		if (!pvmw.pte) {
>> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>> +			unsigned long pfn;
>> +			pmd_t pmdval;
>> +#endif
>> +
> 
> This looks ugly. IIRC, we now can put variable definition in the middle.
> Maybe for this case, these two can be moved to the below ifdef region.
> 

I can't find any examples of declarations mixed into the middle of a
block, and could not find any clear guidance in the coding style
documentation either.

>>  			if (flags & TTU_SPLIT_HUGE_PMD) {
>>  				split_huge_pmd_locked(vma, pvmw.address,
>>  						      pvmw.pmd, true);
>> @@ -2338,8 +2345,14 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>>  				break;
>>  			}
>>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>> -			subpage = folio_page(folio,
>> -				pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
>> +			pmdval = pmdp_get(pvmw.pmd);
>> +			if (likely(pmd_present(pmdval)))
>> +				pfn = pmd_pfn(pmdval);
>> +			else
>> +				pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
>> +
>> +			subpage = folio_page(folio, pfn - folio_pfn(folio));
>> +
>>  			VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
>>  					!folio_test_pmd_mappable(folio), folio);
>>
>> -- 
>> 2.50.1
> 
> Otherwise, LGTM. Acked-by: Zi Yan <ziy@nvidia.com>

Thanks for the review,
Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 05/15] mm/migrate_device: handle partially mapped folios during collection
  2025-09-23  2:23   ` Zi Yan
@ 2025-09-23  3:44     ` Balbir Singh
  2025-09-23 15:56       ` Karim Manaouil
  0 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-23  3:44 UTC (permalink / raw)
  To: Zi Yan
  Cc: linux-kernel, linux-mm, damon, dri-devel, David Hildenbrand,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/23/25 12:23, Zi Yan wrote:
> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> 
>> Extend migrate_vma_collect_pmd() to handle partially mapped large folios
>> that require splitting before migration can proceed.
>>
>> During PTE walk in the collection phase, if a large folio is only
>> partially mapped in the migration range, it must be split to ensure the
>> folio is correctly migrated.
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>  mm/migrate_device.c | 82 +++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 82 insertions(+)
>>
>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>> index abd9f6850db6..70c0601f70ea 100644
>> --- a/mm/migrate_device.c
>> +++ b/mm/migrate_device.c
>> @@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
>>  	return 0;
>>  }
>>
>> +/**
>> + * migrate_vma_split_folio() - Helper function to split a THP folio
>> + * @folio: the folio to split
>> + * @fault_page: struct page associated with the fault if any
>> + *
>> + * Returns 0 on success
>> + */
>> +static int migrate_vma_split_folio(struct folio *folio,
>> +				   struct page *fault_page)
>> +{
>> +	int ret;
>> +	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
>> +	struct folio *new_fault_folio = NULL;
>> +
>> +	if (folio != fault_folio) {
>> +		folio_get(folio);
>> +		folio_lock(folio);
>> +	}
>> +
>> +	ret = split_folio(folio);
>> +	if (ret) {
>> +		if (folio != fault_folio) {
>> +			folio_unlock(folio);
>> +			folio_put(folio);
>> +		}
>> +		return ret;
>> +	}
>> +
>> +	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
>> +
>> +	/*
>> +	 * Ensure the lock is held on the correct
>> +	 * folio after the split
>> +	 */
>> +	if (!new_fault_folio) {
>> +		folio_unlock(folio);
>> +		folio_put(folio);
>> +	} else if (folio != new_fault_folio) {
>> +		folio_get(new_fault_folio);
>> +		folio_lock(new_fault_folio);
>> +		folio_unlock(folio);
>> +		folio_put(folio);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>  static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>  				   unsigned long start,
>>  				   unsigned long end,
>> @@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>  			 * page table entry. Other special swap entries are not
>>  			 * migratable, and we ignore regular swapped page.
>>  			 */
>> +			struct folio *folio;
>> +
>>  			entry = pte_to_swp_entry(pte);
>>  			if (!is_device_private_entry(entry))
>>  				goto next;
>> @@ -147,6 +196,23 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>  			    pgmap->owner != migrate->pgmap_owner)
>>  				goto next;
>>
>> +			folio = page_folio(page);
>> +			if (folio_test_large(folio)) {
>> +				int ret;
>> +
>> +				pte_unmap_unlock(ptep, ptl);
>> +				ret = migrate_vma_split_folio(folio,
>> +							  migrate->fault_page);
>> +
>> +				if (ret) {
>> +					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
>> +					goto next;
>> +				}
>> +
>> +				addr = start;
>> +				goto again;
>> +			}
> 
> This does not look right to me.
> 
> The folio here is device private, but migrate_vma_split_folio()
> calls split_folio(), which cannot handle device private folios yet.
> Your change to split_folio() is in Patch 10 and should be moved
> before this patch.
> 

Patch 10 is to split the folio in the middle of migration (when we have
converted the entries to migration entries). This patch relies on the
changes in patch 4. I agree the names are confusing; I'll reword the
functions.


Thanks for the review
Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-19 13:26       ` Zi Yan
@ 2025-09-23  3:47         ` Balbir Singh
  2025-09-24 11:04           ` David Hildenbrand
  0 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-23  3:47 UTC (permalink / raw)
  To: Zi Yan, Alistair Popple
  Cc: David Hildenbrand, linux-kernel, linux-mm, damon, dri-devel,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/19/25 23:26, Zi Yan wrote:
> On 19 Sep 2025, at 1:01, Balbir Singh wrote:
> 
>> On 9/18/25 12:49, Zi Yan wrote:
>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>
>>>> Add routines to support allocation of large order zone device folios
>>>> and helper functions for zone device folios, to check if a folio is
>>>> device private and helpers for setting zone device data.
>>>>
>>>> When large folios are used, the existing page_free() callback in
>>>> pgmap is called when the folio is freed, this is true for both
>>>> PAGE_SIZE and higher order pages.
>>>>
>>>> Zone device private large folios do not support deferred split and
>>>> scan like normal THP folios.
>>>>
>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>> Cc: David Hildenbrand <david@redhat.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>> Cc: Gregory Price <gourry@gourry.net>
>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>> Cc: Nico Pache <npache@redhat.com>
>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>> Cc: Barry Song <baohua@kernel.org>
>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>> Cc: David Airlie <airlied@gmail.com>
>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>> ---
>>>>  include/linux/memremap.h | 10 +++++++++-
>>>>  mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>  mm/rmap.c                |  6 +++++-
>>>>  3 files changed, 35 insertions(+), 15 deletions(-)
>>>>
>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>> index e5951ba12a28..9c20327c2be5 100644
>>>> --- a/include/linux/memremap.h
>>>> +++ b/include/linux/memremap.h
>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>  }
>>>>
>>>>  #ifdef CONFIG_ZONE_DEVICE
>>>> -void zone_device_page_init(struct page *page);
>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>  void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>  void memunmap_pages(struct dev_pagemap *pgmap);
>>>>  void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>  bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>
>>>>  unsigned long memremap_compat_align(void);
>>>> +
>>>> +static inline void zone_device_page_init(struct page *page)
>>>> +{
>>>> +	struct folio *folio = page_folio(page);
>>>> +
>>>> +	zone_device_folio_init(folio, 0);
>>>
>>> I assume it is for legacy code, where only non-compound page exists?
>>>
>>> It seems that you assume @page is always order-0, but there is no check
>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>> above it would be useful to detect misuse.
>>>
>>>> +}
>>>> +
>>>>  #else
>>>>  static inline void *devm_memremap_pages(struct device *dev,
>>>>  		struct dev_pagemap *pgmap)
>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>> --- a/mm/memremap.c
>>>> +++ b/mm/memremap.c
>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>  void free_zone_device_folio(struct folio *folio)
>>>>  {
>>>>  	struct dev_pagemap *pgmap = folio->pgmap;
>>>> +	unsigned long nr = folio_nr_pages(folio);
>>>> +	int i;
>>>>
>>>>  	if (WARN_ON_ONCE(!pgmap))
>>>>  		return;
>>>>
>>>>  	mem_cgroup_uncharge(folio);
>>>>
>>>> -	/*
>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>>> -	 * PG_anon_exclusive on all tail pages.
>>>> -	 */
>>>>  	if (folio_test_anon(folio)) {
>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>>> +		for (i = 0; i < nr; i++)
>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>>> +	} else {
>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>  	}
>>>>
>>>>  	/*
>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>  	case MEMORY_DEVICE_COHERENT:
>>>>  		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>  			break;
>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>>> -		put_dev_pagemap(pgmap);
>>>> +		pgmap->ops->page_free(&folio->page);
>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>  		break;
>>>>
>>>>  	case MEMORY_DEVICE_GENERIC:
>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>  	}
>>>>  }
>>>>
>>>> -void zone_device_page_init(struct page *page)
>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>  {
>>>> +	struct page *page = folio_page(folio, 0);
>>>
>>> It is strange to see a folio is converted back to page in
>>> a function called zone_device_folio_init().
>>>
>>>> +
>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>> +
>>>>  	/*
>>>>  	 * Drivers shouldn't be allocating pages after calling
>>>>  	 * memunmap_pages().
>>>>  	 */
>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>> -	set_page_count(page, 1);
>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>> +	folio_set_count(folio, 1);
>>>>  	lock_page(page);
>>>> +
>>>> +	if (order > 1) {
>>>> +		prep_compound_page(page, order);
>>>> +		folio_set_large_rmappable(folio);
>>>> +	}
>>>
>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>> is called.
>>>
>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>> implementations are inverse. They should follow the same pattern
>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>> zone_device_page_init() does the actual initialization and
>>> zone_device_folio_init() just convert a page to folio.
>>>
>>> Something like:
>>>
>>> void zone_device_page_init(struct page *page, unsigned int order)
>>> {
>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>
>>> 	/*
>>> 	 * Drivers shouldn't be allocating pages after calling
>>> 	 * memunmap_pages().
>>> 	 */
>>>
>>>     WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>> 	
>>> 	/*
>>> 	 * anonymous folio does not support order-1, high order file-backed folio
>>> 	 * is not supported at all.
>>> 	 */
>>> 	VM_WARN_ON_ONCE(order == 1);
>>>
>>> 	if (order > 1)
>>> 		prep_compound_page(page, order);
>>>
>>> 	/* page has to be compound head here */
>>> 	set_page_count(page, 1);
>>> 	lock_page(page);
>>> }
>>>
>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>> {
>>> 	struct page *page = folio_page(folio, 0);
>>>
>>> 	zone_device_page_init(page, order);
>>> 	page_rmappable_folio(page);
>>> }
>>>
>>> Or
>>>
>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>> {
>>> 	zone_device_page_init(page, order);
>>> 	return page_rmappable_folio(page);
>>> }
>>>
>>>
>>> Then, it comes to free_zone_device_folio() above,
>>> I feel that pgmap->ops->page_free() should take an additional order
>>> parameter to free a compound page like free_frozen_pages().
>>>
>>>
>>> This is my impression after reading the patch and zone device page code.
>>>
>>> Alistair and David can correct me if this is wrong, since I am new to
>>> zone device page code.
>>> 	
>>
>> Thanks, I did not want to change zone_device_page_init() for several
>> drivers (outside my test scope) that already assume it has an order size of 0.
> 
> But my proposed zone_device_page_init() should still work for order-0
> pages. You just need to change call site to add 0 as a new parameter.
> 

I did not want to change existing callers (increases testing impact)
without a strong reason.

> 
> One strange thing I found in the original zone_device_page_init() is
> the use of page_pgmap() in
> WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order)).
> page_pgmap() calls page_folio() on the given page to access pgmap field.
> And pgmap field is only available in struct folio. The code initializes
> struct page, but in middle it suddenly finds the page is actually a folio,
> then treat it as a page afterwards. I wonder if it can be done better.
> 
> This might be a question to Alistair, since he made the change.
> 

I'll let him answer it :)

Thanks for the review
Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-23  2:09       ` Zi Yan
@ 2025-09-23  4:04         ` Balbir Singh
  2025-09-23 16:08           ` Zi Yan
  0 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-23  4:04 UTC (permalink / raw)
  To: Zi Yan
  Cc: David Hildenbrand, linux-kernel, linux-mm, damon, dri-devel,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/23/25 12:09, Zi Yan wrote:
> On 22 Sep 2025, at 21:50, Balbir Singh wrote:
> 
>> On 9/23/25 07:09, Zi Yan wrote:
>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>
>>>> Add support for splitting device-private THP folios, enabling fallback
>>>> to smaller page sizes when large page allocation or migration fails.
>>>>
>>>> Key changes:
>>>> - split_huge_pmd(): Handle device-private PMD entries during splitting
>>>> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
>>>> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
>>>>   don't support shared zero page semantics
>>>>
>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>> Cc: David Hildenbrand <david@redhat.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>> Cc: Gregory Price <gourry@gourry.net>
>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>> Cc: Nico Pache <npache@redhat.com>
>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>> Cc: Barry Song <baohua@kernel.org>
>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>> Cc: David Airlie <airlied@gmail.com>
>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>> ---
>>>>  mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
>>>>  1 file changed, 98 insertions(+), 40 deletions(-)
>>>>
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index 78166db72f4d..5291ee155a02 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>  	struct page *page;
>>>>  	pgtable_t pgtable;
>>>>  	pmd_t old_pmd, _pmd;
>>>> -	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
>>>> -	bool anon_exclusive = false, dirty = false;
>>>> +	bool soft_dirty, uffd_wp = false, young = false, write = false;
>>>> +	bool anon_exclusive = false, dirty = false, present = false;
>>>>  	unsigned long addr;
>>>>  	pte_t *pte;
>>>>  	int i;
>>>> +	swp_entry_t swp_entry;
>>>>
>>>>  	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
>>>>  	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
>>>>  	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
>>>> -	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
>>>> +
>>>> +	VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd) && !pmd_trans_huge(*pmd));
>>>>
>>>>  	count_vm_event(THP_SPLIT_PMD);
>>>>
>>>> @@ -2929,20 +2931,47 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>  		return __split_huge_zero_page_pmd(vma, haddr, pmd);
>>>>  	}
>>>>
>>>> -	pmd_migration = is_pmd_migration_entry(*pmd);
>>>> -	if (unlikely(pmd_migration)) {
>>>> -		swp_entry_t entry;
>>>>
>>>> +	present = pmd_present(*pmd);
>>>> +	if (is_pmd_migration_entry(*pmd)) {
>>>>  		old_pmd = *pmd;
>>>> -		entry = pmd_to_swp_entry(old_pmd);
>>>> -		page = pfn_swap_entry_to_page(entry);
>>>> -		write = is_writable_migration_entry(entry);
>>>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>>>> +		page = pfn_swap_entry_to_page(swp_entry);
>>>> +		folio = page_folio(page);
>>>> +
>>>> +		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>>> +		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>>> +
>>>> +		write = is_writable_migration_entry(swp_entry);
>>>>  		if (PageAnon(page))
>>>> -			anon_exclusive = is_readable_exclusive_migration_entry(entry);
>>>> -		young = is_migration_entry_young(entry);
>>>> -		dirty = is_migration_entry_dirty(entry);
>>>> +			anon_exclusive = is_readable_exclusive_migration_entry(swp_entry);
>>>> +		young = is_migration_entry_young(swp_entry);
>>>> +		dirty = is_migration_entry_dirty(swp_entry);
>>>> +	} else if (is_pmd_device_private_entry(*pmd)) {
>>>> +		old_pmd = *pmd;
>>>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>>>> +		page = pfn_swap_entry_to_page(swp_entry);
>>>> +		folio = page_folio(page);
>>>> +
>>>>  		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>>>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>>> +
>>>> +		write = is_writable_device_private_entry(swp_entry);
>>>> +		anon_exclusive = PageAnonExclusive(page);
>>>> +
>>>> +		if (freeze && anon_exclusive &&
>>>> +		    folio_try_share_anon_rmap_pmd(folio, page))
>>>> +			freeze = false;
>>>
>>> Why is it OK to change the freeze request? OK, it is replicating
>>> the code for present PMD folios. Either add a comment to point
>>> to the explanation in the comment below, or move
>>> “if (is_pmd_device_private_entry(*pmd))“ branch in the else below
>>> to deduplicate this code.
>>
>> Similar to the code for present pages, ideally folio_try_share_anon_rmap_pmd()
>> should never fail.
> 
> anon_exclusive = PageAnonExclusive(page);
> if (freeze && anon_exclusive &&
>     folio_try_share_anon_rmap_pmd(folio, page))
>         freeze = false;
> if (!freeze) {
>         rmap_t rmap_flags = RMAP_NONE;
> 
>         folio_ref_add(folio, HPAGE_PMD_NR - 1);
>         if (anon_exclusive)
>                 rmap_flags |= RMAP_EXCLUSIVE;
>         folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
>                                     vma, haddr, rmap_flags);
> }
> 
> are the same for both device private and present. Can it be deduplicated
> by doing below?
> 
> if (is_pmd_migration_entry(*pmd)) {
> ...
> } else {
> 	if (is_pmd_device_private_entry(*pmd)) {
> 		...
> 	} else if (pmd_present()) {
> 		...
> 	}
> 
> 	/* the above code */
> }
> 
> If not, at least adding a comment in the device private copy of the code
> pointing to the present copy's comment.
> 
>>
>>>
>>>> +		if (!freeze) {
>>>> +			rmap_t rmap_flags = RMAP_NONE;
>>>> +
>>>> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
>>>> +			if (anon_exclusive)
>>>> +				rmap_flags |= RMAP_EXCLUSIVE;
>>>> +
>>>> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
>>>> +						 vma, haddr, rmap_flags);
>>>> +		}
>>>>  	} else {
>>>>  		/*
>>>>  		 * Up to this point the pmd is present and huge and userland has
>>>> @@ -3026,32 +3055,57 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>  	 * Note that NUMA hinting access restrictions are not transferred to
>>>>  	 * avoid any possibility of altering permissions across VMAs.
>>>>  	 */
>>>> -	if (freeze || pmd_migration) {
>>>> -		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
>>>> -			pte_t entry;
>>>> -			swp_entry_t swp_entry;
>>>> -
>>>> -			if (write)
>>>> -				swp_entry = make_writable_migration_entry(
>>>> -							page_to_pfn(page + i));
>>>> -			else if (anon_exclusive)
>>>> -				swp_entry = make_readable_exclusive_migration_entry(
>>>> -							page_to_pfn(page + i));
>>>> -			else
>>>> -				swp_entry = make_readable_migration_entry(
>>>> -							page_to_pfn(page + i));
>>>> -			if (young)
>>>> -				swp_entry = make_migration_entry_young(swp_entry);
>>>> -			if (dirty)
>>>> -				swp_entry = make_migration_entry_dirty(swp_entry);
>>>> -			entry = swp_entry_to_pte(swp_entry);
>>>> -			if (soft_dirty)
>>>> -				entry = pte_swp_mksoft_dirty(entry);
>>>> -			if (uffd_wp)
>>>> -				entry = pte_swp_mkuffd_wp(entry);
>>>> +	if (freeze || !present) {
>>>> +		pte_t entry;
>>>>
>>>> -			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
>>>> -			set_pte_at(mm, addr, pte + i, entry);
>>>> +		if (freeze || is_migration_entry(swp_entry)) {
>>>>
>>> <snip>
>>>> +		} else {
>>> <snip>
>>>>  		}
>>>>  	} else {
>>>>  		pte_t entry;
>>>
>>> David already pointed this out in v5. It can be done such as:
>>>
>>> if (freeze || pmd_migration) {
>>> ...
>>> } else if (is_pmd_device_private_entry(old_pmd)) {
>>> ...
>>
>> No.. freeze can be true for device private entries as well
> 
> When freeze is true, migration entry is installed in place of
> device private entry, since the "if (freeze || pmd_migration)"
> branch is taken. This proposal is same as your code. What is
> the difference?
> 

I read the else if incorrectly; I'll simplify it.

>>
>>> } else {
>>> /* for present, non freeze case */
>>> }
>>>
>>>> @@ -3076,7 +3130,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>  	}
>>>>  	pte_unmap(pte);
>>>>
>>>> -	if (!pmd_migration)
>>>> +	if (!is_pmd_migration_entry(*pmd))
>>>>  		folio_remove_rmap_pmd(folio, page, vma);
>>>>  	if (freeze)
>>>>  		put_page(page);
>>>> @@ -3089,7 +3143,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>>>>  			   pmd_t *pmd, bool freeze)
>>>>  {
>>>>  	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
>>>> -	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
>>>> +	if (pmd_trans_huge(*pmd) || is_pmd_non_present_folio_entry(*pmd))
>>>>  		__split_huge_pmd_locked(vma, pmd, address, freeze);
>>>>  }
>>>>
>>>> @@ -3268,6 +3322,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>>>>  	VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
>>>>  	lockdep_assert_held(&lruvec->lru_lock);
>>>>
>>>> +	if (folio_is_device_private(folio))
>>>> +		return;
>>>> +
>>>>  	if (list) {
>>>>  		/* page reclaim is reclaiming a huge page */
>>>>  		VM_WARN_ON(folio_test_lru(folio));
>>>> @@ -3885,8 +3942,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>  	if (nr_shmem_dropped)
>>>>  		shmem_uncharge(mapping->host, nr_shmem_dropped);
>>>>
>>>> -	if (!ret && is_anon)
>>>> +	if (!ret && is_anon && !folio_is_device_private(folio))
>>>>  		remap_flags = RMP_USE_SHARED_ZEROPAGE;
>>>> +
>>>
>>> You should remove this and add
>>>
>>> if (folio_is_device_private(folio))
>>> 	return false;
>>>
>>> in try_to_map_unused_to_zeropage(). Otherwise, no one would know
>>> device private folios need to be excluded from mapping unused to
>>> zero page.
>>>
>>
>> I had that upto v2 and then David asked me to remove it. FYI, this
>> is the only call site for RMP_USE_SHARED_ZEROPAGE
> 
> Can you provide a link?
> 

Please see https://lore.kernel.org/linux-mm/20250306044239.3874247-3-balbirs@nvidia.com/T/

> Even if this is the only call site, there is no guarantee that
> there will be none in the future. I am not sure why we want caller
> to handle this special case. Who is going to tell the next user
> of RMP_USE_SHARED_ZEROPAGE or caller to try_to_map_unused_to_zeropage()
> that device private is incompatible with them?
> 

I don't disagree, but the question was why device private pages are even
making it to try_to_map_unused_to_zeropage() in the first place.

>>>>  	remap_page(folio, 1 << order, remap_flags);
>>>>
>>>>  	/*
>>>> -- 
>>>> 2.50.1
>>>
>>>
>>
>> Thanks for the review
>> Balbir

Thanks,
Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations
  2025-09-19  4:51     ` Balbir Singh
@ 2025-09-23  8:37       ` David Hildenbrand
  0 siblings, 0 replies; 57+ messages in thread
From: David Hildenbrand @ 2025-09-23  8:37 UTC (permalink / raw)
  To: Balbir Singh, Zi Yan
  Cc: linux-kernel, linux-mm, damon, dri-devel, Matthew Brost,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Francois Dugast


>>
>> non_present seems too vague. Maybe just open code it.
> 
> This was David's suggestion from the previous posting, there is is_swap_pfn_entry()
> but it's much larger than we would like for our use case.

Right. If we can find a better name, great, but open coding this turned 
out nasty.

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 05/15] mm/migrate_device: handle partially mapped folios during collection
  2025-09-23  3:44     ` Balbir Singh
@ 2025-09-23 15:56       ` Karim Manaouil
  2025-09-24  4:47         ` Balbir Singh
  2025-09-30 11:58         ` Balbir Singh
  0 siblings, 2 replies; 57+ messages in thread
From: Karim Manaouil @ 2025-09-23 15:56 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Zi Yan, linux-kernel, linux-mm, damon, dri-devel,
	David Hildenbrand, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On Tue, Sep 23, 2025 at 01:44:20PM +1000, Balbir Singh wrote:
> On 9/23/25 12:23, Zi Yan wrote:
> > On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> > 
> >> Extend migrate_vma_collect_pmd() to handle partially mapped large folios
> >> that require splitting before migration can proceed.
> >>
> >> During PTE walk in the collection phase, if a large folio is only
> >> partially mapped in the migration range, it must be split to ensure the
> >> folio is correctly migrated.
> >>
> >> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> >> Cc: David Hildenbrand <david@redhat.com>
> >> Cc: Zi Yan <ziy@nvidia.com>
> >> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> >> Cc: Rakie Kim <rakie.kim@sk.com>
> >> Cc: Byungchul Park <byungchul@sk.com>
> >> Cc: Gregory Price <gourry@gourry.net>
> >> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> >> Cc: Alistair Popple <apopple@nvidia.com>
> >> Cc: Oscar Salvador <osalvador@suse.de>
> >> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> >> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> >> Cc: Nico Pache <npache@redhat.com>
> >> Cc: Ryan Roberts <ryan.roberts@arm.com>
> >> Cc: Dev Jain <dev.jain@arm.com>
> >> Cc: Barry Song <baohua@kernel.org>
> >> Cc: Lyude Paul <lyude@redhat.com>
> >> Cc: Danilo Krummrich <dakr@kernel.org>
> >> Cc: David Airlie <airlied@gmail.com>
> >> Cc: Simona Vetter <simona@ffwll.ch>
> >> Cc: Ralph Campbell <rcampbell@nvidia.com>
> >> Cc: Mika Penttilä <mpenttil@redhat.com>
> >> Cc: Matthew Brost <matthew.brost@intel.com>
> >> Cc: Francois Dugast <francois.dugast@intel.com>
> >> ---
> >>  mm/migrate_device.c | 82 +++++++++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 82 insertions(+)
> >>
> >> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> >> index abd9f6850db6..70c0601f70ea 100644
> >> --- a/mm/migrate_device.c
> >> +++ b/mm/migrate_device.c
> >> @@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
> >>  	return 0;
> >>  }
> >>
> >> +/**
> >> + * migrate_vma_split_folio() - Helper function to split a THP folio
> >> + * @folio: the folio to split
> >> + * @fault_page: struct page associated with the fault if any
> >> + *
> >> + * Returns 0 on success
> >> + */
> >> +static int migrate_vma_split_folio(struct folio *folio,
> >> +				   struct page *fault_page)
> >> +{
> >> +	int ret;
> >> +	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
> >> +	struct folio *new_fault_folio = NULL;
> >> +
> >> +	if (folio != fault_folio) {
> >> +		folio_get(folio);
> >> +		folio_lock(folio);
> >> +	}
> >> +
> >> +	ret = split_folio(folio);
> >> +	if (ret) {
> >> +		if (folio != fault_folio) {
> >> +			folio_unlock(folio);
> >> +			folio_put(folio);
> >> +		}
> >> +		return ret;
> >> +	}
> >> +
> >> +	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
> >> +
> >> +	/*
> >> +	 * Ensure the lock is held on the correct
> >> +	 * folio after the split
> >> +	 */
> >> +	if (!new_fault_folio) {
> >> +		folio_unlock(folio);
> >> +		folio_put(folio);
> >> +	} else if (folio != new_fault_folio) {
> >> +		folio_get(new_fault_folio);
> >> +		folio_lock(new_fault_folio);
> >> +		folio_unlock(folio);
> >> +		folio_put(folio);
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>  static int migrate_vma_collect_pmd(pmd_t *pmdp,
> >>  				   unsigned long start,
> >>  				   unsigned long end,
> >> @@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> >>  			 * page table entry. Other special swap entries are not
> >>  			 * migratable, and we ignore regular swapped page.
> >>  			 */
> >> +			struct folio *folio;
> >> +
> >>  			entry = pte_to_swp_entry(pte);
> >>  			if (!is_device_private_entry(entry))
> >>  				goto next;
> >> @@ -147,6 +196,23 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> >>  			    pgmap->owner != migrate->pgmap_owner)
> >>  				goto next;
> >>
> >> +			folio = page_folio(page);
> >> +			if (folio_test_large(folio)) {
> >> +				int ret;
> >> +
> >> +				pte_unmap_unlock(ptep, ptl);
> >> +				ret = migrate_vma_split_folio(folio,
> >> +							  migrate->fault_page);
> >> +
> >> +				if (ret) {
> >> +					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> >> +					goto next;
> >> +				}
> >> +
> >> +				addr = start;
> >> +				goto again;
> >> +			}
> > 
> > This does not look right to me.
> > 
> > The folio here is device private, but migrate_vma_split_folio()
> > calls split_folio(), which cannot handle device private folios yet.
> > Your change to split_folio() is in Patch 10 and should be moved
> > before this patch.
> > 
> 
> Patch 10 is to split the folio in the middle of migration (when we have
> converted the entries to migration entries). This patch relies on the
> changes in patch 4. I agree the names are confusing, I'll reword the
> functions

Hi Balbir,

I am still reviewing the patches, but I think I agree with Zi here.

split_folio() will replace the PMD mappings of the huge folio with PTE
mappings, but will also split the folio into smaller folios. The former
is ok with this patch, but the latter is probably not correct if the folio
is a zone device folio. The driver needs to know about the change, as
usually the driver will have some sort of mapping between GPU physical
memory chunks and their corresponding zone device pages.



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-23  4:04         ` Balbir Singh
@ 2025-09-23 16:08           ` Zi Yan
  2025-09-25 10:06             ` David Hildenbrand
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-23 16:08 UTC (permalink / raw)
  To: Balbir Singh
  Cc: David Hildenbrand, linux-kernel, linux-mm, damon, dri-devel,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 23 Sep 2025, at 0:04, Balbir Singh wrote:

> On 9/23/25 12:09, Zi Yan wrote:
>> On 22 Sep 2025, at 21:50, Balbir Singh wrote:
>>
>>> On 9/23/25 07:09, Zi Yan wrote:
>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>
>>>>> Add support for splitting device-private THP folios, enabling fallback
>>>>> to smaller page sizes when large page allocation or migration fails.
>>>>>
>>>>> Key changes:
>>>>> - split_huge_pmd(): Handle device-private PMD entries during splitting
>>>>> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
>>>>> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
>>>>>   don't support shared zero page semantics
>>>>>
>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>> ---
>>>>>  mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
>>>>>  1 file changed, 98 insertions(+), 40 deletions(-)
>>>>>
>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>> index 78166db72f4d..5291ee155a02 100644
>>>>> --- a/mm/huge_memory.c
>>>>> +++ b/mm/huge_memory.c
>>>>> @@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>>  	struct page *page;
>>>>>  	pgtable_t pgtable;
>>>>>  	pmd_t old_pmd, _pmd;
>>>>> -	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
>>>>> -	bool anon_exclusive = false, dirty = false;
>>>>> +	bool soft_dirty, uffd_wp = false, young = false, write = false;
>>>>> +	bool anon_exclusive = false, dirty = false, present = false;
>>>>>  	unsigned long addr;
>>>>>  	pte_t *pte;
>>>>>  	int i;
>>>>> +	swp_entry_t swp_entry;
>>>>>
>>>>>  	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
>>>>>  	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
>>>>>  	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
>>>>> -	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
>>>>> +
>>>>> +	VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd) && !pmd_trans_huge(*pmd));
>>>>>
>>>>>  	count_vm_event(THP_SPLIT_PMD);
>>>>>
>>>>> @@ -2929,20 +2931,47 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>>  		return __split_huge_zero_page_pmd(vma, haddr, pmd);
>>>>>  	}
>>>>>
>>>>> -	pmd_migration = is_pmd_migration_entry(*pmd);
>>>>> -	if (unlikely(pmd_migration)) {
>>>>> -		swp_entry_t entry;
>>>>>
>>>>> +	present = pmd_present(*pmd);
>>>>> +	if (is_pmd_migration_entry(*pmd)) {
>>>>>  		old_pmd = *pmd;
>>>>> -		entry = pmd_to_swp_entry(old_pmd);
>>>>> -		page = pfn_swap_entry_to_page(entry);
>>>>> -		write = is_writable_migration_entry(entry);
>>>>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>>>>> +		page = pfn_swap_entry_to_page(swp_entry);
>>>>> +		folio = page_folio(page);
>>>>> +
>>>>> +		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>>>> +		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>>>> +
>>>>> +		write = is_writable_migration_entry(swp_entry);
>>>>>  		if (PageAnon(page))
>>>>> -			anon_exclusive = is_readable_exclusive_migration_entry(entry);
>>>>> -		young = is_migration_entry_young(entry);
>>>>> -		dirty = is_migration_entry_dirty(entry);
>>>>> +			anon_exclusive = is_readable_exclusive_migration_entry(swp_entry);
>>>>> +		young = is_migration_entry_young(swp_entry);
>>>>> +		dirty = is_migration_entry_dirty(swp_entry);
>>>>> +	} else if (is_pmd_device_private_entry(*pmd)) {
>>>>> +		old_pmd = *pmd;
>>>>> +		swp_entry = pmd_to_swp_entry(old_pmd);
>>>>> +		page = pfn_swap_entry_to_page(swp_entry);
>>>>> +		folio = page_folio(page);
>>>>> +
>>>>>  		soft_dirty = pmd_swp_soft_dirty(old_pmd);
>>>>>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>>>> +
>>>>> +		write = is_writable_device_private_entry(swp_entry);
>>>>> +		anon_exclusive = PageAnonExclusive(page);
>>>>> +
>>>>> +		if (freeze && anon_exclusive &&
>>>>> +		    folio_try_share_anon_rmap_pmd(folio, page))
>>>>> +			freeze = false;
>>>>
>>>> Why is it OK to change the freeze request? OK, it is replicating
>>>> the code for present PMD folios. Either add a comment to point
>>>> to the explanation in the comment below, or move
>>>> “if (is_pmd_device_private_entry(*pmd))“ branch in the else below
>>>> to deduplicate this code.
>>>
>>> Similar to the code for present pages, ideally folio_try_share_anon_rmap_pmd()
>>> should never fail.
>>
>> anon_exclusive = PageAnonExclusive(page);
>> if (freeze && anon_exclusive &&
>>     folio_try_share_anon_rmap_pmd(folio, page))
>>         freeze = false;
>> if (!freeze) {
>>         rmap_t rmap_flags = RMAP_NONE;
>>
>>         folio_ref_add(folio, HPAGE_PMD_NR - 1);
>>         if (anon_exclusive)
>>                 rmap_flags |= RMAP_EXCLUSIVE;
>>         folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
>>                                     vma, haddr, rmap_flags);
>> }
>>
>> are the same for both device private and present. Can it be deduplicated
>> by doing below?
>>
>> if (is_pmd_migration_entry(*pmd)) {
>> ...
>> } else {
>> 	if (is_pmd_device_private_entry(*pmd)) {
>> 		...
>> 	} else if (pmd_present()) {
>> 		...
>> 	}
>>
>> 	/* the above code */
>> }
>>
>> If not, at least adding a comment in the device private copy of the code
>> pointing to the present copy's comment.
>>
>>>
>>>>
>>>>> +		if (!freeze) {
>>>>> +			rmap_t rmap_flags = RMAP_NONE;
>>>>> +
>>>>> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
>>>>> +			if (anon_exclusive)
>>>>> +				rmap_flags |= RMAP_EXCLUSIVE;
>>>>> +
>>>>> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
>>>>> +						 vma, haddr, rmap_flags);
>>>>> +		}
>>>>>  	} else {
>>>>>  		/*
>>>>>  		 * Up to this point the pmd is present and huge and userland has
>>>>> @@ -3026,32 +3055,57 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>>  	 * Note that NUMA hinting access restrictions are not transferred to
>>>>>  	 * avoid any possibility of altering permissions across VMAs.
>>>>>  	 */
>>>>> -	if (freeze || pmd_migration) {
>>>>> -		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
>>>>> -			pte_t entry;
>>>>> -			swp_entry_t swp_entry;
>>>>> -
>>>>> -			if (write)
>>>>> -				swp_entry = make_writable_migration_entry(
>>>>> -							page_to_pfn(page + i));
>>>>> -			else if (anon_exclusive)
>>>>> -				swp_entry = make_readable_exclusive_migration_entry(
>>>>> -							page_to_pfn(page + i));
>>>>> -			else
>>>>> -				swp_entry = make_readable_migration_entry(
>>>>> -							page_to_pfn(page + i));
>>>>> -			if (young)
>>>>> -				swp_entry = make_migration_entry_young(swp_entry);
>>>>> -			if (dirty)
>>>>> -				swp_entry = make_migration_entry_dirty(swp_entry);
>>>>> -			entry = swp_entry_to_pte(swp_entry);
>>>>> -			if (soft_dirty)
>>>>> -				entry = pte_swp_mksoft_dirty(entry);
>>>>> -			if (uffd_wp)
>>>>> -				entry = pte_swp_mkuffd_wp(entry);
>>>>> +	if (freeze || !present) {
>>>>> +		pte_t entry;
>>>>>
>>>>> -			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
>>>>> -			set_pte_at(mm, addr, pte + i, entry);
>>>>> +		if (freeze || is_migration_entry(swp_entry)) {
>>>>>
>>>> <snip>
>>>>> +		} else {
>>>> <snip>
>>>>>  		}
>>>>>  	} else {
>>>>>  		pte_t entry;
>>>>
>>>> David already pointed this out in v5. It can be done such as:
>>>>
>>>> if (freeze || pmd_migration) {
>>>> ...
>>>> } else if (is_pmd_device_private_entry(old_pmd)) {
>>>> ...
>>>
>>> No.. freeze can be true for device private entries as well
>>
>> When freeze is true, migration entry is installed in place of
>> device private entry, since the "if (freeze || pmd_migration)"
>> branch is taken. This proposal is same as your code. What is
>> the difference?
>>
>
> I read the else if incorrectly, I'll simplify
>
>>>
>>>> } else {
>>>> /* for present, non freeze case */
>>>> }
>>>>
>>>>> @@ -3076,7 +3130,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>>  	}
>>>>>  	pte_unmap(pte);
>>>>>
>>>>> -	if (!pmd_migration)
>>>>> +	if (!is_pmd_migration_entry(*pmd))
>>>>>  		folio_remove_rmap_pmd(folio, page, vma);
>>>>>  	if (freeze)
>>>>>  		put_page(page);
>>>>> @@ -3089,7 +3143,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>>>>>  			   pmd_t *pmd, bool freeze)
>>>>>  {
>>>>>  	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
>>>>> -	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
>>>>> +	if (pmd_trans_huge(*pmd) || is_pmd_non_present_folio_entry(*pmd))
>>>>>  		__split_huge_pmd_locked(vma, pmd, address, freeze);
>>>>>  }
>>>>>
>>>>> @@ -3268,6 +3322,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
>>>>>  	VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
>>>>>  	lockdep_assert_held(&lruvec->lru_lock);
>>>>>
>>>>> +	if (folio_is_device_private(folio))
>>>>> +		return;
>>>>> +
>>>>>  	if (list) {
>>>>>  		/* page reclaim is reclaiming a huge page */
>>>>>  		VM_WARN_ON(folio_test_lru(folio));
>>>>> @@ -3885,8 +3942,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>  	if (nr_shmem_dropped)
>>>>>  		shmem_uncharge(mapping->host, nr_shmem_dropped);
>>>>>
>>>>> -	if (!ret && is_anon)
>>>>> +	if (!ret && is_anon && !folio_is_device_private(folio))
>>>>>  		remap_flags = RMP_USE_SHARED_ZEROPAGE;
>>>>> +
>>>>
>>>> You should remove this and add
>>>>
>>>> if (folio_is_device_private(folio))
>>>> 	return false;
>>>>
>>>> in try_to_map_unused_to_zeropage(). Otherwise, no one would know
>>>> device private folios need to be excluded from mapping unused to
>>>> zero page.
>>>>
>>>
>>> I had that upto v2 and then David asked me to remove it. FYI, this
>>> is the only call site for RMP_USE_SHARED_ZEROPAGE
>>
>> Can you provide a link?
>>
>
> Please see https://lore.kernel.org/linux-mm/20250306044239.3874247-3-balbirs@nvidia.com/T/

I do not see any comment on removing the device private folio check
in try_to_map_unused_to_zeropage(). Can you try again?

>
>> Even if this is the only call site, there is no guarantee that
>> there will be none in the future. I am not sure why we want caller
>> to handle this special case. Who is going to tell the next user
>> of RMP_USE_SHARED_ZEROPAGE or caller to try_to_map_unused_to_zeropage()
>> that device private is incompatible with them?
>>
>
> I don't disagree, but the question was why device private pages are even
> making it to try_to_map_unused_to_zeropage() in the first place.

Then, it could be done in remove_migration_pte():

if (rmap_walk_arg->map_unused_to_zeropage &&
	!folio_is_device_private(folio) &&
	try_to_map_unused_to_zeropage(&pvmw, folio, idx))
	continue;

Maybe I am too hung up on this and someone else could pat me on the back and
tell me it is OK to just do this at the only caller instead. :)

>>>>>  	remap_page(folio, 1 << order, remap_flags);
>>>>>
>>>>>  	/*
>>>>> -- 
>>>>> 2.50.1
>>>>
>>>>
>>>
>>> Thanks for the review
>>> Balbir
>
> Thanks,
> Balbir


Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 05/15] mm/migrate_device: handle partially mapped folios during collection
  2025-09-23 15:56       ` Karim Manaouil
@ 2025-09-24  4:47         ` Balbir Singh
  2025-09-30 11:58         ` Balbir Singh
  1 sibling, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-24  4:47 UTC (permalink / raw)
  To: Karim Manaouil
  Cc: Zi Yan, linux-kernel, linux-mm, damon, dri-devel,
	David Hildenbrand, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/24/25 01:56, Karim Manaouil wrote:
> On Tue, Sep 23, 2025 at 01:44:20PM +1000, Balbir Singh wrote:
>> On 9/23/25 12:23, Zi Yan wrote:
>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>
>>>> Extend migrate_vma_collect_pmd() to handle partially mapped large folios
>>>> that require splitting before migration can proceed.
>>>>
>>>> During PTE walk in the collection phase, if a large folio is only
>>>> partially mapped in the migration range, it must be split to ensure the
>>>> folio is correctly migrated.
>>>>
>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>> Cc: David Hildenbrand <david@redhat.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>> Cc: Gregory Price <gourry@gourry.net>
>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>> Cc: Nico Pache <npache@redhat.com>
>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>> Cc: Barry Song <baohua@kernel.org>
>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>> Cc: David Airlie <airlied@gmail.com>
>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>> ---
>>>>  mm/migrate_device.c | 82 +++++++++++++++++++++++++++++++++++++++++++++
>>>>  1 file changed, 82 insertions(+)
>>>>
>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>>> index abd9f6850db6..70c0601f70ea 100644
>>>> --- a/mm/migrate_device.c
>>>> +++ b/mm/migrate_device.c
>>>> @@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
>>>>  	return 0;
>>>>  }
>>>>
>>>> +/**
>>>> + * migrate_vma_split_folio() - Helper function to split a THP folio
>>>> + * @folio: the folio to split
>>>> + * @fault_page: struct page associated with the fault if any
>>>> + *
>>>> + * Returns 0 on success
>>>> + */
>>>> +static int migrate_vma_split_folio(struct folio *folio,
>>>> +				   struct page *fault_page)
>>>> +{
>>>> +	int ret;
>>>> +	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
>>>> +	struct folio *new_fault_folio = NULL;
>>>> +
>>>> +	if (folio != fault_folio) {
>>>> +		folio_get(folio);
>>>> +		folio_lock(folio);
>>>> +	}
>>>> +
>>>> +	ret = split_folio(folio);
>>>> +	if (ret) {
>>>> +		if (folio != fault_folio) {
>>>> +			folio_unlock(folio);
>>>> +			folio_put(folio);
>>>> +		}
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
>>>> +
>>>> +	/*
>>>> +	 * Ensure the lock is held on the correct
>>>> +	 * folio after the split
>>>> +	 */
>>>> +	if (!new_fault_folio) {
>>>> +		folio_unlock(folio);
>>>> +		folio_put(folio);
>>>> +	} else if (folio != new_fault_folio) {
>>>> +		folio_get(new_fault_folio);
>>>> +		folio_lock(new_fault_folio);
>>>> +		folio_unlock(folio);
>>>> +		folio_put(folio);
>>>> +	}
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>>  static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>  				   unsigned long start,
>>>>  				   unsigned long end,
>>>> @@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>  			 * page table entry. Other special swap entries are not
>>>>  			 * migratable, and we ignore regular swapped page.
>>>>  			 */
>>>> +			struct folio *folio;
>>>> +
>>>>  			entry = pte_to_swp_entry(pte);
>>>>  			if (!is_device_private_entry(entry))
>>>>  				goto next;
>>>> @@ -147,6 +196,23 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>  			    pgmap->owner != migrate->pgmap_owner)
>>>>  				goto next;
>>>>
>>>> +			folio = page_folio(page);
>>>> +			if (folio_test_large(folio)) {
>>>> +				int ret;
>>>> +
>>>> +				pte_unmap_unlock(ptep, ptl);
>>>> +				ret = migrate_vma_split_folio(folio,
>>>> +							  migrate->fault_page);
>>>> +
>>>> +				if (ret) {
>>>> +					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
>>>> +					goto next;
>>>> +				}
>>>> +
>>>> +				addr = start;
>>>> +				goto again;
>>>> +			}
>>>
>>> This does not look right to me.
>>>
>>> The folio here is device private, but migrate_vma_split_folio()
>>> calls split_folio(), which cannot handle device private folios yet.
>>> Your change to split_folio() is in Patch 10 and should be moved
>>> before this patch.
>>>
>>
>> Patch 10 is to split the folio in the middle of migration (when we have
>> converted the entries to migration entries). This patch relies on the
>> changes in patch 4. I agree the names are confusing, I'll reword the
>> functions
> 
> Hi Balbir,
> 
> I am still reviewing the patches, but I think I agree with Zi here.
> 
> split_folio() will replace the PMD mappings of the huge folio with PTE
> mappings, but will also split the folio into smaller folios. The former
> is ok with this patch, but the latter is probably not correct if the folio
> is a zone device folio. The driver needs to know about the change, as
> usually the driver will have some sort of mapping between GPU physical
> memory chunks and their corresponding zone device pages.
> 

Yes, at this point there is no support for a folio split callback. I can move
this bit to a later patch or move the entire patch to after patch 10. I suspect
this is a theoretical bisection concern for a future driver using large folios?
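
To illustrate what such a callback could look like (purely a sketch; the
name and signature below are made up for discussion and are not part of
this series):

	/* Hypothetical sketch only -- not something this series adds. */
	struct dev_pagemap_ops {
		/* ... existing callbacks (page_free, migrate_to_ram, ...) ... */

		/*
		 * Notify the driver that a device private folio has been
		 * split, so it can update its own chunk <-> page bookkeeping.
		 */
		void (*folio_split)(struct folio *original, struct folio *new_folio);
	};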

Thanks,
Balbir



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 03/15] mm/rmap: extend rmap and migration support device-private entries
  2025-09-23  3:39     ` Balbir Singh
@ 2025-09-24 10:46       ` David Hildenbrand
  0 siblings, 0 replies; 57+ messages in thread
From: David Hildenbrand @ 2025-09-24 10:46 UTC (permalink / raw)
  To: Balbir Singh, Zi Yan
  Cc: linux-kernel, linux-mm, damon, dri-devel, SeongJae Park,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast


>>> +			if (!pmd_present(entry))
>>> +				continue;
>>> +			if (!pmd_dirty(entry) && !pmd_write(entry))
>>>   				continue;
>>>
>>>   			flush_cache_range(vma, address,
>>> @@ -2330,6 +2332,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>>>   	while (page_vma_mapped_walk(&pvmw)) {
>>>   		/* PMD-mapped THP migration entry */
>>>   		if (!pvmw.pte) {
>>> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>>> +			unsigned long pfn;
>>> +			pmd_t pmdval;
>>> +#endif
>>> +
>>
>> This looks ugly. IIRC, we now can put variable definition in the middle.
>> Maybe for this case, these two can be moved to the below ifdef region.
>>
> 
> I can't find any examples of mixing declarations and could not find any clear
> guidance in the coding style

Rather not do it :)

__maybe_unused might help avoid the ifdef.
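
E.g. (a sketch of the idea only, not the actual patch):

	/* instead of wrapping the declarations in #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION */
	unsigned long pfn __maybe_unused;
	pmd_t pmdval __maybe_unused;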

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-18  2:49   ` Zi Yan
  2025-09-19  5:01     ` Balbir Singh
@ 2025-09-24 10:55     ` David Hildenbrand
  2025-09-24 17:36       ` Zi Yan
  1 sibling, 1 reply; 57+ messages in thread
From: David Hildenbrand @ 2025-09-24 10:55 UTC (permalink / raw)
  To: Zi Yan, Balbir Singh, Alistair Popple
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 18.09.25 04:49, Zi Yan wrote:
> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> 
>> Add routines to support allocation of large order zone device folios
>> and helper functions for zone device folios, to check if a folio is
>> device private and helpers for setting zone device data.
>>
>> When large folios are used, the existing page_free() callback in
>> pgmap is called when the folio is freed, this is true for both
>> PAGE_SIZE and higher order pages.
>>
>> Zone device private large folios do not support deferred split and
>> scan like normal THP folios.
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>   include/linux/memremap.h | 10 +++++++++-
>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>   mm/rmap.c                |  6 +++++-
>>   3 files changed, 35 insertions(+), 15 deletions(-)
>>
>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>> index e5951ba12a28..9c20327c2be5 100644
>> --- a/include/linux/memremap.h
>> +++ b/include/linux/memremap.h
>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>   }
>>
>>   #ifdef CONFIG_ZONE_DEVICE
>> -void zone_device_page_init(struct page *page);
>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>   void memunmap_pages(struct dev_pagemap *pgmap);
>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>
>>   unsigned long memremap_compat_align(void);
>> +
>> +static inline void zone_device_page_init(struct page *page)
>> +{
>> +	struct folio *folio = page_folio(page);
>> +
>> +	zone_device_folio_init(folio, 0);
> 
> I assume it is for legacy code, where only non-compound page exists?
> 
> It seems that you assume @page is always order-0, but there is no check
> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
> above it would be useful to detect misuse.
> 
>> +}
>> +
>>   #else
>>   static inline void *devm_memremap_pages(struct device *dev,
>>   		struct dev_pagemap *pgmap)
>> diff --git a/mm/memremap.c b/mm/memremap.c
>> index 46cb1b0b6f72..a8481ebf94cc 100644
>> --- a/mm/memremap.c
>> +++ b/mm/memremap.c
>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>   void free_zone_device_folio(struct folio *folio)
>>   {
>>   	struct dev_pagemap *pgmap = folio->pgmap;
>> +	unsigned long nr = folio_nr_pages(folio);
>> +	int i;
>>
>>   	if (WARN_ON_ONCE(!pgmap))
>>   		return;
>>
>>   	mem_cgroup_uncharge(folio);
>>
>> -	/*
>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>> -	 * PG_anon_exclusive on all tail pages.
>> -	 */
>>   	if (folio_test_anon(folio)) {
>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>> +		for (i = 0; i < nr; i++)
>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>> +	} else {
>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>   	}
>>
>>   	/*
>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>   	case MEMORY_DEVICE_COHERENT:
>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>   			break;
>> -		pgmap->ops->page_free(folio_page(folio, 0));
>> -		put_dev_pagemap(pgmap);
>> +		pgmap->ops->page_free(&folio->page);
>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>   		break;
>>
>>   	case MEMORY_DEVICE_GENERIC:
>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>   	}
>>   }
>>
>> -void zone_device_page_init(struct page *page)
>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>   {
>> +	struct page *page = folio_page(folio, 0);
> 
> It is strange to see a folio is converted back to page in
> a function called zone_device_folio_init().
> 
>> +
>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>> +
>>   	/*
>>   	 * Drivers shouldn't be allocating pages after calling
>>   	 * memunmap_pages().
>>   	 */
>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>> -	set_page_count(page, 1);
>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>> +	folio_set_count(folio, 1);
>>   	lock_page(page);
>> +
>> +	if (order > 1) {
>> +		prep_compound_page(page, order);
>> +		folio_set_large_rmappable(folio);
>> +	}
> 
> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
> is called.
> 
> I feel that your zone_device_page_init() and zone_device_folio_init()
> implementations are inverse. They should follow the same pattern
> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
> zone_device_page_init() does the actual initialization and
> zone_device_folio_init() just convert a page to folio.
> 
> Something like:
> 
> void zone_device_page_init(struct page *page, unsigned int order)
> {
> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> 
> 	/*
> 	 * Drivers shouldn't be allocating pages after calling
> 	 * memunmap_pages().
> 	 */
> 
>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> 	
> 	/*
> 	 * anonymous folio does not support order-1, high order file-backed folio
> 	 * is not supported at all.
> 	 */
> 	VM_WARN_ON_ONCE(order == 1);
> 
> 	if (order > 1)
> 		prep_compound_page(page, order);
> 
> 	/* page has to be compound head here */
> 	set_page_count(page, 1);
> 	lock_page(page);
> }
> 
> void zone_device_folio_init(struct folio *folio, unsigned int order)
> {
> 	struct page *page = folio_page(folio, 0);
> 
> 	zone_device_page_init(page, order);
> 	page_rmappable_folio(page);
> }
> 
> Or
> 
> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
> {
> 	zone_device_page_init(page, order);
> 	return page_rmappable_folio(page);
> }

I think the problem is that it will all be weird once we dynamically 
allocate "struct folio".

I do not yet have a clear understanding of how that would really work.

For example, should it be pgmap->ops->page_folio() ?

Who allocates the folio? Do we allocate all order-0 folios initially, to 
then merge them when constructing large folios? How do we manage the 
"struct folio" during such merging and splitting?

With that in mind, I don't really know what the proper interface should 
be today.


zone_device_folio_init(struct page *page, unsigned int order)

looks cleaner, agreed.

> 
> 
> Then, it comes to free_zone_device_folio() above,
> I feel that pgmap->ops->page_free() should take an additional order
> parameter to free a compound page like free_frozen_pages().
> 

IIRC free_frozen_pages() does not operate on compound pages. If we know 
that we are operating on a compound page (or single page) then passing 
in the page (or better the folio) should work.
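For illustration, the branch in free_zone_device_folio() quoted above could
then look roughly like the below. This is only a rough, untested sketch;
folio_free() is a made-up op name, today's callback is still page_free():

	case MEMORY_DEVICE_COHERENT:
		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->folio_free))
			break;
		/* the driver sees the whole (possibly large) folio at once */
		pgmap->ops->folio_free(folio);
		percpu_ref_put_many(&pgmap->ref, folio_nr_pages(folio));
		break;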

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-23  3:47         ` Balbir Singh
@ 2025-09-24 11:04           ` David Hildenbrand
  2025-09-24 17:49             ` Zi Yan
  0 siblings, 1 reply; 57+ messages in thread
From: David Hildenbrand @ 2025-09-24 11:04 UTC (permalink / raw)
  To: Balbir Singh, Zi Yan, Alistair Popple
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 23.09.25 05:47, Balbir Singh wrote:
> On 9/19/25 23:26, Zi Yan wrote:
>> On 19 Sep 2025, at 1:01, Balbir Singh wrote:
>>
>>> On 9/18/25 12:49, Zi Yan wrote:
>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>
>>>>> Add routines to support allocation of large order zone device folios
>>>>> and helper functions for zone device folios, to check if a folio is
>>>>> device private and helpers for setting zone device data.
>>>>>
>>>>> When large folios are used, the existing page_free() callback in
>>>>> pgmap is called when the folio is freed, this is true for both
>>>>> PAGE_SIZE and higher order pages.
>>>>>
>>>>> Zone device private large folios do not support deferred split and
>>>>> scan like normal THP folios.
>>>>>
>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>> ---
>>>>>   include/linux/memremap.h | 10 +++++++++-
>>>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>>   mm/rmap.c                |  6 +++++-
>>>>>   3 files changed, 35 insertions(+), 15 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>> index e5951ba12a28..9c20327c2be5 100644
>>>>> --- a/include/linux/memremap.h
>>>>> +++ b/include/linux/memremap.h
>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>>   }
>>>>>
>>>>>   #ifdef CONFIG_ZONE_DEVICE
>>>>> -void zone_device_page_init(struct page *page);
>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>>   void memunmap_pages(struct dev_pagemap *pgmap);
>>>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>>
>>>>>   unsigned long memremap_compat_align(void);
>>>>> +
>>>>> +static inline void zone_device_page_init(struct page *page)
>>>>> +{
>>>>> +	struct folio *folio = page_folio(page);
>>>>> +
>>>>> +	zone_device_folio_init(folio, 0);
>>>>
>>>> I assume it is for legacy code, where only non-compound page exists?
>>>>
>>>> It seems that you assume @page is always order-0, but there is no check
>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>>> above it would be useful to detect misuse.
>>>>
>>>>> +}
>>>>> +
>>>>>   #else
>>>>>   static inline void *devm_memremap_pages(struct device *dev,
>>>>>   		struct dev_pagemap *pgmap)
>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>>> --- a/mm/memremap.c
>>>>> +++ b/mm/memremap.c
>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>>   void free_zone_device_folio(struct folio *folio)
>>>>>   {
>>>>>   	struct dev_pagemap *pgmap = folio->pgmap;
>>>>> +	unsigned long nr = folio_nr_pages(folio);
>>>>> +	int i;
>>>>>
>>>>>   	if (WARN_ON_ONCE(!pgmap))
>>>>>   		return;
>>>>>
>>>>>   	mem_cgroup_uncharge(folio);
>>>>>
>>>>> -	/*
>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>>>> -	 * PG_anon_exclusive on all tail pages.
>>>>> -	 */
>>>>>   	if (folio_test_anon(folio)) {
>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>>>> +		for (i = 0; i < nr; i++)
>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>>>> +	} else {
>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>>   	}
>>>>>
>>>>>   	/*
>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>>   	case MEMORY_DEVICE_COHERENT:
>>>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>>   			break;
>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>>>> -		put_dev_pagemap(pgmap);
>>>>> +		pgmap->ops->page_free(&folio->page);
>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>>   		break;
>>>>>
>>>>>   	case MEMORY_DEVICE_GENERIC:
>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>>   	}
>>>>>   }
>>>>>
>>>>> -void zone_device_page_init(struct page *page)
>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>   {
>>>>> +	struct page *page = folio_page(folio, 0);
>>>>
>>>> It is strange to see a folio is converted back to page in
>>>> a function called zone_device_folio_init().
>>>>
>>>>> +
>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>> +
>>>>>   	/*
>>>>>   	 * Drivers shouldn't be allocating pages after calling
>>>>>   	 * memunmap_pages().
>>>>>   	 */
>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>>> -	set_page_count(page, 1);
>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>> +	folio_set_count(folio, 1);
>>>>>   	lock_page(page);
>>>>> +
>>>>> +	if (order > 1) {
>>>>> +		prep_compound_page(page, order);
>>>>> +		folio_set_large_rmappable(folio);
>>>>> +	}
>>>>
>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>>> is called.
>>>>
>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>>> implementations are inverse. They should follow the same pattern
>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>>> zone_device_page_init() does the actual initialization and
>>>> zone_device_folio_init() just convert a page to folio.
>>>>
>>>> Something like:
>>>>
>>>> void zone_device_page_init(struct page *page, unsigned int order)
>>>> {
>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>
>>>> 	/*
>>>> 	 * Drivers shouldn't be allocating pages after calling
>>>> 	 * memunmap_pages().
>>>> 	 */
>>>>
>>>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>> 	
>>>> 	/*
>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
>>>> 	 * is not supported at all.
>>>> 	 */
>>>> 	VM_WARN_ON_ONCE(order == 1);
>>>>
>>>> 	if (order > 1)
>>>> 		prep_compound_page(page, order);
>>>>
>>>> 	/* page has to be compound head here */
>>>> 	set_page_count(page, 1);
>>>> 	lock_page(page);
>>>> }
>>>>
>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>> {
>>>> 	struct page *page = folio_page(folio, 0);
>>>>
>>>> 	zone_device_page_init(page, order);
>>>> 	page_rmappable_folio(page);
>>>> }
>>>>
>>>> Or
>>>>
>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>>> {
>>>> 	zone_device_page_init(page, order);
>>>> 	return page_rmappable_folio(page);
>>>> }
>>>>
>>>>
>>>> Then, it comes to free_zone_device_folio() above,
>>>> I feel that pgmap->ops->page_free() should take an additional order
>>>> parameter to free a compound page like free_frozen_pages().
>>>>
>>>>
>>>> This is my impression after reading the patch and zone device page code.
>>>>
>>>> Alistair and David can correct me if this is wrong, since I am new to
>>>> zone device page code.
>>>> 	
>>>
>>> Thanks, I did not want to change zone_device_page_init() for several
>>> drivers (outside my test scope) that already assume it has an order size of 0.
>>
>> But my proposed zone_device_page_init() should still work for order-0
>> pages. You just need to change call site to add 0 as a new parameter.
>>
> 
> I did not want to change existing callers (increases testing impact)
> without a strong reason.
> 
>>
>> One strange thing I found in the original zone_device_page_init() is
>> the use of page_pgmap() in
>> WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order)).
>> page_pgmap() calls page_folio() on the given page to access pgmap field.
>> And pgmap field is only available in struct folio. The code initializes
>> struct page, but in middle it suddenly finds the page is actually a folio,
>> then treat it as a page afterwards. I wonder if it can be done better.
>>
>> This might be a question to Alistair, since he made the change.
>>
> 
> I'll let him answer it :)

Not him, but I think this goes back to the question I raised in my other 
reply: when would we allocate "struct folio" in the future?

If it's "always" then actually most of the zone-device code would only 
ever operate on folios and never on pages in the future.

I recall raising that during a discussion at LSF/MM, and the answer was 
(IIRC) that we will allocate "struct folio" when we initialize the 
memmap for dax.

So essentially, we'd always have folios and would never really have to 
operate on pages.

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-24 10:55     ` David Hildenbrand
@ 2025-09-24 17:36       ` Zi Yan
  2025-09-24 23:58         ` Alistair Popple
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-24 17:36 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Balbir Singh, Alistair Popple, linux-kernel, linux-mm, damon,
	dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 24 Sep 2025, at 6:55, David Hildenbrand wrote:

> On 18.09.25 04:49, Zi Yan wrote:
>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>
>>> Add routines to support allocation of large order zone device folios
>>> and helper functions for zone device folios, to check if a folio is
>>> device private and helpers for setting zone device data.
>>>
>>> When large folios are used, the existing page_free() callback in
>>> pgmap is called when the folio is freed, this is true for both
>>> PAGE_SIZE and higher order pages.
>>>
>>> Zone device private large folios do not support deferred split and
>>> scan like normal THP folios.
>>>
>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>> Cc: Byungchul Park <byungchul@sk.com>
>>> Cc: Gregory Price <gourry@gourry.net>
>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>> Cc: Alistair Popple <apopple@nvidia.com>
>>> Cc: Oscar Salvador <osalvador@suse.de>
>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>> Cc: Nico Pache <npache@redhat.com>
>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>> Cc: Dev Jain <dev.jain@arm.com>
>>> Cc: Barry Song <baohua@kernel.org>
>>> Cc: Lyude Paul <lyude@redhat.com>
>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>> Cc: David Airlie <airlied@gmail.com>
>>> Cc: Simona Vetter <simona@ffwll.ch>
>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>> ---
>>>   include/linux/memremap.h | 10 +++++++++-
>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>   mm/rmap.c                |  6 +++++-
>>>   3 files changed, 35 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>> index e5951ba12a28..9c20327c2be5 100644
>>> --- a/include/linux/memremap.h
>>> +++ b/include/linux/memremap.h
>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>   }
>>>
>>>   #ifdef CONFIG_ZONE_DEVICE
>>> -void zone_device_page_init(struct page *page);
>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>   void memunmap_pages(struct dev_pagemap *pgmap);
>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>
>>>   unsigned long memremap_compat_align(void);
>>> +
>>> +static inline void zone_device_page_init(struct page *page)
>>> +{
>>> +	struct folio *folio = page_folio(page);
>>> +
>>> +	zone_device_folio_init(folio, 0);
>>
>> I assume it is for legacy code, where only non-compound page exists?
>>
>> It seems that you assume @page is always order-0, but there is no check
>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>> above it would be useful to detect misuse.
>>
>>> +}
>>> +
>>>   #else
>>>   static inline void *devm_memremap_pages(struct device *dev,
>>>   		struct dev_pagemap *pgmap)
>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>> --- a/mm/memremap.c
>>> +++ b/mm/memremap.c
>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>   void free_zone_device_folio(struct folio *folio)
>>>   {
>>>   	struct dev_pagemap *pgmap = folio->pgmap;
>>> +	unsigned long nr = folio_nr_pages(folio);
>>> +	int i;
>>>
>>>   	if (WARN_ON_ONCE(!pgmap))
>>>   		return;
>>>
>>>   	mem_cgroup_uncharge(folio);
>>>
>>> -	/*
>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>> -	 * PG_anon_exclusive on all tail pages.
>>> -	 */
>>>   	if (folio_test_anon(folio)) {
>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>> +		for (i = 0; i < nr; i++)
>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>> +	} else {
>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>   	}
>>>
>>>   	/*
>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>   	case MEMORY_DEVICE_COHERENT:
>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>   			break;
>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>> -		put_dev_pagemap(pgmap);
>>> +		pgmap->ops->page_free(&folio->page);
>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>   		break;
>>>
>>>   	case MEMORY_DEVICE_GENERIC:
>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>   	}
>>>   }
>>>
>>> -void zone_device_page_init(struct page *page)
>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>   {
>>> +	struct page *page = folio_page(folio, 0);
>>
>> It is strange to see a folio is converted back to page in
>> a function called zone_device_folio_init().
>>
>>> +
>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>> +
>>>   	/*
>>>   	 * Drivers shouldn't be allocating pages after calling
>>>   	 * memunmap_pages().
>>>   	 */
>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>> -	set_page_count(page, 1);
>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>> +	folio_set_count(folio, 1);
>>>   	lock_page(page);
>>> +
>>> +	if (order > 1) {
>>> +		prep_compound_page(page, order);
>>> +		folio_set_large_rmappable(folio);
>>> +	}
>>
>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>> is called.
>>
>> I feel that your zone_device_page_init() and zone_device_folio_init()
>> implementations are inverse. They should follow the same pattern
>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>> zone_device_page_init() does the actual initialization and
>> zone_device_folio_init() just convert a page to folio.
>>
>> Something like:
>>
>> void zone_device_page_init(struct page *page, unsigned int order)
>> {
>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>
>> 	/*
>> 	 * Drivers shouldn't be allocating pages after calling
>> 	 * memunmap_pages().
>> 	 */
>>
>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>> 	
>> 	/*
>> 	 * anonymous folio does not support order-1, high order file-backed folio
>> 	 * is not supported at all.
>> 	 */
>> 	VM_WARN_ON_ONCE(order == 1);
>>
>> 	if (order > 1)
>> 		prep_compound_page(page, order);
>>
>> 	/* page has to be compound head here */
>> 	set_page_count(page, 1);
>> 	lock_page(page);
>> }
>>
>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>> {
>> 	struct page *page = folio_page(folio, 0);
>>
>> 	zone_device_page_init(page, order);
>> 	page_rmappable_folio(page);
>> }
>>
>> Or
>>
>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>> {
>> 	zone_device_page_init(page, order);
>> 	return page_rmappable_folio(page);
>> }
>
> I think the problem is that it will all be weird once we dynamically allocate "struct folio".
>
> I have not yet a clear understanding on how that would really work.
>
> For example, should it be pgmap->ops->page_folio() ?
>
> Who allocates the folio? Do we allocate all order-0 folios initially, to then merge them when constructing large folios? How do we manage the "struct folio" during such merging splitting?

Right. Either we would waste memory by simply concatenating all “struct folio”
and putting padding at the end, or we would free tail “struct folio” first,
then allocate tail “struct page”. Both are painful and do not match core mm’s
memdesc pattern, where “struct folio” is allocated only when the caller asks
for a folio. If “struct folio” is always allocated, there is no difference
between “struct folio” and “struct page”.

>
> With that in mind, I don't really know what the proper interface should be today.
>
>
> zone_device_folio_init(struct page *page, unsigned int order)
>
> looks cleaner, agreed.
>
>>
>>
>> Then, it comes to free_zone_device_folio() above,
>> I feel that pgmap->ops->page_free() should take an additional order
>> parameter to free a compound page like free_frozen_pages().
>>
>
> IIRC free_frozen_pages() does not operate on compound pages. If we know that we are operating on a compound page (or single page) then passing in the page (or better the folio) should work.

free_pages_prepare() in __free_frozen_pages(), called by free_frozen_pages(),
checks whether compound_order(page) matches the given order, in case the folio
metadata has been corrupted. I suppose it is useful, but I do not have a strong
opinion about this one.
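To make the idea concrete: free_zone_device_folio() would pass the order down,
and the driver's callback could then do the same kind of sanity check
free_pages_prepare() does for regular compound pages. Just a sketch, since
page_free() does not take an order argument today:

	/* core mm side (sketch): pass the order down to the driver */
	pgmap->ops->page_free(folio_page(folio, 0), folio_order(folio));

	/* driver side (sketch): catch a folio order that got corrupted
	 * or mis-passed, like free_pages_prepare() does */
	VM_WARN_ON_ONCE(compound_order(page) != order);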



Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-24 11:04           ` David Hildenbrand
@ 2025-09-24 17:49             ` Zi Yan
  2025-09-24 23:45               ` Alistair Popple
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-24 17:49 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Balbir Singh, Alistair Popple, linux-kernel, linux-mm, damon,
	dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 24 Sep 2025, at 7:04, David Hildenbrand wrote:

> On 23.09.25 05:47, Balbir Singh wrote:
>> On 9/19/25 23:26, Zi Yan wrote:
>>> On 19 Sep 2025, at 1:01, Balbir Singh wrote:
>>>
>>>> On 9/18/25 12:49, Zi Yan wrote:
>>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>>
>>>>>> Add routines to support allocation of large order zone device folios
>>>>>> and helper functions for zone device folios, to check if a folio is
>>>>>> device private and helpers for setting zone device data.
>>>>>>
>>>>>> When large folios are used, the existing page_free() callback in
>>>>>> pgmap is called when the folio is freed, this is true for both
>>>>>> PAGE_SIZE and higher order pages.
>>>>>>
>>>>>> Zone device private large folios do not support deferred split and
>>>>>> scan like normal THP folios.
>>>>>>
>>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>>> ---
>>>>>>   include/linux/memremap.h | 10 +++++++++-
>>>>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>>>   mm/rmap.c                |  6 +++++-
>>>>>>   3 files changed, 35 insertions(+), 15 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>>> index e5951ba12a28..9c20327c2be5 100644
>>>>>> --- a/include/linux/memremap.h
>>>>>> +++ b/include/linux/memremap.h
>>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>>>   }
>>>>>>
>>>>>>   #ifdef CONFIG_ZONE_DEVICE
>>>>>> -void zone_device_page_init(struct page *page);
>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>>>   void memunmap_pages(struct dev_pagemap *pgmap);
>>>>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>>>
>>>>>>   unsigned long memremap_compat_align(void);
>>>>>> +
>>>>>> +static inline void zone_device_page_init(struct page *page)
>>>>>> +{
>>>>>> +	struct folio *folio = page_folio(page);
>>>>>> +
>>>>>> +	zone_device_folio_init(folio, 0);
>>>>>
>>>>> I assume it is for legacy code, where only non-compound page exists?
>>>>>
>>>>> It seems that you assume @page is always order-0, but there is no check
>>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>>>> above it would be useful to detect misuse.
>>>>>
>>>>>> +}
>>>>>> +
>>>>>>   #else
>>>>>>   static inline void *devm_memremap_pages(struct device *dev,
>>>>>>   		struct dev_pagemap *pgmap)
>>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>>>> --- a/mm/memremap.c
>>>>>> +++ b/mm/memremap.c
>>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>>>   void free_zone_device_folio(struct folio *folio)
>>>>>>   {
>>>>>>   	struct dev_pagemap *pgmap = folio->pgmap;
>>>>>> +	unsigned long nr = folio_nr_pages(folio);
>>>>>> +	int i;
>>>>>>
>>>>>>   	if (WARN_ON_ONCE(!pgmap))
>>>>>>   		return;
>>>>>>
>>>>>>   	mem_cgroup_uncharge(folio);
>>>>>>
>>>>>> -	/*
>>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>>>>> -	 * PG_anon_exclusive on all tail pages.
>>>>>> -	 */
>>>>>>   	if (folio_test_anon(folio)) {
>>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>>>>> +		for (i = 0; i < nr; i++)
>>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>>>>> +	} else {
>>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>>>   	}
>>>>>>
>>>>>>   	/*
>>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>   	case MEMORY_DEVICE_COHERENT:
>>>>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>>>   			break;
>>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>>>>> -		put_dev_pagemap(pgmap);
>>>>>> +		pgmap->ops->page_free(&folio->page);
>>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>>>   		break;
>>>>>>
>>>>>>   	case MEMORY_DEVICE_GENERIC:
>>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>   	}
>>>>>>   }
>>>>>>
>>>>>> -void zone_device_page_init(struct page *page)
>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>>   {
>>>>>> +	struct page *page = folio_page(folio, 0);
>>>>>
>>>>> It is strange to see a folio is converted back to page in
>>>>> a function called zone_device_folio_init().
>>>>>
>>>>>> +
>>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>> +
>>>>>>   	/*
>>>>>>   	 * Drivers shouldn't be allocating pages after calling
>>>>>>   	 * memunmap_pages().
>>>>>>   	 */
>>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>>>> -	set_page_count(page, 1);
>>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>>> +	folio_set_count(folio, 1);
>>>>>>   	lock_page(page);
>>>>>> +
>>>>>> +	if (order > 1) {
>>>>>> +		prep_compound_page(page, order);
>>>>>> +		folio_set_large_rmappable(folio);
>>>>>> +	}
>>>>>
>>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>>>> is called.
>>>>>
>>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>>>> implementations are inverse. They should follow the same pattern
>>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>>>> zone_device_page_init() does the actual initialization and
>>>>> zone_device_folio_init() just convert a page to folio.
>>>>>
>>>>> Something like:
>>>>>
>>>>> void zone_device_page_init(struct page *page, unsigned int order)
>>>>> {
>>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>
>>>>> 	/*
>>>>> 	 * Drivers shouldn't be allocating pages after calling
>>>>> 	 * memunmap_pages().
>>>>> 	 */
>>>>>
>>>>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>> 	
>>>>> 	/*
>>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
>>>>> 	 * is not supported at all.
>>>>> 	 */
>>>>> 	VM_WARN_ON_ONCE(order == 1);
>>>>>
>>>>> 	if (order > 1)
>>>>> 		prep_compound_page(page, order);
>>>>>
>>>>> 	/* page has to be compound head here */
>>>>> 	set_page_count(page, 1);
>>>>> 	lock_page(page);
>>>>> }
>>>>>
>>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>> {
>>>>> 	struct page *page = folio_page(folio, 0);
>>>>>
>>>>> 	zone_device_page_init(page, order);
>>>>> 	page_rmappable_folio(page);
>>>>> }
>>>>>
>>>>> Or
>>>>>
>>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>>>> {
>>>>> 	zone_device_page_init(page, order);
>>>>> 	return page_rmappable_folio(page);
>>>>> }
>>>>>
>>>>>
>>>>> Then, it comes to free_zone_device_folio() above,
>>>>> I feel that pgmap->ops->page_free() should take an additional order
>>>>> parameter to free a compound page like free_frozen_pages().
>>>>>
>>>>>
>>>>> This is my impression after reading the patch and zone device page code.
>>>>>
>>>>> Alistair and David can correct me if this is wrong, since I am new to
>>>>> zone device page code.
>>>>> 	
>>>>
>>>> Thanks, I did not want to change zone_device_page_init() for several
>>>> drivers (outside my test scope) that already assume it has an order size of 0.
>>>
>>> But my proposed zone_device_page_init() should still work for order-0
>>> pages. You just need to change call site to add 0 as a new parameter.
>>>
>>
>> I did not want to change existing callers (increases testing impact)
>> without a strong reason.
>>
>>>
>>> One strange thing I found in the original zone_device_page_init() is
>>> the use of page_pgmap() in
>>> WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order)).
>>> page_pgmap() calls page_folio() on the given page to access pgmap field.
>>> And pgmap field is only available in struct folio. The code initializes
>>> struct page, but in middle it suddenly finds the page is actually a folio,
>>> then treat it as a page afterwards. I wonder if it can be done better.
>>>
>>> This might be a question to Alistair, since he made the change.
>>>
>>
>> I'll let him answer it :)
>
> Not him, but I think this goes back to my question raised in my other reply: When would we allocate "struct folio" in the future.
>
> If it's "always" then actually most of the zone-device code would only ever operate on folios and never on pages in the future.
>
> I recall during a discussion at LSF/MM I raised that, and the answer was (IIRC) that we will allocate "struct folio" as we will initialize the memmap for dax.
>
> So essentially, we'd always have folios and would never really have to operate on pages.

Hmm, then what is the point of having “struct folio”, which was originally
added to save compound_head() calls, when everything is a folio in the device
private world? We might need the DAX people to explain the rationale for
“always struct folio”.

Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-24 17:49             ` Zi Yan
@ 2025-09-24 23:45               ` Alistair Popple
  2025-09-25 15:27                 ` Zi Yan
  0 siblings, 1 reply; 57+ messages in thread
From: Alistair Popple @ 2025-09-24 23:45 UTC (permalink / raw)
  To: Zi Yan
  Cc: David Hildenbrand, Balbir Singh, linux-kernel, linux-mm, damon,
	dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 2025-09-25 at 03:49 +1000, Zi Yan <ziy@nvidia.com> wrote...
> On 24 Sep 2025, at 7:04, David Hildenbrand wrote:
> 
> > On 23.09.25 05:47, Balbir Singh wrote:
> >> On 9/19/25 23:26, Zi Yan wrote:
> >>> On 19 Sep 2025, at 1:01, Balbir Singh wrote:
> >>>
> >>>> On 9/18/25 12:49, Zi Yan wrote:
> >>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> >>>>>
> >>>>>> Add routines to support allocation of large order zone device folios
> >>>>>> and helper functions for zone device folios, to check if a folio is
> >>>>>> device private and helpers for setting zone device data.
> >>>>>>
> >>>>>> When large folios are used, the existing page_free() callback in
> >>>>>> pgmap is called when the folio is freed, this is true for both
> >>>>>> PAGE_SIZE and higher order pages.
> >>>>>>
> >>>>>> Zone device private large folios do not support deferred split and
> >>>>>> scan like normal THP folios.
> >>>>>>
> >>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> >>>>>> Cc: David Hildenbrand <david@redhat.com>
> >>>>>> Cc: Zi Yan <ziy@nvidia.com>
> >>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> >>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
> >>>>>> Cc: Byungchul Park <byungchul@sk.com>
> >>>>>> Cc: Gregory Price <gourry@gourry.net>
> >>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> >>>>>> Cc: Alistair Popple <apopple@nvidia.com>
> >>>>>> Cc: Oscar Salvador <osalvador@suse.de>
> >>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> >>>>>> Cc: Nico Pache <npache@redhat.com>
> >>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
> >>>>>> Cc: Dev Jain <dev.jain@arm.com>
> >>>>>> Cc: Barry Song <baohua@kernel.org>
> >>>>>> Cc: Lyude Paul <lyude@redhat.com>
> >>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
> >>>>>> Cc: David Airlie <airlied@gmail.com>
> >>>>>> Cc: Simona Vetter <simona@ffwll.ch>
> >>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
> >>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
> >>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
> >>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
> >>>>>> ---
> >>>>>>   include/linux/memremap.h | 10 +++++++++-
> >>>>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
> >>>>>>   mm/rmap.c                |  6 +++++-
> >>>>>>   3 files changed, 35 insertions(+), 15 deletions(-)
> >>>>>>
> >>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> >>>>>> index e5951ba12a28..9c20327c2be5 100644
> >>>>>> --- a/include/linux/memremap.h
> >>>>>> +++ b/include/linux/memremap.h
> >>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
> >>>>>>   }
> >>>>>>
> >>>>>>   #ifdef CONFIG_ZONE_DEVICE
> >>>>>> -void zone_device_page_init(struct page *page);
> >>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
> >>>>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
> >>>>>>   void memunmap_pages(struct dev_pagemap *pgmap);
> >>>>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
> >>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
> >>>>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
> >>>>>>
> >>>>>>   unsigned long memremap_compat_align(void);
> >>>>>> +
> >>>>>> +static inline void zone_device_page_init(struct page *page)
> >>>>>> +{
> >>>>>> +	struct folio *folio = page_folio(page);
> >>>>>> +
> >>>>>> +	zone_device_folio_init(folio, 0);
> >>>>>
> >>>>> I assume it is for legacy code, where only non-compound page exists?
> >>>>>
> >>>>> It seems that you assume @page is always order-0, but there is no check
> >>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
> >>>>> above it would be useful to detect misuse.
> >>>>>
> >>>>>> +}
> >>>>>> +
> >>>>>>   #else
> >>>>>>   static inline void *devm_memremap_pages(struct device *dev,
> >>>>>>   		struct dev_pagemap *pgmap)
> >>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
> >>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
> >>>>>> --- a/mm/memremap.c
> >>>>>> +++ b/mm/memremap.c
> >>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
> >>>>>>   void free_zone_device_folio(struct folio *folio)
> >>>>>>   {
> >>>>>>   	struct dev_pagemap *pgmap = folio->pgmap;
> >>>>>> +	unsigned long nr = folio_nr_pages(folio);
> >>>>>> +	int i;
> >>>>>>
> >>>>>>   	if (WARN_ON_ONCE(!pgmap))
> >>>>>>   		return;
> >>>>>>
> >>>>>>   	mem_cgroup_uncharge(folio);
> >>>>>>
> >>>>>> -	/*
> >>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
> >>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
> >>>>>> -	 * PG_anon_exclusive on all tail pages.
> >>>>>> -	 */
> >>>>>>   	if (folio_test_anon(folio)) {
> >>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
> >>>>>> +		for (i = 0; i < nr; i++)
> >>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
> >>>>>> +	} else {
> >>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
> >>>>>>   	}
> >>>>>>
> >>>>>>   	/*
> >>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
> >>>>>>   	case MEMORY_DEVICE_COHERENT:
> >>>>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
> >>>>>>   			break;
> >>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
> >>>>>> -		put_dev_pagemap(pgmap);
> >>>>>> +		pgmap->ops->page_free(&folio->page);
> >>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
> >>>>>>   		break;
> >>>>>>
> >>>>>>   	case MEMORY_DEVICE_GENERIC:
> >>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
> >>>>>>   	}
> >>>>>>   }
> >>>>>>
> >>>>>> -void zone_device_page_init(struct page *page)
> >>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
> >>>>>>   {
> >>>>>> +	struct page *page = folio_page(folio, 0);
> >>>>>
> >>>>> It is strange to see a folio is converted back to page in
> >>>>> a function called zone_device_folio_init().
> >>>>>
> >>>>>> +
> >>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>>>>> +
> >>>>>>   	/*
> >>>>>>   	 * Drivers shouldn't be allocating pages after calling
> >>>>>>   	 * memunmap_pages().
> >>>>>>   	 */
> >>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
> >>>>>> -	set_page_count(page, 1);
> >>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >>>>>> +	folio_set_count(folio, 1);
> >>>>>>   	lock_page(page);
> >>>>>> +
> >>>>>> +	if (order > 1) {

Why is this only called for order > 1 rather than order > 0 ?

> >>>>>> +		prep_compound_page(page, order);
> >>>>>> +		folio_set_large_rmappable(folio);
> >>>>>> +	}
> >>>>>
> >>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
> >>>>> is called.
> >>>>>
> >>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
> >>>>> implementations are inverse. They should follow the same pattern
> >>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
> >>>>> zone_device_page_init() does the actual initialization and
> >>>>> zone_device_folio_init() just convert a page to folio.
> >>>>>
> >>>>> Something like:
> >>>>>
> >>>>> void zone_device_page_init(struct page *page, unsigned int order)
> >>>>> {
> >>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>>>>
> >>>>> 	/*
> >>>>> 	 * Drivers shouldn't be allocating pages after calling
> >>>>> 	 * memunmap_pages().
> >>>>> 	 */
> >>>>>
> >>>>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >>>>> 	
> >>>>> 	/*
> >>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
> >>>>> 	 * is not supported at all.
> >>>>> 	 */

I guess that answers my question :-)

> >>>>> 	VM_WARN_ON_ONCE(order == 1);
> >>>>>
> >>>>> 	if (order > 1)
> >>>>> 		prep_compound_page(page, order);
> >>>>>
> >>>>> 	/* page has to be compound head here */
> >>>>> 	set_page_count(page, 1);
> >>>>> 	lock_page(page);
> >>>>> }
> >>>>>
> >>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
> >>>>> {
> >>>>> 	struct page *page = folio_page(folio, 0);
> >>>>>
> >>>>> 	zone_device_page_init(page, order);
> >>>>> 	page_rmappable_folio(page);
> >>>>> }
> >>>>>
> >>>>> Or
> >>>>>
> >>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
> >>>>> {
> >>>>> 	zone_device_page_init(page, order);
> >>>>> 	return page_rmappable_folio(page);
> >>>>> }
> >>>>>
> >>>>>
> >>>>> Then, it comes to free_zone_device_folio() above,
> >>>>> I feel that pgmap->ops->page_free() should take an additional order
> >>>>> parameter to free a compound page like free_frozen_pages().
> >>>>>
> >>>>>
> >>>>> This is my impression after reading the patch and zone device page code.
> >>>>>
> >>>>> Alistair and David can correct me if this is wrong, since I am new to
> >>>>> zone device page code.
> >>>>> 	
> >>>>
> >>>> Thanks, I did not want to change zone_device_page_init() for several
> >>>> drivers (outside my test scope) that already assume it has an order size of 0.

It's a trivial change, so I don't think avoiding changes to other drivers should
be a concern.

> >>>
> >>> But my proposed zone_device_page_init() should still work for order-0
> >>> pages. You just need to change call site to add 0 as a new parameter.
> >>>
> >>
> >> I did not want to change existing callers (increases testing impact)
> >> without a strong reason.
> >>
> >>>
> >>> One strange thing I found in the original zone_device_page_init() is
> >>> the use of page_pgmap() in
> >>> WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order)).
> >>> page_pgmap() calls page_folio() on the given page to access pgmap field.
> >>> And pgmap field is only available in struct folio. The code initializes
> >>> struct page, but in middle it suddenly finds the page is actually a folio,
> >>> then treat it as a page afterwards. I wonder if it can be done better.
> >>>
> >>> This might be a question to Alistair, since he made the change.

Hello! I might be him :)

I think this situation is just historical - when I originally wrote
zone_device_page_init() the pgmap was stored on the page rather than the folio.
That only changed fairly recently with commit 82ba975e4c43 ("mm: allow compound
zone device pages").

The reason pgmap is now only available on the folio is described in the
commit log. The TLDR is that switching FS DAX to use compound pages required
page->compound_head to be available for use, and that field was being shared
with page->pgmap. So the solution was to move pgmap to the folio, freeing up
page->compound_head for use on tail pages.
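In code terms, page_pgmap() now boils down to roughly the following (modulo
any debug checks in the real helper):

	/* pgmap lives on the folio, so resolving it goes via the head */
	static inline struct dev_pagemap *page_pgmap(const struct page *page)
	{
		return page_folio(page)->pgmap;
	}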

The whole percpu pgmap->ref could actually now go away - I've debated removing
it but haven't found the motivation, as it still provides a small advantage on
driver tear down. Basically it just tracks how many pages are allocated from the
pgmap, so drivers can use that to determine whether they need to trigger
migrations before tearing down the pgmap.

The alternative is just to loop over every page in the pgmap to ensure the
folio/page refcounts are 0 before tear down.

> >>>
> >>
> >> I'll let him answer it :)
> >
> > Not him, but I think this goes back to my question raised in my other reply: When would we allocate "struct folio" in the future.
> >
> > If it's "always" then actually most of the zone-device code would only ever operate on folios and never on pages in the future.
> >
> > I recall during a discussion at LSF/MM I raised that, and the answer was (IIRC) that we will allocate "struct folio" as we will initialize the memmap for dax.

Sounds about right.

> > So essentially, we'd always have folios and would never really have to operate on pages.

Yeah, I think I mentioned to Matthew at LSF/MM that I thought ZONE_DEVICE (and
in particular ZONE_DEVICE_PRIVATE) might be a good candidate to experiment with
removing struct pages entirely and switching to memdescs or whatever. Because
we should, in theory at least, only need to operate on folios. But I'm still a
little vague on the details of how that would actually work. It's been on my TODO
list for a while, so maybe I will try and look at it for LPC as a healthy bit of
conference-driven development.

> Hmm, then what is the point of having “struct folio”, which originally is
> added to save compound_head() calls, where everything is a folio in device
> private world? We might need DAX people to explain the rationale of
> “always struct folio”.

Longer term, isn't there an aim to remove struct page? So I assumed moving to
folios was part of that effort. As you say though, many of the clean-ups thus
far related to switching ZONE_DEVICE to folios have indeed just been about
removing compound_head() calls.

 - Alistair

> Best Regards,
> Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-24 17:36       ` Zi Yan
@ 2025-09-24 23:58         ` Alistair Popple
  2025-09-25  0:05           ` Balbir Singh
  2025-09-25  9:43           ` David Hildenbrand
  0 siblings, 2 replies; 57+ messages in thread
From: Alistair Popple @ 2025-09-24 23:58 UTC (permalink / raw)
  To: Zi Yan
  Cc: David Hildenbrand, Balbir Singh, linux-kernel, linux-mm, damon,
	dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 2025-09-25 at 03:36 +1000, Zi Yan <ziy@nvidia.com> wrote...
> On 24 Sep 2025, at 6:55, David Hildenbrand wrote:
> 
> > On 18.09.25 04:49, Zi Yan wrote:
> >> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> >>
> >>> Add routines to support allocation of large order zone device folios
> >>> and helper functions for zone device folios, to check if a folio is
> >>> device private and helpers for setting zone device data.
> >>>
> >>> When large folios are used, the existing page_free() callback in
> >>> pgmap is called when the folio is freed, this is true for both
> >>> PAGE_SIZE and higher order pages.
> >>>
> >>> Zone device private large folios do not support deferred split and
> >>> scan like normal THP folios.
> >>>
> >>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> >>> Cc: David Hildenbrand <david@redhat.com>
> >>> Cc: Zi Yan <ziy@nvidia.com>
> >>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> >>> Cc: Rakie Kim <rakie.kim@sk.com>
> >>> Cc: Byungchul Park <byungchul@sk.com>
> >>> Cc: Gregory Price <gourry@gourry.net>
> >>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> >>> Cc: Alistair Popple <apopple@nvidia.com>
> >>> Cc: Oscar Salvador <osalvador@suse.de>
> >>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> >>> Cc: Nico Pache <npache@redhat.com>
> >>> Cc: Ryan Roberts <ryan.roberts@arm.com>
> >>> Cc: Dev Jain <dev.jain@arm.com>
> >>> Cc: Barry Song <baohua@kernel.org>
> >>> Cc: Lyude Paul <lyude@redhat.com>
> >>> Cc: Danilo Krummrich <dakr@kernel.org>
> >>> Cc: David Airlie <airlied@gmail.com>
> >>> Cc: Simona Vetter <simona@ffwll.ch>
> >>> Cc: Ralph Campbell <rcampbell@nvidia.com>
> >>> Cc: Mika Penttilä <mpenttil@redhat.com>
> >>> Cc: Matthew Brost <matthew.brost@intel.com>
> >>> Cc: Francois Dugast <francois.dugast@intel.com>
> >>> ---
> >>>   include/linux/memremap.h | 10 +++++++++-
> >>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
> >>>   mm/rmap.c                |  6 +++++-
> >>>   3 files changed, 35 insertions(+), 15 deletions(-)
> >>>
> >>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> >>> index e5951ba12a28..9c20327c2be5 100644
> >>> --- a/include/linux/memremap.h
> >>> +++ b/include/linux/memremap.h
> >>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
> >>>   }
> >>>
> >>>   #ifdef CONFIG_ZONE_DEVICE
> >>> -void zone_device_page_init(struct page *page);
> >>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
> >>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
> >>>   void memunmap_pages(struct dev_pagemap *pgmap);
> >>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
> >>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
> >>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
> >>>
> >>>   unsigned long memremap_compat_align(void);
> >>> +
> >>> +static inline void zone_device_page_init(struct page *page)
> >>> +{
> >>> +	struct folio *folio = page_folio(page);
> >>> +
> >>> +	zone_device_folio_init(folio, 0);
> >>
> >> I assume it is for legacy code, where only non-compound page exists?
> >>
> >> It seems that you assume @page is always order-0, but there is no check
> >> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
> >> above it would be useful to detect misuse.
> >>
> >>> +}
> >>> +
> >>>   #else
> >>>   static inline void *devm_memremap_pages(struct device *dev,
> >>>   		struct dev_pagemap *pgmap)
> >>> diff --git a/mm/memremap.c b/mm/memremap.c
> >>> index 46cb1b0b6f72..a8481ebf94cc 100644
> >>> --- a/mm/memremap.c
> >>> +++ b/mm/memremap.c
> >>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
> >>>   void free_zone_device_folio(struct folio *folio)
> >>>   {
> >>>   	struct dev_pagemap *pgmap = folio->pgmap;
> >>> +	unsigned long nr = folio_nr_pages(folio);
> >>> +	int i;
> >>>
> >>>   	if (WARN_ON_ONCE(!pgmap))
> >>>   		return;
> >>>
> >>>   	mem_cgroup_uncharge(folio);
> >>>
> >>> -	/*
> >>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
> >>> -	 * and we could PTE-map them similar to THP, we'd have to clear
> >>> -	 * PG_anon_exclusive on all tail pages.
> >>> -	 */
> >>>   	if (folio_test_anon(folio)) {
> >>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
> >>> +		for (i = 0; i < nr; i++)
> >>> +			__ClearPageAnonExclusive(folio_page(folio, i));
> >>> +	} else {
> >>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
> >>>   	}
> >>>
> >>>   	/*
> >>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
> >>>   	case MEMORY_DEVICE_COHERENT:
> >>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
> >>>   			break;
> >>> -		pgmap->ops->page_free(folio_page(folio, 0));
> >>> -		put_dev_pagemap(pgmap);
> >>> +		pgmap->ops->page_free(&folio->page);
> >>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
> >>>   		break;
> >>>
> >>>   	case MEMORY_DEVICE_GENERIC:
> >>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
> >>>   	}
> >>>   }
> >>>
> >>> -void zone_device_page_init(struct page *page)
> >>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
> >>>   {
> >>> +	struct page *page = folio_page(folio, 0);
> >>
> >> It is strange to see a folio is converted back to page in
> >> a function called zone_device_folio_init().
> >>
> >>> +
> >>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>> +
> >>>   	/*
> >>>   	 * Drivers shouldn't be allocating pages after calling
> >>>   	 * memunmap_pages().
> >>>   	 */
> >>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
> >>> -	set_page_count(page, 1);
> >>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >>> +	folio_set_count(folio, 1);
> >>>   	lock_page(page);
> >>> +
> >>> +	if (order > 1) {
> >>> +		prep_compound_page(page, order);
> >>> +		folio_set_large_rmappable(folio);
> >>> +	}
> >>
> >> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
> >> is called.
> >>
> >> I feel that your zone_device_page_init() and zone_device_folio_init()
> >> implementations are inverse. They should follow the same pattern
> >> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
> >> zone_device_page_init() does the actual initialization and
> >> zone_device_folio_init() just convert a page to folio.
> >>
> >> Something like:
> >>
> >> void zone_device_page_init(struct page *page, unsigned int order)
> >> {
> >> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>
> >> 	/*
> >> 	 * Drivers shouldn't be allocating pages after calling
> >> 	 * memunmap_pages().
> >> 	 */
> >>
> >>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >> 	
> >> 	/*
> >> 	 * anonymous folio does not support order-1, high order file-backed folio
> >> 	 * is not supported at all.
> >> 	 */
> >> 	VM_WARN_ON_ONCE(order == 1);
> >>
> >> 	if (order > 1)
> >> 		prep_compound_page(page, order);
> >>
> >> 	/* page has to be compound head here */
> >> 	set_page_count(page, 1);
> >> 	lock_page(page);
> >> }
> >>
> >> void zone_device_folio_init(struct folio *folio, unsigned int order)
> >> {
> >> 	struct page *page = folio_page(folio, 0);
> >>
> >> 	zone_device_page_init(page, order);
> >> 	page_rmappable_folio(page);
> >> }
> >>
> >> Or
> >>
> >> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
> >> {
> >> 	zone_device_page_init(page, order);
> >> 	return page_rmappable_folio(page);
> >> }
> >
> > I think the problem is that it will all be weird once we dynamically allocate "struct folio".
> >
> > I have not yet a clear understanding on how that would really work.
> >
> > For example, should it be pgmap->ops->page_folio() ?
> >
> > Who allocates the folio? Do we allocate all order-0 folios initially, to then merge them when constructing large folios? How do we manage the "struct folio" during such merging splitting?
> 
> Right. Either we would waste memory by simply concatenating all “struct folio”
> and putting paddings at the end, or we would free tail “struct folio” first,
> then allocate tail “struct page”. Both are painful and do not match core mm’s
> memdesc pattern, where “struct folio” is allocated when caller is asking
> for a folio. If “struct folio” is always allocated, there is no difference
> between “struct folio” and “struct page”.

As mentioned in my other reply I need to investigate this some more, but I
don't think we _need_ to always allocate folios (or pages for that matter).
The ZONE_DEVICE code just uses folios/pages for interacting with the core mm,
not for managing the device memory itself, so we should be able to make it more
closely match the memdesc pattern. It's just I'm still a bit unsure what that
pattern will actually look like.

> >
> > With that in mind, I don't really know what the proper interface should be today.
> >
> >
> > zone_device_folio_init(struct page *page, unsigned int order)
> >
> > looks cleaner, agreed.

Agreed.

> >>
> >>
> >> Then, it comes to free_zone_device_folio() above,
> >> I feel that pgmap->ops->page_free() should take an additional order
> >> parameter to free a compound page like free_frozen_pages().

Where would the order parameter come from? Presumably
folio_order(compound_head(page)) in which case shouldn't the op actually just be
pgmap->ops->folio_free()?
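
For concreteness, that could look something like the below. This is purely
hypothetical - there is no ->folio_free() in dev_pagemap_ops today - it just
illustrates where the order would come from:

	/* hypothetical replacement for ->page_free() */
	void (*folio_free)(struct folio *folio);

	/* callers would then have the order available via folio_order(folio) */
	pgmap->ops->folio_free(folio);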

 - Alistair

> >>
> >
> > IIRC free_frozen_pages() does not operate on compound pages. If we know that we are operating on a compound page (or single page) then passing in the page (or better the folio) should work.
> 
> free_pages_prepare() in __free_frozen_pages(), called by free_frozen_pages(),
> checks if compound_order(page) matches the given order, in case folio field
> corrupts. I suppose it is useful. But I do not have a strong opinion about
> this one.
> 
> 
> 
> Best Regards,
> Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-24 23:58         ` Alistair Popple
@ 2025-09-25  0:05           ` Balbir Singh
  2025-09-25 15:32             ` Zi Yan
  2025-09-25  9:43           ` David Hildenbrand
  1 sibling, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-25  0:05 UTC (permalink / raw)
  To: Alistair Popple, Zi Yan
  Cc: David Hildenbrand, linux-kernel, linux-mm, damon, dri-devel,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/25/25 09:58, Alistair Popple wrote:
> On 2025-09-25 at 03:36 +1000, Zi Yan <ziy@nvidia.com> wrote...
>> On 24 Sep 2025, at 6:55, David Hildenbrand wrote:
>>
>>> On 18.09.25 04:49, Zi Yan wrote:
>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>
>>>>> Add routines to support allocation of large order zone device folios
>>>>> and helper functions for zone device folios, to check if a folio is
>>>>> device private and helpers for setting zone device data.
>>>>>
>>>>> When large folios are used, the existing page_free() callback in
>>>>> pgmap is called when the folio is freed, this is true for both
>>>>> PAGE_SIZE and higher order pages.
>>>>>
>>>>> Zone device private large folios do not support deferred split and
>>>>> scan like normal THP folios.
>>>>>
>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>> ---
>>>>>   include/linux/memremap.h | 10 +++++++++-
>>>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>>   mm/rmap.c                |  6 +++++-
>>>>>   3 files changed, 35 insertions(+), 15 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>> index e5951ba12a28..9c20327c2be5 100644
>>>>> --- a/include/linux/memremap.h
>>>>> +++ b/include/linux/memremap.h
>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>>   }
>>>>>
>>>>>   #ifdef CONFIG_ZONE_DEVICE
>>>>> -void zone_device_page_init(struct page *page);
>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>>   void memunmap_pages(struct dev_pagemap *pgmap);
>>>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>>
>>>>>   unsigned long memremap_compat_align(void);
>>>>> +
>>>>> +static inline void zone_device_page_init(struct page *page)
>>>>> +{
>>>>> +	struct folio *folio = page_folio(page);
>>>>> +
>>>>> +	zone_device_folio_init(folio, 0);
>>>>
>>>> I assume it is for legacy code, where only non-compound page exists?
>>>>
>>>> It seems that you assume @page is always order-0, but there is no check
>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>>> above it would be useful to detect misuse.
>>>>
>>>>> +}
>>>>> +
>>>>>   #else
>>>>>   static inline void *devm_memremap_pages(struct device *dev,
>>>>>   		struct dev_pagemap *pgmap)
>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>>> --- a/mm/memremap.c
>>>>> +++ b/mm/memremap.c
>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>>   void free_zone_device_folio(struct folio *folio)
>>>>>   {
>>>>>   	struct dev_pagemap *pgmap = folio->pgmap;
>>>>> +	unsigned long nr = folio_nr_pages(folio);
>>>>> +	int i;
>>>>>
>>>>>   	if (WARN_ON_ONCE(!pgmap))
>>>>>   		return;
>>>>>
>>>>>   	mem_cgroup_uncharge(folio);
>>>>>
>>>>> -	/*
>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>>>> -	 * PG_anon_exclusive on all tail pages.
>>>>> -	 */
>>>>>   	if (folio_test_anon(folio)) {
>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>>>> +		for (i = 0; i < nr; i++)
>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>>>> +	} else {
>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>>   	}
>>>>>
>>>>>   	/*
>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>>   	case MEMORY_DEVICE_COHERENT:
>>>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>>   			break;
>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>>>> -		put_dev_pagemap(pgmap);
>>>>> +		pgmap->ops->page_free(&folio->page);
>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>>   		break;
>>>>>
>>>>>   	case MEMORY_DEVICE_GENERIC:
>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>>   	}
>>>>>   }
>>>>>
>>>>> -void zone_device_page_init(struct page *page)
>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>   {
>>>>> +	struct page *page = folio_page(folio, 0);
>>>>
>>>> It is strange to see a folio is converted back to page in
>>>> a function called zone_device_folio_init().
>>>>
>>>>> +
>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>> +
>>>>>   	/*
>>>>>   	 * Drivers shouldn't be allocating pages after calling
>>>>>   	 * memunmap_pages().
>>>>>   	 */
>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>>> -	set_page_count(page, 1);
>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>> +	folio_set_count(folio, 1);
>>>>>   	lock_page(page);
>>>>> +
>>>>> +	if (order > 1) {
>>>>> +		prep_compound_page(page, order);
>>>>> +		folio_set_large_rmappable(folio);
>>>>> +	}
>>>>
>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>>> is called.
>>>>
>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>>> implementations are inverse. They should follow the same pattern
>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>>> zone_device_page_init() does the actual initialization and
>>>> zone_device_folio_init() just convert a page to folio.
>>>>
>>>> Something like:
>>>>
>>>> void zone_device_page_init(struct page *page, unsigned int order)
>>>> {
>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>
>>>> 	/*
>>>> 	 * Drivers shouldn't be allocating pages after calling
>>>> 	 * memunmap_pages().
>>>> 	 */
>>>>
>>>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>> 	
>>>> 	/*
>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
>>>> 	 * is not supported at all.
>>>> 	 */
>>>> 	VM_WARN_ON_ONCE(order == 1);
>>>>
>>>> 	if (order > 1)
>>>> 		prep_compound_page(page, order);
>>>>
>>>> 	/* page has to be compound head here */
>>>> 	set_page_count(page, 1);
>>>> 	lock_page(page);
>>>> }
>>>>
>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>> {
>>>> 	struct page *page = folio_page(folio, 0);
>>>>
>>>> 	zone_device_page_init(page, order);
>>>> 	page_rmappable_folio(page);
>>>> }
>>>>
>>>> Or
>>>>
>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>>> {
>>>> 	zone_device_page_init(page, order);
>>>> 	return page_rmappable_folio(page);
>>>> }
>>>
>>> I think the problem is that it will all be weird once we dynamically allocate "struct folio".
>>>
>>> I have not yet a clear understanding on how that would really work.
>>>
>>> For example, should it be pgmap->ops->page_folio() ?
>>>
>>> Who allocates the folio? Do we allocate all order-0 folios initially, to then merge them when constructing large folios? How do we manage the "struct folio" during such merging splitting?
>>
>> Right. Either we would waste memory by simply concatenating all “struct folio”
>> and putting paddings at the end, or we would free tail “struct folio” first,
>> then allocate tail “struct page”. Both are painful and do not match core mm’s
>> memdesc pattern, where “struct folio” is allocated when caller is asking
>> for a folio. If “struct folio” is always allocated, there is no difference
>> between “struct folio” and “struct page”.
> 
> As mentioned in my other reply I need to investigate this some more, but I
> don't think we _need_ to always allocate folios (or pages for that matter).
> The ZONE_DEVICE code just uses folios/pages for interacting with the core mm,
> not for managing the device memory itself, so we should be able to make it more
> closely match the memdesc pattern. It's just I'm still a bit unsure what that
> pattern will actually look like.
> 
>>>
>>> With that in mind, I don't really know what the proper interface should be today.
>>>
>>>
>>> zone_device_folio_init(struct page *page, unsigned int order)
>>>
>>> looks cleaner, agreed.
> 
> Agreed.
> 
>>>>
>>>>
>>>> Then, it comes to free_zone_device_folio() above,
>>>> I feel that pgmap->ops->page_free() should take an additional order
>>>> parameter to free a compound page like free_frozen_pages().
> 
> Where would the order parameter come from? Presumably
> folio_order(compound_head(page)) in which case shouldn't the op actually just be
> pgmap->ops->folio_free()?
> 
->page_free() can detect whether the page is part of a large order folio. The
patchset was designed to make folios an opt-in and to avoid unnecessary changes
to existing drivers. But I can revisit that thought process if it helps with
cleaner code.
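
For illustration, something like this inside a driver's ->page_free() is all
that is needed to tell the two apart (rough sketch; the function name is made
up and the actual freeing of device blocks is driver specific):

	static void example_device_page_free(struct page *page)
	{
		struct folio *folio = page_folio(page);
		unsigned int order = folio_order(folio);	/* 0 for base pages */

		/* release 1 << order device blocks backing this folio */
	}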

Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations
  2025-09-16 12:21 ` [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations Balbir Singh
  2025-09-18 18:45   ` Zi Yan
@ 2025-09-25  0:25   ` Alistair Popple
  2025-09-25  9:53     ` David Hildenbrand
  1 sibling, 1 reply; 57+ messages in thread
From: Alistair Popple @ 2025-09-25  0:25 UTC (permalink / raw)
  To: Balbir Singh
  Cc: linux-kernel, linux-mm, damon, dri-devel, Matthew Brost,
	David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Francois Dugast

On 2025-09-16 at 22:21 +1000, Balbir Singh <balbirs@nvidia.com> wrote...
> Extend core huge page management functions to handle device-private THP
> entries.  This enables proper handling of large device-private folios in
> fundamental MM operations.
> 
> The following functions have been updated:
> 
> - copy_huge_pmd(): Handle device-private entries during fork/clone
> - zap_huge_pmd(): Properly free device-private THP during munmap
> - change_huge_pmd(): Support protection changes on device-private THP
> - __pte_offset_map(): Add device-private entry awareness
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>  include/linux/swapops.h | 32 +++++++++++++++++++++++
>  mm/huge_memory.c        | 56 ++++++++++++++++++++++++++++++++++-------
>  mm/pgtable-generic.c    |  2 +-
>  3 files changed, 80 insertions(+), 10 deletions(-)
> 
> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
> index 64ea151a7ae3..2687928a8146 100644
> --- a/include/linux/swapops.h
> +++ b/include/linux/swapops.h
> @@ -594,10 +594,42 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
>  }
>  #endif  /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
>  
> +#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_ENABLE_THP_MIGRATION)
> +
> +/**
> + * is_pmd_device_private_entry() - Check if PMD contains a device private swap entry
> + * @pmd: The PMD to check
> + *
> + * Returns true if the PMD contains a swap entry that represents a device private
> + * page mapping. This is used for zone device private pages that have been
> + * swapped out but still need special handling during various memory management
> + * operations.
> + *
> + * Return: 1 if PMD contains device private entry, 0 otherwise
> + */
> +static inline int is_pmd_device_private_entry(pmd_t pmd)
> +{
> +	return is_swap_pmd(pmd) && is_device_private_entry(pmd_to_swp_entry(pmd));
> +}
> +
> +#else /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
> +
> +static inline int is_pmd_device_private_entry(pmd_t pmd)
> +{
> +	return 0;
> +}
> +
> +#endif /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
> +
>  static inline int non_swap_entry(swp_entry_t entry)
>  {
>  	return swp_type(entry) >= MAX_SWAPFILES;
>  }
>  
> +static inline int is_pmd_non_present_folio_entry(pmd_t pmd)

I can't think of a better name either although I am curious why open-coding it
was so nasty given we don't have the equivalent for pte entries. Will go read
the previous discussion.

> +{
> +	return is_pmd_migration_entry(pmd) || is_pmd_device_private_entry(pmd);
> +}
> +
>  #endif /* CONFIG_MMU */
>  #endif /* _LINUX_SWAPOPS_H */
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5acca24bbabb..a5e4c2aef191 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1703,17 +1703,45 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  	if (unlikely(is_swap_pmd(pmd))) {
>  		swp_entry_t entry = pmd_to_swp_entry(pmd);
>  
> -		VM_BUG_ON(!is_pmd_migration_entry(pmd));
> -		if (!is_readable_migration_entry(entry)) {
> -			entry = make_readable_migration_entry(
> -							swp_offset(entry));
> +		VM_WARN_ON(!is_pmd_non_present_folio_entry(pmd));
> +
> +		if (is_writable_migration_entry(entry) ||
> +		    is_readable_exclusive_migration_entry(entry)) {
> +			entry = make_readable_migration_entry(swp_offset(entry));
>  			pmd = swp_entry_to_pmd(entry);
>  			if (pmd_swp_soft_dirty(*src_pmd))
>  				pmd = pmd_swp_mksoft_dirty(pmd);
>  			if (pmd_swp_uffd_wp(*src_pmd))
>  				pmd = pmd_swp_mkuffd_wp(pmd);
>  			set_pmd_at(src_mm, addr, src_pmd, pmd);
> +		} else if (is_device_private_entry(entry)) {
> +			/*
> +			 * For device private entries, since there are no
> +			 * read exclusive entries, writable = !readable
> +			 */
> +			if (is_writable_device_private_entry(entry)) {
> +				entry = make_readable_device_private_entry(swp_offset(entry));
> +				pmd = swp_entry_to_pmd(entry);
> +
> +				if (pmd_swp_soft_dirty(*src_pmd))
> +					pmd = pmd_swp_mksoft_dirty(pmd);
> +				if (pmd_swp_uffd_wp(*src_pmd))
> +					pmd = pmd_swp_mkuffd_wp(pmd);
> +				set_pmd_at(src_mm, addr, src_pmd, pmd);
> +			}
> +
> +			src_folio = pfn_swap_entry_folio(entry);
> +			VM_WARN_ON(!folio_test_large(src_folio));
> +
> +			folio_get(src_folio);
> +			/*
> +			 * folio_try_dup_anon_rmap_pmd does not fail for
> +			 * device private entries.

Not today. But maybe wrapping this in WARN_ON_ONCE() might be nice in case that
ever changes.
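
Something like this, reusing the exact call from the hunk above (untested):

	/*
	 * folio_try_dup_anon_rmap_pmd() is not expected to fail for
	 * device private entries; warn if that assumption ever breaks.
	 */
	WARN_ON_ONCE(folio_try_dup_anon_rmap_pmd(src_folio, &src_folio->page,
						 dst_vma, src_vma));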

> +			 */
> +			folio_try_dup_anon_rmap_pmd(src_folio, &src_folio->page,
> +							dst_vma, src_vma);
>  		}
> +
>  		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>  		mm_inc_nr_ptes(dst_mm);
>  		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
> @@ -2211,15 +2239,16 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  			folio_remove_rmap_pmd(folio, page, vma);
>  			WARN_ON_ONCE(folio_mapcount(folio) < 0);
>  			VM_BUG_ON_PAGE(!PageHead(page), page);
> -		} else if (thp_migration_supported()) {
> +		} else if (is_pmd_non_present_folio_entry(orig_pmd)) {
>  			swp_entry_t entry;
>  
> -			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>  			entry = pmd_to_swp_entry(orig_pmd);
>  			folio = pfn_swap_entry_folio(entry);
>  			flush_needed = 0;
> -		} else
> -			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
> +
> +			if (!thp_migration_supported())
> +				WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
> +		}
>  
>  		if (folio_test_anon(folio)) {
>  			zap_deposited_table(tlb->mm, pmd);
> @@ -2239,6 +2268,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  				folio_mark_accessed(folio);
>  		}
>  
> +		if (folio_is_device_private(folio)) {
> +			folio_remove_rmap_pmd(folio, &folio->page, vma);
> +			WARN_ON_ONCE(folio_mapcount(folio) < 0);
> +			folio_put(folio);
> +		}
> +
>  		spin_unlock(ptl);
>  		if (flush_needed)
>  			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
> @@ -2367,7 +2402,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  		struct folio *folio = pfn_swap_entry_folio(entry);
>  		pmd_t newpmd;
>  
> -		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
> +		VM_WARN_ON(!is_pmd_non_present_folio_entry(*pmd));
>  		if (is_writable_migration_entry(entry)) {
>  			/*
>  			 * A protection check is difficult so
> @@ -2380,6 +2415,9 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  			newpmd = swp_entry_to_pmd(entry);
>  			if (pmd_swp_soft_dirty(*pmd))
>  				newpmd = pmd_swp_mksoft_dirty(newpmd);
> +		} else if (is_writable_device_private_entry(entry)) {
> +			entry = make_readable_device_private_entry(swp_offset(entry));
> +			newpmd = swp_entry_to_pmd(entry);
>  		} else {
>  			newpmd = *pmd;
>  		}
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index 567e2d084071..0c847cdf4fd3 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -290,7 +290,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>  
>  	if (pmdvalp)
>  		*pmdvalp = pmdval;
> -	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
> +	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))

Why isn't is_pmd_non_present_folio_entry() used here?

>  		goto nomap;
>  	if (unlikely(pmd_trans_huge(pmdval)))
>  		goto nomap;
> -- 
> 2.50.1
> 


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-24 23:58         ` Alistair Popple
  2025-09-25  0:05           ` Balbir Singh
@ 2025-09-25  9:43           ` David Hildenbrand
  2025-09-25 12:02             ` Balbir Singh
  1 sibling, 1 reply; 57+ messages in thread
From: David Hildenbrand @ 2025-09-25  9:43 UTC (permalink / raw)
  To: Alistair Popple, Zi Yan
  Cc: Balbir Singh, linux-kernel, linux-mm, damon, dri-devel,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 25.09.25 01:58, Alistair Popple wrote:
> On 2025-09-25 at 03:36 +1000, Zi Yan <ziy@nvidia.com> wrote...
>> On 24 Sep 2025, at 6:55, David Hildenbrand wrote:
>>
>>> On 18.09.25 04:49, Zi Yan wrote:
>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>
>>>>> Add routines to support allocation of large order zone device folios
>>>>> and helper functions for zone device folios, to check if a folio is
>>>>> device private and helpers for setting zone device data.
>>>>>
>>>>> When large folios are used, the existing page_free() callback in
>>>>> pgmap is called when the folio is freed, this is true for both
>>>>> PAGE_SIZE and higher order pages.
>>>>>
>>>>> Zone device private large folios do not support deferred split and
>>>>> scan like normal THP folios.
>>>>>
>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>> ---
>>>>>    include/linux/memremap.h | 10 +++++++++-
>>>>>    mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>>    mm/rmap.c                |  6 +++++-
>>>>>    3 files changed, 35 insertions(+), 15 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>> index e5951ba12a28..9c20327c2be5 100644
>>>>> --- a/include/linux/memremap.h
>>>>> +++ b/include/linux/memremap.h
>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>>    }
>>>>>
>>>>>    #ifdef CONFIG_ZONE_DEVICE
>>>>> -void zone_device_page_init(struct page *page);
>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>>    void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>>    void memunmap_pages(struct dev_pagemap *pgmap);
>>>>>    void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>>    bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>>
>>>>>    unsigned long memremap_compat_align(void);
>>>>> +
>>>>> +static inline void zone_device_page_init(struct page *page)
>>>>> +{
>>>>> +	struct folio *folio = page_folio(page);
>>>>> +
>>>>> +	zone_device_folio_init(folio, 0);
>>>>
>>>> I assume it is for legacy code, where only non-compound page exists?
>>>>
>>>> It seems that you assume @page is always order-0, but there is no check
>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>>> above it would be useful to detect misuse.
>>>>
>>>>> +}
>>>>> +
>>>>>    #else
>>>>>    static inline void *devm_memremap_pages(struct device *dev,
>>>>>    		struct dev_pagemap *pgmap)
>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>>> --- a/mm/memremap.c
>>>>> +++ b/mm/memremap.c
>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>>    void free_zone_device_folio(struct folio *folio)
>>>>>    {
>>>>>    	struct dev_pagemap *pgmap = folio->pgmap;
>>>>> +	unsigned long nr = folio_nr_pages(folio);
>>>>> +	int i;
>>>>>
>>>>>    	if (WARN_ON_ONCE(!pgmap))
>>>>>    		return;
>>>>>
>>>>>    	mem_cgroup_uncharge(folio);
>>>>>
>>>>> -	/*
>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>>>> -	 * PG_anon_exclusive on all tail pages.
>>>>> -	 */
>>>>>    	if (folio_test_anon(folio)) {
>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>>>> +		for (i = 0; i < nr; i++)
>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>>>> +	} else {
>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>>    	}
>>>>>
>>>>>    	/*
>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>>    	case MEMORY_DEVICE_COHERENT:
>>>>>    		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>>    			break;
>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>>>> -		put_dev_pagemap(pgmap);
>>>>> +		pgmap->ops->page_free(&folio->page);
>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>>    		break;
>>>>>
>>>>>    	case MEMORY_DEVICE_GENERIC:
>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>>    	}
>>>>>    }
>>>>>
>>>>> -void zone_device_page_init(struct page *page)
>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>    {
>>>>> +	struct page *page = folio_page(folio, 0);
>>>>
>>>> It is strange to see a folio is converted back to page in
>>>> a function called zone_device_folio_init().
>>>>
>>>>> +
>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>> +
>>>>>    	/*
>>>>>    	 * Drivers shouldn't be allocating pages after calling
>>>>>    	 * memunmap_pages().
>>>>>    	 */
>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>>> -	set_page_count(page, 1);
>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>> +	folio_set_count(folio, 1);
>>>>>    	lock_page(page);
>>>>> +
>>>>> +	if (order > 1) {
>>>>> +		prep_compound_page(page, order);
>>>>> +		folio_set_large_rmappable(folio);
>>>>> +	}
>>>>
>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>>> is called.
>>>>
>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>>> implementations are inverse. They should follow the same pattern
>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>>> zone_device_page_init() does the actual initialization and
>>>> zone_device_folio_init() just convert a page to folio.
>>>>
>>>> Something like:
>>>>
>>>> void zone_device_page_init(struct page *page, unsigned int order)
>>>> {
>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>
>>>> 	/*
>>>> 	 * Drivers shouldn't be allocating pages after calling
>>>> 	 * memunmap_pages().
>>>> 	 */
>>>>
>>>>       WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>> 	
>>>> 	/*
>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
>>>> 	 * is not supported at all.
>>>> 	 */
>>>> 	VM_WARN_ON_ONCE(order == 1);
>>>>
>>>> 	if (order > 1)
>>>> 		prep_compound_page(page, order);
>>>>
>>>> 	/* page has to be compound head here */
>>>> 	set_page_count(page, 1);
>>>> 	lock_page(page);
>>>> }
>>>>
>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>> {
>>>> 	struct page *page = folio_page(folio, 0);
>>>>
>>>> 	zone_device_page_init(page, order);
>>>> 	page_rmappable_folio(page);
>>>> }
>>>>
>>>> Or
>>>>
>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>>> {
>>>> 	zone_device_page_init(page, order);
>>>> 	return page_rmappable_folio(page);
>>>> }
>>>
>>> I think the problem is that it will all be weird once we dynamically allocate "struct folio".
>>>
>>> I have not yet a clear understanding on how that would really work.
>>>
>>> For example, should it be pgmap->ops->page_folio() ?
>>>
>>> Who allocates the folio? Do we allocate all order-0 folios initially, to then merge them when constructing large folios? How do we manage the "struct folio" during such merging splitting?
>>
>> Right. Either we would waste memory by simply concatenating all “struct folio”
>> and putting paddings at the end, or we would free tail “struct folio” first,
>> then allocate tail “struct page”. Both are painful and do not match core mm’s
>> memdesc pattern, where “struct folio” is allocated when caller is asking
>> for a folio. If “struct folio” is always allocated, there is no difference
>> between “struct folio” and “struct page”.
> 
> As mentioned in my other reply I need to investigate this some more, but I
> don't think we _need_ to always allocate folios (or pages for that matter).
> The ZONE_DEVICE code just uses folios/pages for interacting with the core mm,
> not for managing the device memory itself, so we should be able to make it more
> closely match the memdesc pattern. It's just I'm still a bit unsure what that
> pattern will actually look like.

I think one reason might be that, in contrast to ordinary pages,
zone-device memory is only ever used for folios, right?

Would there be a user that just allocates pages and does not want a folio
associated with it?

It's a good question what that would look like once we have dynamically
allocated struct folio ...

> 
>>>
>>> With that in mind, I don't really know what the proper interface should be today.
>>>
>>>
>>> zone_device_folio_init(struct page *page, unsigned int order)
>>>
>>> looks cleaner, agreed.
> 
> Agreed.
> 
>>>>
>>>>
>>>> Then, it comes to free_zone_device_folio() above,
>>>> I feel that pgmap->ops->page_free() should take an additional order
>>>> parameter to free a compound page like free_frozen_pages().
> 
> Where would the order parameter come from? Presumably
> folio_order(compound_head(page)) in which case shouldn't the op actually just be
> pgmap->ops->folio_free()?

Yeah, that's also what I thought.

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations
  2025-09-25  0:25   ` Alistair Popple
@ 2025-09-25  9:53     ` David Hildenbrand
  2025-09-26  1:53       ` Alistair Popple
  0 siblings, 1 reply; 57+ messages in thread
From: David Hildenbrand @ 2025-09-25  9:53 UTC (permalink / raw)
  To: Alistair Popple, Balbir Singh
  Cc: linux-kernel, linux-mm, damon, dri-devel, Matthew Brost, Zi Yan,
	Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Francois Dugast

On 25.09.25 02:25, Alistair Popple wrote:
> On 2025-09-16 at 22:21 +1000, Balbir Singh <balbirs@nvidia.com> wrote...
>> Extend core huge page management functions to handle device-private THP
>> entries.  This enables proper handling of large device-private folios in
>> fundamental MM operations.
>>
>> The following functions have been updated:
>>
>> - copy_huge_pmd(): Handle device-private entries during fork/clone
>> - zap_huge_pmd(): Properly free device-private THP during munmap
>> - change_huge_pmd(): Support protection changes on device-private THP
>> - __pte_offset_map(): Add device-private entry awareness
>>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>   include/linux/swapops.h | 32 +++++++++++++++++++++++
>>   mm/huge_memory.c        | 56 ++++++++++++++++++++++++++++++++++-------
>>   mm/pgtable-generic.c    |  2 +-
>>   3 files changed, 80 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
>> index 64ea151a7ae3..2687928a8146 100644
>> --- a/include/linux/swapops.h
>> +++ b/include/linux/swapops.h
>> @@ -594,10 +594,42 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
>>   }
>>   #endif  /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
>>   
>> +#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_ENABLE_THP_MIGRATION)
>> +
>> +/**
>> + * is_pmd_device_private_entry() - Check if PMD contains a device private swap entry
>> + * @pmd: The PMD to check
>> + *
>> + * Returns true if the PMD contains a swap entry that represents a device private
>> + * page mapping. This is used for zone device private pages that have been
>> + * swapped out but still need special handling during various memory management
>> + * operations.
>> + *
>> + * Return: 1 if PMD contains device private entry, 0 otherwise
>> + */
>> +static inline int is_pmd_device_private_entry(pmd_t pmd)
>> +{
>> +	return is_swap_pmd(pmd) && is_device_private_entry(pmd_to_swp_entry(pmd));
>> +}
>> +
>> +#else /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
>> +
>> +static inline int is_pmd_device_private_entry(pmd_t pmd)
>> +{
>> +	return 0;
>> +}
>> +
>> +#endif /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
>> +
>>   static inline int non_swap_entry(swp_entry_t entry)
>>   {
>>   	return swp_type(entry) >= MAX_SWAPFILES;
>>   }
>>   
>> +static inline int is_pmd_non_present_folio_entry(pmd_t pmd)
> 
> I can't think of a better name either although I am curious why open-coding it
> was so nasty given we don't have the equivalent for pte entries. Will go read
> the previous discussion.

I think for PTEs we just handle all cases (markers, hwpoison etc.)
properly, many of them not being supported yet on the PMD level. See
copy_nonpresent_pte() as an example.

We don't even have helpers like is_pte_migration_entry().
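
(If we ever grew one, it would presumably just be the PTE analogue of the PMD
helper - untested sketch, no such helper exists in the tree today:)

	static inline bool is_pte_migration_entry(pte_t pte)
	{
		return is_swap_pte(pte) &&
		       is_migration_entry(pte_to_swp_entry(pte));
	}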

>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>> index 567e2d084071..0c847cdf4fd3 100644
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
>> @@ -290,7 +290,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>>   
>>   	if (pmdvalp)
>>   		*pmdvalp = pmdval;
>> -	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
>> +	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
> 
> Why isn't is_pmd_non_present_folio_entry() used here?


I thought I argued in my last review that

	if (!pmd_present(pmdval))

should be sufficient here?

We want to detect page tables we can map after all.
-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-16 12:21 ` [v6 04/15] mm/huge_memory: implement device-private THP splitting Balbir Singh
  2025-09-22 21:09   ` Zi Yan
@ 2025-09-25 10:01   ` David Hildenbrand
  2025-09-25 11:13     ` Balbir Singh
  1 sibling, 1 reply; 57+ messages in thread
From: David Hildenbrand @ 2025-09-25 10:01 UTC (permalink / raw)
  To: Balbir Singh, linux-kernel, linux-mm
  Cc: damon, dri-devel, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 16.09.25 14:21, Balbir Singh wrote:
> Add support for splitting device-private THP folios, enabling fallback
> to smaller page sizes when large page allocation or migration fails.
> 
> Key changes:
> - split_huge_pmd(): Handle device-private PMD entries during splitting
> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
>    don't support shared zero page semantics
> 
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>   mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
>   1 file changed, 98 insertions(+), 40 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 78166db72f4d..5291ee155a02 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>   	struct page *page;
>   	pgtable_t pgtable;
>   	pmd_t old_pmd, _pmd;
> -	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
> -	bool anon_exclusive = false, dirty = false;
> +	bool soft_dirty, uffd_wp = false, young = false, write = false;
> +	bool anon_exclusive = false, dirty = false, present = false;
>   	unsigned long addr;
>   	pte_t *pte;
>   	int i;
> +	swp_entry_t swp_entry;

Not renaming this variable avoids a lot of churn below. So please keep 
it called "entry" in this patch.

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-23 16:08           ` Zi Yan
@ 2025-09-25 10:06             ` David Hildenbrand
  0 siblings, 0 replies; 57+ messages in thread
From: David Hildenbrand @ 2025-09-25 10:06 UTC (permalink / raw)
  To: Zi Yan, Balbir Singh
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Oscar Salvador, Lorenzo Stoakes, Baolin Wang, Liam R. Howlett,
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
	Mika Penttilä,
	Matthew Brost, Francois Dugast

>>> Even if this is the only call site, there is no guarantee that
>>> there will be none in the future. I am not sure why we want caller
>>> to handle this special case. Who is going to tell the next user
>>> of RMP_USE_SHARED_ZEROPAGE or caller to try_to_map_unused_to_zeropage()
>>> that device private is incompatible with them?
>>>
>>
>> I don't disagree, but the question was why are device private pages even making
>> it to try_to_map_unused_to_zeropage()>>
> 
> Then, it could be done in remove_migration_pte():
> 
> if (rmap_walk_arg->map_unused_to_zeropage &&
> 	!folio_is_device_private(folio) &&
> 	try_to_map_unused_to_zeropage(&pvmw, folio, idx))
> 	continue;
> 
> Maybe I am too hung up on this and someone else could pat on my back and
> tell me it is OK to just do this at the only caller instead. :)

I think we shouldn't set a flag for a folio where it does not make any
sense. Just like we don't set the flag for non-anon folios?

In addition, we could add a 
VM_WARN_ON_ONCE(folio_is_device_private(folio)) in 
try_to_map_unused_to_zeropage(), to catch any future abuse.
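
Roughly, i.e. something like this near the top of that function (sketch only):

	/* device-private folios can never be replaced by the shared zeropage */
	VM_WARN_ON_ONCE(folio_is_device_private(folio));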

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 07/15] mm/memory/fault: add THP fault handling for zone device private pages
  2025-09-16 12:21 ` [v6 07/15] mm/memory/fault: add THP fault handling for zone device private pages Balbir Singh
@ 2025-09-25 10:11   ` David Hildenbrand
  2025-09-30 12:00     ` Balbir Singh
  0 siblings, 1 reply; 57+ messages in thread
From: David Hildenbrand @ 2025-09-25 10:11 UTC (permalink / raw)
  To: Balbir Singh, linux-kernel, linux-mm, Alistair Popple
  Cc: damon, dri-devel, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 16.09.25 14:21, Balbir Singh wrote:
> Implement CPU fault handling for zone device THP entries through
> do_huge_pmd_device_private(), enabling transparent migration of
> device-private large pages back to system memory on CPU access.
> 
> When the CPU accesses a zone device THP entry, the fault handler calls the
> device driver's migrate_to_ram() callback to migrate the entire large page
> back to system memory.
> 
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> ---
>   include/linux/huge_mm.h |  7 +++++++
>   mm/huge_memory.c        | 36 ++++++++++++++++++++++++++++++++++++
>   mm/memory.c             |  5 +++--
>   3 files changed, 46 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index f327d62fc985..2d669be7f1c8 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -496,6 +496,8 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
>   
>   vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
>   
> +vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf);
> +
>   extern struct folio *huge_zero_folio;
>   extern unsigned long huge_zero_pfn;
>   
> @@ -671,6 +673,11 @@ static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>   	return 0;
>   }
>   
> +static inline vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
> +{
> +	return 0;
> +}
> +
>   static inline bool is_huge_zero_folio(const struct folio *folio)
>   {
>   	return false;
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5291ee155a02..90a1939455dd 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1287,6 +1287,42 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>   
>   }
>   
> +vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
> +{
> +	struct vm_area_struct *vma = vmf->vma;
> +	vm_fault_t ret = 0;
> +	spinlock_t *ptl;
> +	swp_entry_t swp_entry;
> +	struct page *page;
> +
> +	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> +		vma_end_read(vma);
> +		return VM_FAULT_RETRY;
> +	}
> +
> +	ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> +	if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd))) {
> +		spin_unlock(ptl);
> +		return 0;
> +	}
> +
> +	swp_entry = pmd_to_swp_entry(vmf->orig_pmd);
> +	page = pfn_swap_entry_to_page(swp_entry);
> +	vmf->page = page;
> +	vmf->pte = NULL;
> +	if (trylock_page(vmf->page)) {

We should be operating on a folio here. folio_trylock() + folio_get() + 
folio_unlock() + folio_put().
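
I.e. roughly the following (untested sketch of the same flow with the folio
helpers; the unlock/put after migrate_to_ram() mirror whatever the rest of
this hunk, not quoted here, does with unlock_page()/put_page()):

	struct folio *folio = page_folio(page);

	vmf->page = page;
	vmf->pte = NULL;
	if (folio_trylock(folio)) {
		folio_get(folio);
		spin_unlock(ptl);
		ret = page_pgmap(page)->ops->migrate_to_ram(vmf);
		folio_unlock(folio);
		folio_put(folio);
	}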

> +		get_page(page);
> +		spin_unlock(ptl);
> +		ret = page_pgmap(page)->ops->migrate_to_ram(vmf);

BTW, I was wondering whether it is really the right design to pass the
vmf here. Likely a const vma + addr + folio could be sufficient. I did not
look into all the callbacks, though.

-- 
Cheers

David / dhildenb



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 04/15] mm/huge_memory: implement device-private THP splitting
  2025-09-25 10:01   ` David Hildenbrand
@ 2025-09-25 11:13     ` Balbir Singh
  0 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-25 11:13 UTC (permalink / raw)
  To: David Hildenbrand, linux-kernel, linux-mm
  Cc: damon, dri-devel, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/25/25 20:01, David Hildenbrand wrote:
> On 16.09.25 14:21, Balbir Singh wrote:
>> Add support for splitting device-private THP folios, enabling fallback
>> to smaller page sizes when large page allocation or migration fails.
>>
>> Key changes:
>> - split_huge_pmd(): Handle device-private PMD entries during splitting
>> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
>> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
>>    don't support shared zero page semantics
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>   mm/huge_memory.c | 138 +++++++++++++++++++++++++++++++++--------------
>>   1 file changed, 98 insertions(+), 40 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 78166db72f4d..5291ee155a02 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2872,16 +2872,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>       struct page *page;
>>       pgtable_t pgtable;
>>       pmd_t old_pmd, _pmd;
>> -    bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
>> -    bool anon_exclusive = false, dirty = false;
>> +    bool soft_dirty, uffd_wp = false, young = false, write = false;
>> +    bool anon_exclusive = false, dirty = false, present = false;
>>       unsigned long addr;
>>       pte_t *pte;
>>       int i;
>> +    swp_entry_t swp_entry;
> 
> Not renaming this variable avoids a lot of churn below. So please keep it called "entry" in this patch.
> 

Ack, will fix

Thanks,
Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-25  9:43           ` David Hildenbrand
@ 2025-09-25 12:02             ` Balbir Singh
  2025-09-26  1:50               ` Alistair Popple
  0 siblings, 1 reply; 57+ messages in thread
From: Balbir Singh @ 2025-09-25 12:02 UTC (permalink / raw)
  To: David Hildenbrand, Alistair Popple, Zi Yan
  Cc: linux-kernel, linux-mm, damon, dri-devel, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/25/25 19:43, David Hildenbrand wrote:
> On 25.09.25 01:58, Alistair Popple wrote:
>> On 2025-09-25 at 03:36 +1000, Zi Yan <ziy@nvidia.com> wrote...
>>> On 24 Sep 2025, at 6:55, David Hildenbrand wrote:
>>>
>>>> On 18.09.25 04:49, Zi Yan wrote:
>>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>>
>>>>>> Add routines to support allocation of large order zone device folios
>>>>>> and helper functions for zone device folios, to check if a folio is
>>>>>> device private and helpers for setting zone device data.
>>>>>>
>>>>>> When large folios are used, the existing page_free() callback in
>>>>>> pgmap is called when the folio is freed, this is true for both
>>>>>> PAGE_SIZE and higher order pages.
>>>>>>
>>>>>> Zone device private large folios do not support deferred split and
>>>>>> scan like normal THP folios.
>>>>>>
>>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>>> ---
>>>>>>    include/linux/memremap.h | 10 +++++++++-
>>>>>>    mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>>>    mm/rmap.c                |  6 +++++-
>>>>>>    3 files changed, 35 insertions(+), 15 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>>> index e5951ba12a28..9c20327c2be5 100644
>>>>>> --- a/include/linux/memremap.h
>>>>>> +++ b/include/linux/memremap.h
>>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>>>    }
>>>>>>
>>>>>>    #ifdef CONFIG_ZONE_DEVICE
>>>>>> -void zone_device_page_init(struct page *page);
>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>>>    void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>>>    void memunmap_pages(struct dev_pagemap *pgmap);
>>>>>>    void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>>>    bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>>>
>>>>>>    unsigned long memremap_compat_align(void);
>>>>>> +
>>>>>> +static inline void zone_device_page_init(struct page *page)
>>>>>> +{
>>>>>> +    struct folio *folio = page_folio(page);
>>>>>> +
>>>>>> +    zone_device_folio_init(folio, 0);
>>>>>
>>>>> I assume it is for legacy code, where only non-compound page exists?
>>>>>
>>>>> It seems that you assume @page is always order-0, but there is no check
>>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>>>> above it would be useful to detect misuse.
>>>>>
>>>>>> +}
>>>>>> +
>>>>>>    #else
>>>>>>    static inline void *devm_memremap_pages(struct device *dev,
>>>>>>            struct dev_pagemap *pgmap)
>>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>>>> --- a/mm/memremap.c
>>>>>> +++ b/mm/memremap.c
>>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>>>    void free_zone_device_folio(struct folio *folio)
>>>>>>    {
>>>>>>        struct dev_pagemap *pgmap = folio->pgmap;
>>>>>> +    unsigned long nr = folio_nr_pages(folio);
>>>>>> +    int i;
>>>>>>
>>>>>>        if (WARN_ON_ONCE(!pgmap))
>>>>>>            return;
>>>>>>
>>>>>>        mem_cgroup_uncharge(folio);
>>>>>>
>>>>>> -    /*
>>>>>> -     * Note: we don't expect anonymous compound pages yet. Once supported
>>>>>> -     * and we could PTE-map them similar to THP, we'd have to clear
>>>>>> -     * PG_anon_exclusive on all tail pages.
>>>>>> -     */
>>>>>>        if (folio_test_anon(folio)) {
>>>>>> -        VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>> -        __ClearPageAnonExclusive(folio_page(folio, 0));
>>>>>> +        for (i = 0; i < nr; i++)
>>>>>> +            __ClearPageAnonExclusive(folio_page(folio, i));
>>>>>> +    } else {
>>>>>> +        VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>>>        }
>>>>>>
>>>>>>        /*
>>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>        case MEMORY_DEVICE_COHERENT:
>>>>>>            if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>>>                break;
>>>>>> -        pgmap->ops->page_free(folio_page(folio, 0));
>>>>>> -        put_dev_pagemap(pgmap);
>>>>>> +        pgmap->ops->page_free(&folio->page);
>>>>>> +        percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>>>            break;
>>>>>>
>>>>>>        case MEMORY_DEVICE_GENERIC:
>>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>        }
>>>>>>    }
>>>>>>
>>>>>> -void zone_device_page_init(struct page *page)
>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>>    {
>>>>>> +    struct page *page = folio_page(folio, 0);
>>>>>
>>>>> It is strange to see a folio is converted back to page in
>>>>> a function called zone_device_folio_init().
>>>>>
>>>>>> +
>>>>>> +    VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>> +
>>>>>>        /*
>>>>>>         * Drivers shouldn't be allocating pages after calling
>>>>>>         * memunmap_pages().
>>>>>>         */
>>>>>> -    WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>>>> -    set_page_count(page, 1);
>>>>>> +    WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>>> +    folio_set_count(folio, 1);
>>>>>>        lock_page(page);
>>>>>> +
>>>>>> +    if (order > 1) {
>>>>>> +        prep_compound_page(page, order);
>>>>>> +        folio_set_large_rmappable(folio);
>>>>>> +    }
>>>>>
>>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>>>> is called.
>>>>>
>>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>>>> implementations are inverse. They should follow the same pattern
>>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>>>> zone_device_page_init() does the actual initialization and
>>>>> zone_device_folio_init() just convert a page to folio.
>>>>>
>>>>> Something like:
>>>>>
>>>>> void zone_device_page_init(struct page *page, unsigned int order)
>>>>> {
>>>>>     VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>
>>>>>     /*
>>>>>      * Drivers shouldn't be allocating pages after calling
>>>>>      * memunmap_pages().
>>>>>      */
>>>>>
>>>>>       WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>>     
>>>>>     /*
>>>>>      * anonymous folio does not support order-1, high order file-backed folio
>>>>>      * is not supported at all.
>>>>>      */
>>>>>     VM_WARN_ON_ONCE(order == 1);
>>>>>
>>>>>     if (order > 1)
>>>>>         prep_compound_page(page, order);
>>>>>
>>>>>     /* page has to be compound head here */
>>>>>     set_page_count(page, 1);
>>>>>     lock_page(page);
>>>>> }
>>>>>
>>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>> {
>>>>>     struct page *page = folio_page(folio, 0);
>>>>>
>>>>>     zone_device_page_init(page, order);
>>>>>     page_rmappable_folio(page);
>>>>> }
>>>>>
>>>>> Or
>>>>>
>>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>>>> {
>>>>>     zone_device_page_init(page, order);
>>>>>     return page_rmappable_folio(page);
>>>>> }
>>>>
>>>> I think the problem is that it will all be weird once we dynamically allocate "struct folio".
>>>>
>>>> I have not yet a clear understanding on how that would really work.
>>>>
>>>> For example, should it be pgmap->ops->page_folio() ?
>>>>
>>>> Who allocates the folio? Do we allocate all order-0 folios initially, to then merge them when constructing large folios? How do we manage the "struct folio" during such merging splitting?
>>>
>>> Right. Either we would waste memory by simply concatenating all “struct folio”
>>> and putting paddings at the end, or we would free tail “struct folio” first,
>>> then allocate tail “struct page”. Both are painful and do not match core mm’s
>>> memdesc pattern, where “struct folio” is allocated when caller is asking
>>> for a folio. If “struct folio” is always allocated, there is no difference
>>> between “struct folio” and “struct page”.
>>
>> As mentioned in my other reply I need to investigate this some more, but I
>> don't think we _need_ to always allocate folios (or pages for that matter).
>> The ZONE_DEVICE code just uses folios/pages for interacting with the core mm,
>> not for managing the device memory itself, so we should be able to make it more
>> closely match the memdesc pattern. It's just I'm still a bit unsure what that
>> pattern will actually look like.
> 
> I think one reason might be that, in contrast to ordinary pages, zone-device memory is only ever used for folios, right?
> 
> Would there be a user that just allocates pages and does not want a folio associated with it?
> 

A non-THP-aware driver would be one potential use case for zero-order folios (and plain pages, at the moment).
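
To make the zero-order path concrete, here is a minimal sketch (assumed driver code, not taken from the series; drv_take_device_memory() is a made-up name) of both call paths: an order-0-only driver keeps using the legacy wrapper quoted above, while a THP-aware driver derives the folio from its head page and passes the order explicitly.

static void drv_take_device_memory(struct page *head, unsigned int order)
{
	if (!order) {
		/* legacy path: wraps zone_device_folio_init(folio, 0) */
		zone_device_page_init(head);
		return;
	}

	/* THP-aware path added by this series */
	zone_device_folio_init(page_folio(head), order);
}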

> It's a good question what that would look like when we have dynamically allocated struct folio ...

I think for dynamically allocated folios we could probably do away with pages, but not 100% sure at the moment.

> 
>>
>>>>
>>>> With that in mind, I don't really know what the proper interface should be today.
>>>>
>>>>
>>>> zone_device_folio_init(struct page *page, unsigned int order)
>>>>
>>>> looks cleaner, agreed.
>>
>> Agreed.
>>
>>>>>
>>>>>
>>>>> Then, it comes to free_zone_device_folio() above,
>>>>> I feel that pgmap->ops->page_free() should take an additional order
>>>>> parameter to free a compound page like free_frozen_pages().
>>
>> Where would the order parameter come from? Presumably
>> folio_order(compound_head(page)) in which case shouldn't the op actually just be
>> pgmap->ops->folio_free()?
> 
> Yeah, that's also what I thought.
> 

Balbir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-24 23:45               ` Alistair Popple
@ 2025-09-25 15:27                 ` Zi Yan
  2025-09-26  1:44                   ` Alistair Popple
  0 siblings, 1 reply; 57+ messages in thread
From: Zi Yan @ 2025-09-25 15:27 UTC (permalink / raw)
  To: Alistair Popple
  Cc: David Hildenbrand, Balbir Singh, linux-kernel, linux-mm, damon,
	dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 24 Sep 2025, at 19:45, Alistair Popple wrote:

> On 2025-09-25 at 03:49 +1000, Zi Yan <ziy@nvidia.com> wrote...
>> On 24 Sep 2025, at 7:04, David Hildenbrand wrote:
>>
>>> On 23.09.25 05:47, Balbir Singh wrote:
>>>> On 9/19/25 23:26, Zi Yan wrote:
>>>>> On 19 Sep 2025, at 1:01, Balbir Singh wrote:
>>>>>
>>>>>> On 9/18/25 12:49, Zi Yan wrote:
>>>>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>>>>
>>>>>>>> Add routines to support allocation of large order zone device folios
>>>>>>>> and helper functions for zone device folios, to check if a folio is
>>>>>>>> device private and helpers for setting zone device data.
>>>>>>>>
>>>>>>>> When large folios are used, the existing page_free() callback in
>>>>>>>> pgmap is called when the folio is freed, this is true for both
>>>>>>>> PAGE_SIZE and higher order pages.
>>>>>>>>
>>>>>>>> Zone device private large folios do not support deferred split and
>>>>>>>> scan like normal THP folios.
>>>>>>>>
>>>>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>>>>> ---
>>>>>>>>   include/linux/memremap.h | 10 +++++++++-
>>>>>>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>>>>>   mm/rmap.c                |  6 +++++-
>>>>>>>>   3 files changed, 35 insertions(+), 15 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>>>>> index e5951ba12a28..9c20327c2be5 100644
>>>>>>>> --- a/include/linux/memremap.h
>>>>>>>> +++ b/include/linux/memremap.h
>>>>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>>>>>   }
>>>>>>>>
>>>>>>>>   #ifdef CONFIG_ZONE_DEVICE
>>>>>>>> -void zone_device_page_init(struct page *page);
>>>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>>>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>>>>>   void memunmap_pages(struct dev_pagemap *pgmap);
>>>>>>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>>>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>>>>>
>>>>>>>>   unsigned long memremap_compat_align(void);
>>>>>>>> +
>>>>>>>> +static inline void zone_device_page_init(struct page *page)
>>>>>>>> +{
>>>>>>>> +	struct folio *folio = page_folio(page);
>>>>>>>> +
>>>>>>>> +	zone_device_folio_init(folio, 0);
>>>>>>>
>>>>>>> I assume it is for legacy code, where only non-compound page exists?
>>>>>>>
>>>>>>> It seems that you assume @page is always order-0, but there is no check
>>>>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>>>>>> above it would be useful to detect misuse.
>>>>>>>
>>>>>>>> +}
>>>>>>>> +
>>>>>>>>   #else
>>>>>>>>   static inline void *devm_memremap_pages(struct device *dev,
>>>>>>>>   		struct dev_pagemap *pgmap)
>>>>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>>>>>> --- a/mm/memremap.c
>>>>>>>> +++ b/mm/memremap.c
>>>>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>>>>>   void free_zone_device_folio(struct folio *folio)
>>>>>>>>   {
>>>>>>>>   	struct dev_pagemap *pgmap = folio->pgmap;
>>>>>>>> +	unsigned long nr = folio_nr_pages(folio);
>>>>>>>> +	int i;
>>>>>>>>
>>>>>>>>   	if (WARN_ON_ONCE(!pgmap))
>>>>>>>>   		return;
>>>>>>>>
>>>>>>>>   	mem_cgroup_uncharge(folio);
>>>>>>>>
>>>>>>>> -	/*
>>>>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>>>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>>>>>>> -	 * PG_anon_exclusive on all tail pages.
>>>>>>>> -	 */
>>>>>>>>   	if (folio_test_anon(folio)) {
>>>>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>>>>>>> +		for (i = 0; i < nr; i++)
>>>>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>>>>>>> +	} else {
>>>>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>>>>>   	}
>>>>>>>>
>>>>>>>>   	/*
>>>>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>>>   	case MEMORY_DEVICE_COHERENT:
>>>>>>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>>>>>   			break;
>>>>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>>>>>>> -		put_dev_pagemap(pgmap);
>>>>>>>> +		pgmap->ops->page_free(&folio->page);
>>>>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>>>>>   		break;
>>>>>>>>
>>>>>>>>   	case MEMORY_DEVICE_GENERIC:
>>>>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>>>   	}
>>>>>>>>   }
>>>>>>>>
>>>>>>>> -void zone_device_page_init(struct page *page)
>>>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>>>>   {
>>>>>>>> +	struct page *page = folio_page(folio, 0);
>>>>>>>
>>>>>>> It is strange to see a folio is converted back to page in
>>>>>>> a function called zone_device_folio_init().
>>>>>>>
>>>>>>>> +
>>>>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>>>> +
>>>>>>>>   	/*
>>>>>>>>   	 * Drivers shouldn't be allocating pages after calling
>>>>>>>>   	 * memunmap_pages().
>>>>>>>>   	 */
>>>>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>>>>>> -	set_page_count(page, 1);
>>>>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>>>>> +	folio_set_count(folio, 1);
>>>>>>>>   	lock_page(page);
>>>>>>>> +
>>>>>>>> +	if (order > 1) {
>
> Why is this only called for order > 1 rather than order > 0 ?
>
>>>>>>>> +		prep_compound_page(page, order);
>>>>>>>> +		folio_set_large_rmappable(folio);
>>>>>>>> +	}
>>>>>>>
>>>>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>>>>>> is called.
>>>>>>>
>>>>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>>>>>> implementations are inverse. They should follow the same pattern
>>>>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>>>>>> zone_device_page_init() does the actual initialization and
>>>>>>> zone_device_folio_init() just convert a page to folio.
>>>>>>>
>>>>>>> Something like:
>>>>>>>
>>>>>>> void zone_device_page_init(struct page *page, unsigned int order)
>>>>>>> {
>>>>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>>>
>>>>>>> 	/*
>>>>>>> 	 * Drivers shouldn't be allocating pages after calling
>>>>>>> 	 * memunmap_pages().
>>>>>>> 	 */
>>>>>>>
>>>>>>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>>>> 	
>>>>>>> 	/*
>>>>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
>>>>>>> 	 * is not supported at all.
>>>>>>> 	 */
>
> I guess that answers my question :-)
>
>>>>>>> 	VM_WARN_ON_ONCE(order == 1);
>>>>>>>
>>>>>>> 	if (order > 1)
>>>>>>> 		prep_compound_page(page, order);
>>>>>>>
>>>>>>> 	/* page has to be compound head here */
>>>>>>> 	set_page_count(page, 1);
>>>>>>> 	lock_page(page);
>>>>>>> }
>>>>>>>
>>>>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>>> {
>>>>>>> 	struct page *page = folio_page(folio, 0);
>>>>>>>
>>>>>>> 	zone_device_page_init(page, order);
>>>>>>> 	page_rmappable_folio(page);
>>>>>>> }
>>>>>>>
>>>>>>> Or
>>>>>>>
>>>>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>>>>>> {
>>>>>>> 	zone_device_page_init(page, order);
>>>>>>> 	return page_rmappable_folio(page);
>>>>>>> }
>>>>>>>
>>>>>>>
>>>>>>> Then, it comes to free_zone_device_folio() above,
>>>>>>> I feel that pgmap->ops->page_free() should take an additional order
>>>>>>> parameter to free a compound page like free_frozen_pages().
>>>>>>>
>>>>>>>
>>>>>>> This is my impression after reading the patch and zone device page code.
>>>>>>>
>>>>>>> Alistair and David can correct me if this is wrong, since I am new to
>>>>>>> zone device page code.
>>>>>>> 	
>>>>>>
>>>>>> Thanks, I did not want to change zone_device_page_init() for several
>>>>>> drivers (outside my test scope) that already assume it has an order size of 0.
>
> It's a trivial change, so I don't think avoiding changes to other drivers should
> be a concern.
>
>>>>>
>>>>> But my proposed zone_device_page_init() should still work for order-0
>>>>> pages. You just need to change call site to add 0 as a new parameter.
>>>>>
>>>>
>>>> I did not want to change existing callers (increases testing impact)
>>>> without a strong reason.
>>>>
>>>>>
>>>>> One strange thing I found in the original zone_device_page_init() is
>>>>> the use of page_pgmap() in
>>>>> WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order)).
>>>>> page_pgmap() calls page_folio() on the given page to access pgmap field.
>>>>> And pgmap field is only available in struct folio. The code initializes
>>>>> struct page, but in middle it suddenly finds the page is actually a folio,
>>>>> then treat it as a page afterwards. I wonder if it can be done better.
>>>>>
>>>>> This might be a question to Alistair, since he made the change.
>
> Hello! I might be him :)
>
> I think this situation is just historical - when I originally wrote
> zone_device_page_init() the pgmap was stored on the page rather than the folio.
> That only changed fairly recently with commit 82ba975e4c43 ("mm: allow compound
> zone device pages").
>
> The reason pgmap is now only available on the folio is described in the
> commit log. The TLDR is switching FS DAX to use compound pages required
> page->compound_head to be available for use, and that was being shared
> with page->pgmap. So the solution was to move pgmap to the folio freeing up
> page->compound_head for use on tail pages.
>
> The whole percpu pgmap->ref could actually now go away - I've debated removing
> it but haven't found the motivation as it provides a small advantage on driver
> tear down. Basically it just tracks how many pages are allocated in the pgmap
> so drivers could use that to determine if they need to trigger migrations before
> tearing down the pgmap.
>
> The alternative is just to loop over every page in the pgmap to ensure the
> folio/page refcounts are 0 before tear down.
>
>>>>>
>>>>
>>>> I'll let him answer it :)
>>>
>>> Not him, but I think this goes back to my question raised in my other reply: When would we allocate "struct folio" in the future.
>>>
>>> If it's "always" then actually most of the zone-device code would only ever operate on folios and never on pages in the future.
>>>
>>> I recall during a discussion at LSF/MM I raised that, and the answer was (IIRC) that we will allocate "struct folio" as we will initialize the memmap for dax.
>
> Sounds about right.
>
>>> So essentially, we'd always have folios and would never really have to operate on pages.
>
> Yeah, I think I mentioned to Matthew at LSF/MM that I thought ZONE_DEVICE (and
> in particular ZONE_DEVICE_PRIVATE) might be a good candidate to experiment with
> removing struct pages entirely and switching to memdesc's or whatever. Because
> we should, in theory at least, only need to operate on folio's. But I'm still a
> little vague on the details how that would actually work. It's been on my TODO
> list for a while, so maybe I will try and look at it for LPC as a healthy bit of
> conference driven development.
>
>> Hmm, then what is the point of having “struct folio”, which originally is
>> added to save compound_head() calls, where everything is a folio in device
>> private world? We might need DAX people to explain the rationale of
>> “always struct folio”.
>
> Longer term isn't there an aim to remove struct page? So I assumed moving to

Right. But my current impression based on my code reading and this patchset
is that every device private page is a folio. To form a high order folio,
each device private folio is converted to page, prep_compound*()’d, then
converted back to folio. Based on what you said above, this weird conversion
might be temporary until the code is switched to memdesc.

I am looking forward to more details on how device private will be switched
to memdesc from you. :)

> folio's was part of that effort. As you say though many of the clean-ups thus
> far related to switching ZONE_DEVICE to folios have indeed just been about
> removing compound_head() calls.



Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-25  0:05           ` Balbir Singh
@ 2025-09-25 15:32             ` Zi Yan
  0 siblings, 0 replies; 57+ messages in thread
From: Zi Yan @ 2025-09-25 15:32 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Alistair Popple, David Hildenbrand, linux-kernel, linux-mm,
	damon, dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 24 Sep 2025, at 20:05, Balbir Singh wrote:

> On 9/25/25 09:58, Alistair Popple wrote:
>> On 2025-09-25 at 03:36 +1000, Zi Yan <ziy@nvidia.com> wrote...
>>> On 24 Sep 2025, at 6:55, David Hildenbrand wrote:
>>>
>>>> On 18.09.25 04:49, Zi Yan wrote:
>>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>>>
>>>>>> Add routines to support allocation of large order zone device folios
>>>>>> and helper functions for zone device folios, to check if a folio is
>>>>>> device private and helpers for setting zone device data.
>>>>>>
>>>>>> When large folios are used, the existing page_free() callback in
>>>>>> pgmap is called when the folio is freed, this is true for both
>>>>>> PAGE_SIZE and higher order pages.
>>>>>>
>>>>>> Zone device private large folios do not support deferred split and
>>>>>> scan like normal THP folios.
>>>>>>
>>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>>>> Cc: David Hildenbrand <david@redhat.com>
>>>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>>>> Cc: Gregory Price <gourry@gourry.net>
>>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>>>> Cc: Nico Pache <npache@redhat.com>
>>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>>>> Cc: Barry Song <baohua@kernel.org>
>>>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>>> Cc: David Airlie <airlied@gmail.com>
>>>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>>>> ---
>>>>>>   include/linux/memremap.h | 10 +++++++++-
>>>>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
>>>>>>   mm/rmap.c                |  6 +++++-
>>>>>>   3 files changed, 35 insertions(+), 15 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>>> index e5951ba12a28..9c20327c2be5 100644
>>>>>> --- a/include/linux/memremap.h
>>>>>> +++ b/include/linux/memremap.h
>>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
>>>>>>   }
>>>>>>
>>>>>>   #ifdef CONFIG_ZONE_DEVICE
>>>>>> -void zone_device_page_init(struct page *page);
>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
>>>>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>>>>>>   void memunmap_pages(struct dev_pagemap *pgmap);
>>>>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
>>>>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>>>>>>
>>>>>>   unsigned long memremap_compat_align(void);
>>>>>> +
>>>>>> +static inline void zone_device_page_init(struct page *page)
>>>>>> +{
>>>>>> +	struct folio *folio = page_folio(page);
>>>>>> +
>>>>>> +	zone_device_folio_init(folio, 0);
>>>>>
>>>>> I assume it is for legacy code, where only non-compound page exists?
>>>>>
>>>>> It seems that you assume @page is always order-0, but there is no check
>>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
>>>>> above it would be useful to detect misuse.
>>>>>
>>>>>> +}
>>>>>> +
>>>>>>   #else
>>>>>>   static inline void *devm_memremap_pages(struct device *dev,
>>>>>>   		struct dev_pagemap *pgmap)
>>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
>>>>>> --- a/mm/memremap.c
>>>>>> +++ b/mm/memremap.c
>>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
>>>>>>   void free_zone_device_folio(struct folio *folio)
>>>>>>   {
>>>>>>   	struct dev_pagemap *pgmap = folio->pgmap;
>>>>>> +	unsigned long nr = folio_nr_pages(folio);
>>>>>> +	int i;
>>>>>>
>>>>>>   	if (WARN_ON_ONCE(!pgmap))
>>>>>>   		return;
>>>>>>
>>>>>>   	mem_cgroup_uncharge(folio);
>>>>>>
>>>>>> -	/*
>>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
>>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
>>>>>> -	 * PG_anon_exclusive on all tail pages.
>>>>>> -	 */
>>>>>>   	if (folio_test_anon(folio)) {
>>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
>>>>>> +		for (i = 0; i < nr; i++)
>>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
>>>>>> +	} else {
>>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
>>>>>>   	}
>>>>>>
>>>>>>   	/*
>>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>   	case MEMORY_DEVICE_COHERENT:
>>>>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
>>>>>>   			break;
>>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
>>>>>> -		put_dev_pagemap(pgmap);
>>>>>> +		pgmap->ops->page_free(&folio->page);
>>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
>>>>>>   		break;
>>>>>>
>>>>>>   	case MEMORY_DEVICE_GENERIC:
>>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
>>>>>>   	}
>>>>>>   }
>>>>>>
>>>>>> -void zone_device_page_init(struct page *page)
>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>>>   {
>>>>>> +	struct page *page = folio_page(folio, 0);
>>>>>
>>>>> It is strange to see a folio is converted back to page in
>>>>> a function called zone_device_folio_init().
>>>>>
>>>>>> +
>>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>> +
>>>>>>   	/*
>>>>>>   	 * Drivers shouldn't be allocating pages after calling
>>>>>>   	 * memunmap_pages().
>>>>>>   	 */
>>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
>>>>>> -	set_page_count(page, 1);
>>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>>> +	folio_set_count(folio, 1);
>>>>>>   	lock_page(page);
>>>>>> +
>>>>>> +	if (order > 1) {
>>>>>> +		prep_compound_page(page, order);
>>>>>> +		folio_set_large_rmappable(folio);
>>>>>> +	}
>>>>>
>>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
>>>>> is called.
>>>>>
>>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
>>>>> implementations are inverse. They should follow the same pattern
>>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
>>>>> zone_device_page_init() does the actual initialization and
>>>>> zone_device_folio_init() just convert a page to folio.
>>>>>
>>>>> Something like:
>>>>>
>>>>> void zone_device_page_init(struct page *page, unsigned int order)
>>>>> {
>>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
>>>>>
>>>>> 	/*
>>>>> 	 * Drivers shouldn't be allocating pages after calling
>>>>> 	 * memunmap_pages().
>>>>> 	 */
>>>>>
>>>>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
>>>>> 	
>>>>> 	/*
>>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
>>>>> 	 * is not supported at all.
>>>>> 	 */
>>>>> 	VM_WARN_ON_ONCE(order == 1);
>>>>>
>>>>> 	if (order > 1)
>>>>> 		prep_compound_page(page, order);
>>>>>
>>>>> 	/* page has to be compound head here */
>>>>> 	set_page_count(page, 1);
>>>>> 	lock_page(page);
>>>>> }
>>>>>
>>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
>>>>> {
>>>>> 	struct page *page = folio_page(folio, 0);
>>>>>
>>>>> 	zone_device_page_init(page, order);
>>>>> 	page_rmappable_folio(page);
>>>>> }
>>>>>
>>>>> Or
>>>>>
>>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
>>>>> {
>>>>> 	zone_device_page_init(page, order);
>>>>> 	return page_rmappable_folio(page);
>>>>> }
>>>>
>>>> I think the problem is that it will all be weird once we dynamically allocate "struct folio".
>>>>
>>>> I have not yet a clear understanding on how that would really work.
>>>>
>>>> For example, should it be pgmap->ops->page_folio() ?
>>>>
>>>> Who allocates the folio? Do we allocate all order-0 folios initially, to then merge them when constructing large folios? How do we manage the "struct folio" during such merging splitting?
>>>
>>> Right. Either we would waste memory by simply concatenating all “struct folio”
>>> and putting paddings at the end, or we would free tail “struct folio” first,
>>> then allocate tail “struct page”. Both are painful and do not match core mm’s
>>> memdesc pattern, where “struct folio” is allocated when caller is asking
>>> for a folio. If “struct folio” is always allocated, there is no difference
>>> between “struct folio” and “struct page”.
>>
>> As mentioned in my other reply I need to investigate this some more, but I
>> don't think we _need_ to always allocate folios (or pages for that matter).
>> The ZONE_DEVICE code just uses folios/pages for interacting with the core mm,
>> not for managing the device memory itself, so we should be able to make it more
>> closely match the memdesc pattern. It's just I'm still a bit unsure what that
>> pattern will actually look like.
>>
>>>>
>>>> With that in mind, I don't really know what the proper interface should be today.
>>>>
>>>>
>>>> zone_device_folio_init(struct page *page, unsigned int order)
>>>>
>>>> looks cleaner, agreed.
>>
>> Agreed.
>>
>>>>>
>>>>>
>>>>> Then, it comes to free_zone_device_folio() above,
>>>>> I feel that pgmap->ops->page_free() should take an additional order
>>>>> parameter to free a compound page like free_frozen_pages().
>>
>> Where would the order parameter come from? Presumably
>> folio_order(compound_head(page)) in which case shouldn't the op actually just be
>> pgmap->ops->folio_free()?
>>
> ->page_free() can detect if the page is of large order. The patchset was designed
> to make folios an opt-in and avoid unnecessary changes to existing drivers.
> But I can revisit that thought process if it helps with cleaner code.

That would be very helpful. It is strange to see page_free(folio_page(folio, 0)).
If a folio is present, converting it back to a page makes it read as though the
code frees just the first page of the folio.
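
For clarity, a minimal sketch of the folio-granular alternative being discussed (assumed code, not part of the posted series; the folio_free() op is hypothetical) could look like:

static void free_zone_device_folio_sketch(struct folio *folio)
{
	struct dev_pagemap *pgmap = folio->pgmap;
	unsigned long nr = folio_nr_pages(folio);

	if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->folio_free))
		return;

	/* hand the whole folio to the driver, no folio_page(folio, 0) */
	pgmap->ops->folio_free(folio);
	percpu_ref_put_many(&pgmap->ref, nr);
}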


Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-25 15:27                 ` Zi Yan
@ 2025-09-26  1:44                   ` Alistair Popple
  0 siblings, 0 replies; 57+ messages in thread
From: Alistair Popple @ 2025-09-26  1:44 UTC (permalink / raw)
  To: Zi Yan
  Cc: David Hildenbrand, Balbir Singh, linux-kernel, linux-mm, damon,
	dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 2025-09-26 at 01:27 +1000, Zi Yan <ziy@nvidia.com> wrote...
> On 24 Sep 2025, at 19:45, Alistair Popple wrote:
> 
> > On 2025-09-25 at 03:49 +1000, Zi Yan <ziy@nvidia.com> wrote...
> >> On 24 Sep 2025, at 7:04, David Hildenbrand wrote:
> >>
> >>> On 23.09.25 05:47, Balbir Singh wrote:
> >>>> On 9/19/25 23:26, Zi Yan wrote:
> >>>>> On 19 Sep 2025, at 1:01, Balbir Singh wrote:
> >>>>>
> >>>>>> On 9/18/25 12:49, Zi Yan wrote:
> >>>>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> >>>>>>>
> >>>>>>>> Add routines to support allocation of large order zone device folios
> >>>>>>>> and helper functions for zone device folios, to check if a folio is
> >>>>>>>> device private and helpers for setting zone device data.
> >>>>>>>>
> >>>>>>>> When large folios are used, the existing page_free() callback in
> >>>>>>>> pgmap is called when the folio is freed, this is true for both
> >>>>>>>> PAGE_SIZE and higher order pages.
> >>>>>>>>
> >>>>>>>> Zone device private large folios do not support deferred split and
> >>>>>>>> scan like normal THP folios.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> >>>>>>>> Cc: David Hildenbrand <david@redhat.com>
> >>>>>>>> Cc: Zi Yan <ziy@nvidia.com>
> >>>>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> >>>>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
> >>>>>>>> Cc: Byungchul Park <byungchul@sk.com>
> >>>>>>>> Cc: Gregory Price <gourry@gourry.net>
> >>>>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> >>>>>>>> Cc: Alistair Popple <apopple@nvidia.com>
> >>>>>>>> Cc: Oscar Salvador <osalvador@suse.de>
> >>>>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >>>>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>>>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> >>>>>>>> Cc: Nico Pache <npache@redhat.com>
> >>>>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
> >>>>>>>> Cc: Dev Jain <dev.jain@arm.com>
> >>>>>>>> Cc: Barry Song <baohua@kernel.org>
> >>>>>>>> Cc: Lyude Paul <lyude@redhat.com>
> >>>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
> >>>>>>>> Cc: David Airlie <airlied@gmail.com>
> >>>>>>>> Cc: Simona Vetter <simona@ffwll.ch>
> >>>>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
> >>>>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
> >>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
> >>>>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
> >>>>>>>> ---
> >>>>>>>>   include/linux/memremap.h | 10 +++++++++-
> >>>>>>>>   mm/memremap.c            | 34 +++++++++++++++++++++-------------
> >>>>>>>>   mm/rmap.c                |  6 +++++-
> >>>>>>>>   3 files changed, 35 insertions(+), 15 deletions(-)
> >>>>>>>>
> >>>>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> >>>>>>>> index e5951ba12a28..9c20327c2be5 100644
> >>>>>>>> --- a/include/linux/memremap.h
> >>>>>>>> +++ b/include/linux/memremap.h
> >>>>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
> >>>>>>>>   }
> >>>>>>>>
> >>>>>>>>   #ifdef CONFIG_ZONE_DEVICE
> >>>>>>>> -void zone_device_page_init(struct page *page);
> >>>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
> >>>>>>>>   void *memremap_pages(struct dev_pagemap *pgmap, int nid);
> >>>>>>>>   void memunmap_pages(struct dev_pagemap *pgmap);
> >>>>>>>>   void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
> >>>>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
> >>>>>>>>   bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
> >>>>>>>>
> >>>>>>>>   unsigned long memremap_compat_align(void);
> >>>>>>>> +
> >>>>>>>> +static inline void zone_device_page_init(struct page *page)
> >>>>>>>> +{
> >>>>>>>> +	struct folio *folio = page_folio(page);
> >>>>>>>> +
> >>>>>>>> +	zone_device_folio_init(folio, 0);
> >>>>>>>
> >>>>>>> I assume it is for legacy code, where only non-compound page exists?
> >>>>>>>
> >>>>>>> It seems that you assume @page is always order-0, but there is no check
> >>>>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
> >>>>>>> above it would be useful to detect misuse.
> >>>>>>>
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>>   #else
> >>>>>>>>   static inline void *devm_memremap_pages(struct device *dev,
> >>>>>>>>   		struct dev_pagemap *pgmap)
> >>>>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
> >>>>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
> >>>>>>>> --- a/mm/memremap.c
> >>>>>>>> +++ b/mm/memremap.c
> >>>>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
> >>>>>>>>   void free_zone_device_folio(struct folio *folio)
> >>>>>>>>   {
> >>>>>>>>   	struct dev_pagemap *pgmap = folio->pgmap;
> >>>>>>>> +	unsigned long nr = folio_nr_pages(folio);
> >>>>>>>> +	int i;
> >>>>>>>>
> >>>>>>>>   	if (WARN_ON_ONCE(!pgmap))
> >>>>>>>>   		return;
> >>>>>>>>
> >>>>>>>>   	mem_cgroup_uncharge(folio);
> >>>>>>>>
> >>>>>>>> -	/*
> >>>>>>>> -	 * Note: we don't expect anonymous compound pages yet. Once supported
> >>>>>>>> -	 * and we could PTE-map them similar to THP, we'd have to clear
> >>>>>>>> -	 * PG_anon_exclusive on all tail pages.
> >>>>>>>> -	 */
> >>>>>>>>   	if (folio_test_anon(folio)) {
> >>>>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>>>>>>> -		__ClearPageAnonExclusive(folio_page(folio, 0));
> >>>>>>>> +		for (i = 0; i < nr; i++)
> >>>>>>>> +			__ClearPageAnonExclusive(folio_page(folio, i));
> >>>>>>>> +	} else {
> >>>>>>>> +		VM_WARN_ON_ONCE(folio_test_large(folio));
> >>>>>>>>   	}
> >>>>>>>>
> >>>>>>>>   	/*
> >>>>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
> >>>>>>>>   	case MEMORY_DEVICE_COHERENT:
> >>>>>>>>   		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
> >>>>>>>>   			break;
> >>>>>>>> -		pgmap->ops->page_free(folio_page(folio, 0));
> >>>>>>>> -		put_dev_pagemap(pgmap);
> >>>>>>>> +		pgmap->ops->page_free(&folio->page);
> >>>>>>>> +		percpu_ref_put_many(&folio->pgmap->ref, nr);
> >>>>>>>>   		break;
> >>>>>>>>
> >>>>>>>>   	case MEMORY_DEVICE_GENERIC:
> >>>>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
> >>>>>>>>   	}
> >>>>>>>>   }
> >>>>>>>>
> >>>>>>>> -void zone_device_page_init(struct page *page)
> >>>>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
> >>>>>>>>   {
> >>>>>>>> +	struct page *page = folio_page(folio, 0);
> >>>>>>>
> >>>>>>> It is strange to see a folio is converted back to page in
> >>>>>>> a function called zone_device_folio_init().
> >>>>>>>
> >>>>>>>> +
> >>>>>>>> +	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>>>>>>> +
> >>>>>>>>   	/*
> >>>>>>>>   	 * Drivers shouldn't be allocating pages after calling
> >>>>>>>>   	 * memunmap_pages().
> >>>>>>>>   	 */
> >>>>>>>> -	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
> >>>>>>>> -	set_page_count(page, 1);
> >>>>>>>> +	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >>>>>>>> +	folio_set_count(folio, 1);
> >>>>>>>>   	lock_page(page);
> >>>>>>>> +
> >>>>>>>> +	if (order > 1) {
> >
> > Why is this only called for order > 1 rather than order > 0 ?
> >
> >>>>>>>> +		prep_compound_page(page, order);
> >>>>>>>> +		folio_set_large_rmappable(folio);
> >>>>>>>> +	}
> >>>>>>>
> >>>>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
> >>>>>>> is called.
> >>>>>>>
> >>>>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
> >>>>>>> implementations are inverse. They should follow the same pattern
> >>>>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
> >>>>>>> zone_device_page_init() does the actual initialization and
> >>>>>>> zone_device_folio_init() just convert a page to folio.
> >>>>>>>
> >>>>>>> Something like:
> >>>>>>>
> >>>>>>> void zone_device_page_init(struct page *page, unsigned int order)
> >>>>>>> {
> >>>>>>> 	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>>>>>>
> >>>>>>> 	/*
> >>>>>>> 	 * Drivers shouldn't be allocating pages after calling
> >>>>>>> 	 * memunmap_pages().
> >>>>>>> 	 */
> >>>>>>>
> >>>>>>>      WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >>>>>>> 	
> >>>>>>> 	/*
> >>>>>>> 	 * anonymous folio does not support order-1, high order file-backed folio
> >>>>>>> 	 * is not supported at all.
> >>>>>>> 	 */
> >
> > I guess that answers my question :-)
> >
> >>>>>>> 	VM_WARN_ON_ONCE(order == 1);
> >>>>>>>
> >>>>>>> 	if (order > 1)
> >>>>>>> 		prep_compound_page(page, order);
> >>>>>>>
> >>>>>>> 	/* page has to be compound head here */
> >>>>>>> 	set_page_count(page, 1);
> >>>>>>> 	lock_page(page);
> >>>>>>> }
> >>>>>>>
> >>>>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
> >>>>>>> {
> >>>>>>> 	struct page *page = folio_page(folio, 0);
> >>>>>>>
> >>>>>>> 	zone_device_page_init(page, order);
> >>>>>>> 	page_rmappable_folio(page);
> >>>>>>> }
> >>>>>>>
> >>>>>>> Or
> >>>>>>>
> >>>>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
> >>>>>>> {
> >>>>>>> 	zone_device_page_init(page, order);
> >>>>>>> 	return page_rmappable_folio(page);
> >>>>>>> }
> >>>>>>>
> >>>>>>>
> >>>>>>> Then, it comes to free_zone_device_folio() above,
> >>>>>>> I feel that pgmap->ops->page_free() should take an additional order
> >>>>>>> parameter to free a compound page like free_frozen_pages().
> >>>>>>>
> >>>>>>>
> >>>>>>> This is my impression after reading the patch and zone device page code.
> >>>>>>>
> >>>>>>> Alistair and David can correct me if this is wrong, since I am new to
> >>>>>>> zone device page code.
> >>>>>>> 	
> >>>>>>
> >>>>>> Thanks, I did not want to change zone_device_page_init() for several
> >>>>>> drivers (outside my test scope) that already assume it has an order size of 0.
> >
> > It's a trivial change, so I don't think avoiding changes to other drivers should
> > be a concern.
> >
> >>>>>
> >>>>> But my proposed zone_device_page_init() should still work for order-0
> >>>>> pages. You just need to change call site to add 0 as a new parameter.
> >>>>>
> >>>>
> >>>> I did not want to change existing callers (increases testing impact)
> >>>> without a strong reason.
> >>>>
> >>>>>
> >>>>> One strange thing I found in the original zone_device_page_init() is
> >>>>> the use of page_pgmap() in
> >>>>> WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order)).
> >>>>> page_pgmap() calls page_folio() on the given page to access pgmap field.
> >>>>> And pgmap field is only available in struct folio. The code initializes
> >>>>> struct page, but in middle it suddenly finds the page is actually a folio,
> >>>>> then treat it as a page afterwards. I wonder if it can be done better.
> >>>>>
> >>>>> This might be a question to Alistair, since he made the change.
> >
> > Hello! I might be him :)
> >
> > I think this situation is just historical - when I originally wrote
> > zone_device_page_init() the pgmap was stored on the page rather than the folio.
> > That only changed fairly recently with commit 82ba975e4c43 ("mm: allow compound
> > zone device pages").
> >
> > The reason pgmap is now only available on the folio is described in the
> > commit log. The TLDR is switching FS DAX to use compound pages required
> > page->compound_head to be available for use, and that was being shared
> > with page->pgmap. So the solution was to move pgmap to the folio freeing up
> > page->compound_head for use on tail pages.
> >
> > The whole percpu pgmap->ref could actually now go away - I've debated removing
> > it but haven't found the motivation as it provides a small advantage on driver
> > tear down. Basically it just tracks how many pages are allocated in the pgmap
> > so drivers could use that to determine if they need to trigger migrations before
> > tearing down the pgmap.
> >
> > The alternative is just to loop over every page in the pgmap to ensure the
> > folio/page refcounts are 0 before tear down.
> >
> >>>>>
> >>>>
> >>>> I'll let him answer it :)
> >>>
> >>> Not him, but I think this goes back to my question raised in my other reply: When would we allocate "struct folio" in the future.
> >>>
> >>> If it's "always" then actually most of the zone-device code would only ever operate on folios and never on pages in the future.
> >>>
> >>> I recall during a discussion at LSF/MM I raised that, and the answer was (IIRC) that we will allocate "struct folio" as we will initialize the memmap for dax.
> >
> > Sounds about right.
> >
> >>> So essentially, we'd always have folios and would never really have to operate on pages.
> >
> > Yeah, I think I mentioned to Matthew at LSF/MM that I thought ZONE_DEVICE (and
> > in particular ZONE_DEVICE_PRIVATE) might be a good candidate to experiment with
> > removing struct pages entirely and switching to memdesc's or whatever. Because
> > we should, in theory at least, only need to operate on folio's. But I'm still a
> > little vague on the details how that would actually work. It's been on my TODO
> > list for a while, so maybe I will try and look at it for LPC as a healthy bit of
> > conference driven development.
> >
> >> Hmm, then what is the point of having “struct folio”, which originally is
> >> added to save compound_head() calls, where everything is a folio in device
> >> private world? We might need DAX people to explain the rationale of
> >> “always struct folio”.
> >
> > Longer term isn't there an aim to remove struct page? So I assumed moving to
> 
> Right. But my current impression based on my code reading and this patchset
> is that every device private page is a folio. To form a high order folio,
> each device private folio is converted to page, prep_compound*()’d, then
> converted back to folio. Based on what you said above, this weird conversion
> might be temporary until the code is switched to memdesc.
> 
> I am looking forward to more details on how device private will be switched
> to memdesc from you. :)

Thanks, so am I :-P

For device private I think the first step is to move away from using
pfn_to_page()/page_to_pfn() and instead create a "device pfn" that doesn't exist
in the physical direct map. That in itself would solve some problems (such as
supporting device private pages on ARM) and I hope to have something posted in
the next couple of weeks.
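
As a purely illustrative sketch (nothing has been posted yet, and every name below is made up), such a "device pfn" could simply be a distinct wrapper type, so that any leftover pfn_to_page()/page_to_pfn() user becomes a compile error rather than a silent lookup into the direct map:

typedef struct {
	unsigned long val;
} device_pfn_t;

static inline device_pfn_t make_device_pfn(unsigned long val)
{
	return (device_pfn_t){ .val = val };
}

static inline unsigned long device_pfn_val(device_pfn_t dpfn)
{
	return dpfn.val;
}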

> > folio's was part of that effort. As you say though many of the clean-ups thus
> > far related to switching ZONE_DEVICE to folios have indeed just been about
> > removing compound_head() calls.
> 
> 
> 
> Best Regards,
> Yan, Zi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 01/15] mm/zone_device: support large zone device private folios
  2025-09-25 12:02             ` Balbir Singh
@ 2025-09-26  1:50               ` Alistair Popple
  0 siblings, 0 replies; 57+ messages in thread
From: Alistair Popple @ 2025-09-26  1:50 UTC (permalink / raw)
  To: Balbir Singh
  Cc: David Hildenbrand, Zi Yan, linux-kernel, linux-mm, damon,
	dri-devel, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price,
	Ying Huang, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
	Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 2025-09-25 at 22:02 +1000, Balbir Singh <balbirs@nvidia.com> wrote...
> On 9/25/25 19:43, David Hildenbrand wrote:
> > On 25.09.25 01:58, Alistair Popple wrote:
> >> On 2025-09-25 at 03:36 +1000, Zi Yan <ziy@nvidia.com> wrote...
> >>> On 24 Sep 2025, at 6:55, David Hildenbrand wrote:
> >>>
> >>>> On 18.09.25 04:49, Zi Yan wrote:
> >>>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
> >>>>>
> >>>>>> Add routines to support allocation of large order zone device folios
> >>>>>> and helper functions for zone device folios, to check if a folio is
> >>>>>> device private and helpers for setting zone device data.
> >>>>>>
> >>>>>> When large folios are used, the existing page_free() callback in
> >>>>>> pgmap is called when the folio is freed, this is true for both
> >>>>>> PAGE_SIZE and higher order pages.
> >>>>>>
> >>>>>> Zone device private large folios do not support deferred split and
> >>>>>> scan like normal THP folios.
> >>>>>>
> >>>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> >>>>>> Cc: David Hildenbrand <david@redhat.com>
> >>>>>> Cc: Zi Yan <ziy@nvidia.com>
> >>>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> >>>>>> Cc: Rakie Kim <rakie.kim@sk.com>
> >>>>>> Cc: Byungchul Park <byungchul@sk.com>
> >>>>>> Cc: Gregory Price <gourry@gourry.net>
> >>>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> >>>>>> Cc: Alistair Popple <apopple@nvidia.com>
> >>>>>> Cc: Oscar Salvador <osalvador@suse.de>
> >>>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >>>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> >>>>>> Cc: Nico Pache <npache@redhat.com>
> >>>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
> >>>>>> Cc: Dev Jain <dev.jain@arm.com>
> >>>>>> Cc: Barry Song <baohua@kernel.org>
> >>>>>> Cc: Lyude Paul <lyude@redhat.com>
> >>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
> >>>>>> Cc: David Airlie <airlied@gmail.com>
> >>>>>> Cc: Simona Vetter <simona@ffwll.ch>
> >>>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
> >>>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
> >>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
> >>>>>> Cc: Francois Dugast <francois.dugast@intel.com>
> >>>>>> ---
> >>>>>>    include/linux/memremap.h | 10 +++++++++-
> >>>>>>    mm/memremap.c            | 34 +++++++++++++++++++++-------------
> >>>>>>    mm/rmap.c                |  6 +++++-
> >>>>>>    3 files changed, 35 insertions(+), 15 deletions(-)
> >>>>>>
> >>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> >>>>>> index e5951ba12a28..9c20327c2be5 100644
> >>>>>> --- a/include/linux/memremap.h
> >>>>>> +++ b/include/linux/memremap.h
> >>>>>> @@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
> >>>>>>    }
> >>>>>>
> >>>>>>    #ifdef CONFIG_ZONE_DEVICE
> >>>>>> -void zone_device_page_init(struct page *page);
> >>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order);
> >>>>>>    void *memremap_pages(struct dev_pagemap *pgmap, int nid);
> >>>>>>    void memunmap_pages(struct dev_pagemap *pgmap);
> >>>>>>    void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
> >>>>>> @@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
> >>>>>>    bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
> >>>>>>
> >>>>>>    unsigned long memremap_compat_align(void);
> >>>>>> +
> >>>>>> +static inline void zone_device_page_init(struct page *page)
> >>>>>> +{
> >>>>>> +    struct folio *folio = page_folio(page);
> >>>>>> +
> >>>>>> +    zone_device_folio_init(folio, 0);
> >>>>>
> >>>>> I assume it is for legacy code, where only non-compound page exists?
> >>>>>
> >>>>> It seems that you assume @page is always order-0, but there is no check
> >>>>> for it. Adding VM_WARN_ON_ONCE_FOLIO(folio_order(folio) != 0, folio)
> >>>>> above it would be useful to detect misuse.
> >>>>>
> >>>>>> +}
> >>>>>> +
> >>>>>>    #else
> >>>>>>    static inline void *devm_memremap_pages(struct device *dev,
> >>>>>>            struct dev_pagemap *pgmap)
> >>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
> >>>>>> index 46cb1b0b6f72..a8481ebf94cc 100644
> >>>>>> --- a/mm/memremap.c
> >>>>>> +++ b/mm/memremap.c
> >>>>>> @@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
> >>>>>>    void free_zone_device_folio(struct folio *folio)
> >>>>>>    {
> >>>>>>        struct dev_pagemap *pgmap = folio->pgmap;
> >>>>>> +    unsigned long nr = folio_nr_pages(folio);
> >>>>>> +    int i;
> >>>>>>
> >>>>>>        if (WARN_ON_ONCE(!pgmap))
> >>>>>>            return;
> >>>>>>
> >>>>>>        mem_cgroup_uncharge(folio);
> >>>>>>
> >>>>>> -    /*
> >>>>>> -     * Note: we don't expect anonymous compound pages yet. Once supported
> >>>>>> -     * and we could PTE-map them similar to THP, we'd have to clear
> >>>>>> -     * PG_anon_exclusive on all tail pages.
> >>>>>> -     */
> >>>>>>        if (folio_test_anon(folio)) {
> >>>>>> -        VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>>>>> -        __ClearPageAnonExclusive(folio_page(folio, 0));
> >>>>>> +        for (i = 0; i < nr; i++)
> >>>>>> +            __ClearPageAnonExclusive(folio_page(folio, i));
> >>>>>> +    } else {
> >>>>>> +        VM_WARN_ON_ONCE(folio_test_large(folio));
> >>>>>>        }
> >>>>>>
> >>>>>>        /*
> >>>>>> @@ -456,8 +455,8 @@ void free_zone_device_folio(struct folio *folio)
> >>>>>>        case MEMORY_DEVICE_COHERENT:
> >>>>>>            if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
> >>>>>>                break;
> >>>>>> -        pgmap->ops->page_free(folio_page(folio, 0));
> >>>>>> -        put_dev_pagemap(pgmap);
> >>>>>> +        pgmap->ops->page_free(&folio->page);
> >>>>>> +        percpu_ref_put_many(&folio->pgmap->ref, nr);
> >>>>>>            break;
> >>>>>>
> >>>>>>        case MEMORY_DEVICE_GENERIC:
> >>>>>> @@ -480,14 +479,23 @@ void free_zone_device_folio(struct folio *folio)
> >>>>>>        }
> >>>>>>    }
> >>>>>>
> >>>>>> -void zone_device_page_init(struct page *page)
> >>>>>> +void zone_device_folio_init(struct folio *folio, unsigned int order)
> >>>>>>    {
> >>>>>> +    struct page *page = folio_page(folio, 0);
> >>>>>
> >>>>> It is strange to see a folio is converted back to page in
> >>>>> a function called zone_device_folio_init().
> >>>>>
> >>>>>> +
> >>>>>> +    VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>>>>> +
> >>>>>>        /*
> >>>>>>         * Drivers shouldn't be allocating pages after calling
> >>>>>>         * memunmap_pages().
> >>>>>>         */
> >>>>>> -    WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
> >>>>>> -    set_page_count(page, 1);
> >>>>>> +    WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >>>>>> +    folio_set_count(folio, 1);
> >>>>>>        lock_page(page);
> >>>>>> +
> >>>>>> +    if (order > 1) {
> >>>>>> +        prep_compound_page(page, order);
> >>>>>> +        folio_set_large_rmappable(folio);
> >>>>>> +    }
> >>>>>
> >>>>> OK, so basically, @folio is not a compound page yet when zone_device_folio_init()
> >>>>> is called.
> >>>>>
> >>>>> I feel that your zone_device_page_init() and zone_device_folio_init()
> >>>>> implementations are inverse. They should follow the same pattern
> >>>>> as __alloc_pages_noprof() and __folio_alloc_noprof(), where
> >>>>> zone_device_page_init() does the actual initialization and
> >>>>> zone_device_folio_init() just convert a page to folio.
> >>>>>
> >>>>> Something like:
> >>>>>
> >>>>> void zone_device_page_init(struct page *page, unsigned int order)
> >>>>> {
> >>>>>     VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
> >>>>>
> >>>>>     /*
> >>>>>      * Drivers shouldn't be allocating pages after calling
> >>>>>      * memunmap_pages().
> >>>>>      */
> >>>>>
> >>>>>       WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
> >>>>>     
> >>>>>     /*
> >>>>>      * anonymous folio does not support order-1, high order file-backed folio
> >>>>>      * is not supported at all.
> >>>>>      */
> >>>>>     VM_WARN_ON_ONCE(order == 1);
> >>>>>
> >>>>>     if (order > 1)
> >>>>>         prep_compound_page(page, order);
> >>>>>
> >>>>>     /* page has to be compound head here */
> >>>>>     set_page_count(page, 1);
> >>>>>     lock_page(page);
> >>>>> }
> >>>>>
> >>>>> void zone_device_folio_init(struct folio *folio, unsigned int order)
> >>>>> {
> >>>>>     struct page *page = folio_page(folio, 0);
> >>>>>
> >>>>>     zone_device_page_init(page, order);
> >>>>>     page_rmappable_folio(page);
> >>>>> }
> >>>>>
> >>>>> Or
> >>>>>
> >>>>> struct folio *zone_device_folio_init(struct page *page, unsigned int order)
> >>>>> {
> >>>>>     zone_device_page_init(page, order);
> >>>>>     return page_rmappable_folio(page);
> >>>>> }
> >>>>
> >>>> I think the problem is that it will all be weird once we dynamically allocate "struct folio".
> >>>>
> >>>> I have not yet a clear understanding on how that would really work.
> >>>>
> >>>> For example, should it be pgmap->ops->page_folio() ?
> >>>>
> >>>> Who allocates the folio? Do we allocate all order-0 folios initially, to then merge them when constructing large folios? How do we manage the "struct folio" during such merging splitting?
> >>>
> >>> Right. Either we would waste memory by simply concatenating all “struct folio”
> >>> and putting paddings at the end, or we would free tail “struct folio” first,
> >>> then allocate tail “struct page”. Both are painful and do not match core mm’s
> >>> memdesc pattern, where “struct folio” is allocated when caller is asking
> >>> for a folio. If “struct folio” is always allocated, there is no difference
> >>> between “struct folio” and “struct page”.
> >>
> >> As mentioned in my other reply I need to investigate this some more, but I
> >> don't think we _need_ to always allocate folios (or pages for that matter).
> >> The ZONE_DEVICE code just uses folios/pages for interacting with the core mm,
> >> not for managing the device memory itself, so we should be able to make it more
> >> closely match the memdesc pattern. It's just I'm still a bit unsure what that
> >> pattern will actually look like.
> > 
> > I think one reason might be that, in contrast to ordinary pages, zone-device memory is only ever used for folios, right?
> > 
> > Would there be a user that just allocates pages and does not want a folio associated with them?

I don't think so, other than of course zero-order folios. There's probably just
some confusion because a page and a zero-order folio are not different at the
moment.

> > 
> 
> A non-THP-aware driver would be a potential use case for zero-order folios (also pages at the moment). 
>
> > It's a good question what that would look like once we have dynamically allocated struct folio ...
> 
> I think for dynamically allocated folios we could probably do away with pages, but I'm not 100% sure at the moment.

Yeah, I'm not 100% sure either but that sounds about right.

> > 
> >>
> >>>>
> >>>> With that in mind, I don't really know what the proper interface should be today.
> >>>>
> >>>>
> >>>> zone_device_folio_init(struct page *page, unsigned int order)
> >>>>
> >>>> looks cleaner, agreed.
> >>
> >> Agreed.
> >>
> >>>>>
> >>>>>
> >>>>> Then, when it comes to free_zone_device_folio() above,
> >>>>> I feel that pgmap->ops->page_free() should take an additional order
> >>>>> parameter to free a compound page like free_frozen_pages().
> >>
> >> Where would the order parameter come from? Presumably
> >> folio_order(compound_head(page)), in which case shouldn't the op actually just be
> >> pgmap->ops->folio_free()?
> > 
> > Yeah, that's also what I thought.
> > 
> 
> Balbir
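
A minimal sketch of the folio_free() idea discussed above, for illustration
only: the op does not exist in dev_pagemap_ops today, and the names here are
made up. The point is that the driver receives the whole folio and derives
the order itself, mirroring the 1 << order references taken at init time.

/* Hypothetical op, not part of this series nor of dev_pagemap_ops today. */
struct hypothetical_pgmap_ops {
	void (*folio_free)(struct folio *folio);
};

/*
 * Sketch of the core-mm side: the driver gets the folio (and can use
 * folio_order()), and core mm drops one pgmap reference per base page,
 * matching the percpu_ref_tryget_many(..., 1 << order) taken at init.
 */
static void free_zone_device_folio_sketch(struct folio *folio,
					  const struct hypothetical_pgmap_ops *ops)
{
	unsigned int order = folio_order(folio);

	ops->folio_free(folio);
	percpu_ref_put_many(&page_pgmap(folio_page(folio, 0))->ref, 1 << order);
}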


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations
  2025-09-25  9:53     ` David Hildenbrand
@ 2025-09-26  1:53       ` Alistair Popple
  0 siblings, 0 replies; 57+ messages in thread
From: Alistair Popple @ 2025-09-26  1:53 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Balbir Singh, linux-kernel, linux-mm, damon, dri-devel,
	Matthew Brost, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Francois Dugast

On 2025-09-25 at 19:53 +1000, David Hildenbrand <david@redhat.com> wrote...
> On 25.09.25 02:25, Alistair Popple wrote:
> > On 2025-09-16 at 22:21 +1000, Balbir Singh <balbirs@nvidia.com> wrote...
> > > Extend core huge page management functions to handle device-private THP
> > > entries.  This enables proper handling of large device-private folios in
> > > fundamental MM operations.
> > > 
> > > The following functions have been updated:
> > > 
> > > - copy_huge_pmd(): Handle device-private entries during fork/clone
> > > - zap_huge_pmd(): Properly free device-private THP during munmap
> > > - change_huge_pmd(): Support protection changes on device-private THP
> > > - __pte_offset_map(): Add device-private entry awareness
> > > 
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> > > Cc: David Hildenbrand <david@redhat.com>
> > > Cc: Zi Yan <ziy@nvidia.com>
> > > Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> > > Cc: Rakie Kim <rakie.kim@sk.com>
> > > Cc: Byungchul Park <byungchul@sk.com>
> > > Cc: Gregory Price <gourry@gourry.net>
> > > Cc: Ying Huang <ying.huang@linux.alibaba.com>
> > > Cc: Alistair Popple <apopple@nvidia.com>
> > > Cc: Oscar Salvador <osalvador@suse.de>
> > > Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > > Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> > > Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> > > Cc: Nico Pache <npache@redhat.com>
> > > Cc: Ryan Roberts <ryan.roberts@arm.com>
> > > Cc: Dev Jain <dev.jain@arm.com>
> > > Cc: Barry Song <baohua@kernel.org>
> > > Cc: Lyude Paul <lyude@redhat.com>
> > > Cc: Danilo Krummrich <dakr@kernel.org>
> > > Cc: David Airlie <airlied@gmail.com>
> > > Cc: Simona Vetter <simona@ffwll.ch>
> > > Cc: Ralph Campbell <rcampbell@nvidia.com>
> > > Cc: Mika Penttilä <mpenttil@redhat.com>
> > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > Cc: Francois Dugast <francois.dugast@intel.com>
> > > ---
> > >   include/linux/swapops.h | 32 +++++++++++++++++++++++
> > >   mm/huge_memory.c        | 56 ++++++++++++++++++++++++++++++++++-------
> > >   mm/pgtable-generic.c    |  2 +-
> > >   3 files changed, 80 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/include/linux/swapops.h b/include/linux/swapops.h
> > > index 64ea151a7ae3..2687928a8146 100644
> > > --- a/include/linux/swapops.h
> > > +++ b/include/linux/swapops.h
> > > @@ -594,10 +594,42 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
> > >   }
> > >   #endif  /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
> > > +#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_ENABLE_THP_MIGRATION)
> > > +
> > > +/**
> > > + * is_pmd_device_private_entry() - Check if PMD contains a device private swap entry
> > > + * @pmd: The PMD to check
> > > + *
> > > + * Returns true if the PMD contains a swap entry that represents a device private
> > > + * page mapping. This is used for zone device private pages that have been
> > > + * swapped out but still need special handling during various memory management
> > > + * operations.
> > > + *
> > > + * Return: 1 if PMD contains device private entry, 0 otherwise
> > > + */
> > > +static inline int is_pmd_device_private_entry(pmd_t pmd)
> > > +{
> > > +	return is_swap_pmd(pmd) && is_device_private_entry(pmd_to_swp_entry(pmd));
> > > +}
> > > +
> > > +#else /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
> > > +
> > > +static inline int is_pmd_device_private_entry(pmd_t pmd)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > > +#endif /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
> > > +
> > >   static inline int non_swap_entry(swp_entry_t entry)
> > >   {
> > >   	return swp_type(entry) >= MAX_SWAPFILES;
> > >   }
> > > +static inline int is_pmd_non_present_folio_entry(pmd_t pmd)
> > 
> > I can't think of a better name either, although I am curious why open-coding it
> > was so nasty given we don't have the equivalent for PTE entries. I will go read
> > the previous discussion.
> 
> I think for PTEs we just handle all cases (markers, hwpoison, etc.) properly,
> many of them not being supported yet at the PMD level. See copy_nonpresent_pte() as
> an example.
> 
> We don't even have helpers like is_pte_migration_entry().
> 
> > > diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> > > index 567e2d084071..0c847cdf4fd3 100644
> > > --- a/mm/pgtable-generic.c
> > > +++ b/mm/pgtable-generic.c
> > > @@ -290,7 +290,7 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
> > >   	if (pmdvalp)
> > >   		*pmdvalp = pmdval;
> > > -	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
> > > +	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
> > 
> > Why isn't is_pmd_non_present_folio_entry() used here?
> 
> 
> I thought I argued that
> 
> 	if (!pmd_present(pmdval))
> 
> Should be sufficient here in my last review?

My bad, I'm a bit behind on catching up with the last review comments. But I agree
it's sufficient; I was just curious why it wasn't used, so I will go read your
previous comments! Thanks.

> We want to detect page tables we can map after all.
> -- 
> Cheers
> 
> David / dhildenb
> 
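
The body of is_pmd_non_present_folio_entry() is trimmed in the hunk quoted
above; judging by the name and the discussion, it presumably combines the two
non-present PMD cases that still reference a folio. A guess, not the patch
text:

static inline int is_pmd_non_present_folio_entry(pmd_t pmd)
{
	/* migration entries and device-private entries both reference a folio */
	return is_pmd_migration_entry(pmd) || is_pmd_device_private_entry(pmd);
}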


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 05/15] mm/migrate_device: handle partially mapped folios during collection
  2025-09-23 15:56       ` Karim Manaouil
  2025-09-24  4:47         ` Balbir Singh
@ 2025-09-30 11:58         ` Balbir Singh
  1 sibling, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-30 11:58 UTC (permalink / raw)
  To: Karim Manaouil
  Cc: Zi Yan, linux-kernel, linux-mm, damon, dri-devel,
	David Hildenbrand, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
	Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
	David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/24/25 01:56, Karim Manaouil wrote:
> On Tue, Sep 23, 2025 at 01:44:20PM +1000, Balbir Singh wrote:
>> On 9/23/25 12:23, Zi Yan wrote:
>>> On 16 Sep 2025, at 8:21, Balbir Singh wrote:
>>>
>>>> Extend migrate_vma_collect_pmd() to handle partially mapped large folios
>>>> that require splitting before migration can proceed.
>>>>
>>>> During PTE walk in the collection phase, if a large folio is only
>>>> partially mapped in the migration range, it must be split to ensure the
>>>> folio is correctly migrated.
>>>>
>>>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>>>> Cc: David Hildenbrand <david@redhat.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>>>> Cc: Rakie Kim <rakie.kim@sk.com>
>>>> Cc: Byungchul Park <byungchul@sk.com>
>>>> Cc: Gregory Price <gourry@gourry.net>
>>>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>>>> Cc: Alistair Popple <apopple@nvidia.com>
>>>> Cc: Oscar Salvador <osalvador@suse.de>
>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>>>> Cc: Nico Pache <npache@redhat.com>
>>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>>> Cc: Dev Jain <dev.jain@arm.com>
>>>> Cc: Barry Song <baohua@kernel.org>
>>>> Cc: Lyude Paul <lyude@redhat.com>
>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>> Cc: David Airlie <airlied@gmail.com>
>>>> Cc: Simona Vetter <simona@ffwll.ch>
>>>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>>>> Cc: Mika Penttilä <mpenttil@redhat.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Francois Dugast <francois.dugast@intel.com>
>>>> ---
>>>>  mm/migrate_device.c | 82 +++++++++++++++++++++++++++++++++++++++++++++
>>>>  1 file changed, 82 insertions(+)
>>>>
>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>>> index abd9f6850db6..70c0601f70ea 100644
>>>> --- a/mm/migrate_device.c
>>>> +++ b/mm/migrate_device.c
>>>> @@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
>>>>  	return 0;
>>>>  }
>>>>
>>>> +/**
>>>> + * migrate_vma_split_folio() - Helper function to split a THP folio
>>>> + * @folio: the folio to split
>>>> + * @fault_page: struct page associated with the fault if any
>>>> + *
>>>> + * Returns 0 on success
>>>> + */
>>>> +static int migrate_vma_split_folio(struct folio *folio,
>>>> +				   struct page *fault_page)
>>>> +{
>>>> +	int ret;
>>>> +	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
>>>> +	struct folio *new_fault_folio = NULL;
>>>> +
>>>> +	if (folio != fault_folio) {
>>>> +		folio_get(folio);
>>>> +		folio_lock(folio);
>>>> +	}
>>>> +
>>>> +	ret = split_folio(folio);
>>>> +	if (ret) {
>>>> +		if (folio != fault_folio) {
>>>> +			folio_unlock(folio);
>>>> +			folio_put(folio);
>>>> +		}
>>>> +		return ret;
>>>> +	}
>>>> +
>>>> +	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
>>>> +
>>>> +	/*
>>>> +	 * Ensure the lock is held on the correct
>>>> +	 * folio after the split
>>>> +	 */
>>>> +	if (!new_fault_folio) {
>>>> +		folio_unlock(folio);
>>>> +		folio_put(folio);
>>>> +	} else if (folio != new_fault_folio) {
>>>> +		folio_get(new_fault_folio);
>>>> +		folio_lock(new_fault_folio);
>>>> +		folio_unlock(folio);
>>>> +		folio_put(folio);
>>>> +	}
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>>  static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>  				   unsigned long start,
>>>>  				   unsigned long end,
>>>> @@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>  			 * page table entry. Other special swap entries are not
>>>>  			 * migratable, and we ignore regular swapped page.
>>>>  			 */
>>>> +			struct folio *folio;
>>>> +
>>>>  			entry = pte_to_swp_entry(pte);
>>>>  			if (!is_device_private_entry(entry))
>>>>  				goto next;
>>>> @@ -147,6 +196,23 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>  			    pgmap->owner != migrate->pgmap_owner)
>>>>  				goto next;
>>>>
>>>> +			folio = page_folio(page);
>>>> +			if (folio_test_large(folio)) {
>>>> +				int ret;
>>>> +
>>>> +				pte_unmap_unlock(ptep, ptl);
>>>> +				ret = migrate_vma_split_folio(folio,
>>>> +							  migrate->fault_page);
>>>> +
>>>> +				if (ret) {
>>>> +					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
>>>> +					goto next;
>>>> +				}
>>>> +
>>>> +				addr = start;
>>>> +				goto again;
>>>> +			}
>>>
>>> This does not look right to me.
>>>
>>> The folio here is device private, but migrate_vma_split_folio()
>>> calls split_folio(), which cannot handle device private folios yet.
>>> Your change to split_folio() is in Patch 10 and should be moved
>>> before this patch.
>>>
>>
>> Patch 10 splits the folio in the middle of migration (when we have already
>> converted the entries to migration entries). This patch relies on the
>> changes in patch 4. I agree the names are confusing; I'll reword the
>> functions.
> 
> Hi Balbir,
> 
> I am still reviewing the patches, but I think I agree with Zi here.
> 
> split_folio() will replace the PMD mappings of the huge folio with PTE
> mappings, but will also split the folio into smaller folios. The former
> is ok with this patch, but the latter is probably not correct if the folio
> is a zone device folio. The driver needs to know about the change, as
> usually the driver will have some sort of mapping between GPU physical
> memory chunks and their corresponding zone device pages.
> 

On further thought, there should be no in-tree driver affected by this, but I'll
definitely give it a further look.

Thanks,
Balbir
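
To illustrate the driver-side state Karim refers to, a made-up example only
(not taken from any in-tree driver): a device driver typically tracks its
memory in chunks, each backed by a range of zone-device pages, so a folio
split performed by core mm changes assumptions the driver may be making about
how a chunk is mapped.

/* Purely illustrative; the struct and field names are invented. */
struct hypothetical_gpu_chunk {
	struct dev_pagemap pagemap;	/* covers one contiguous device range */
	struct page *first_page;	/* first zone-device page of the chunk */
	unsigned long nr_pages;		/* e.g. 512 for a 2MB chunk */
	bool is_large_folio;		/* becomes stale if core mm splits the
					 * folio, hence the split callback added
					 * later in the series */
};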



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [v6 07/15] mm/memory/fault: add THP fault handling for zone device private pages
  2025-09-25 10:11   ` David Hildenbrand
@ 2025-09-30 12:00     ` Balbir Singh
  0 siblings, 0 replies; 57+ messages in thread
From: Balbir Singh @ 2025-09-30 12:00 UTC (permalink / raw)
  To: David Hildenbrand, linux-kernel, linux-mm, Alistair Popple
  Cc: damon, dri-devel, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park,
	Gregory Price, Ying Huang, Oscar Salvador, Lorenzo Stoakes,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lyude Paul, Danilo Krummrich, David Airlie,
	Simona Vetter, Ralph Campbell, Mika Penttilä,
	Matthew Brost, Francois Dugast

On 9/25/25 20:11, David Hildenbrand wrote:
> On 16.09.25 14:21, Balbir Singh wrote:
>> Implement CPU fault handling for zone device THP entries through
>> do_huge_pmd_device_private(), enabling transparent migration of
>> device-private large pages back to system memory on CPU access.
>>
>> When the CPU accesses a zone device THP entry, the fault handler calls the
>> device driver's migrate_to_ram() callback to migrate the entire large page
>> back to system memory.
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
>> Cc: Rakie Kim <rakie.kim@sk.com>
>> Cc: Byungchul Park <byungchul@sk.com>
>> Cc: Gregory Price <gourry@gourry.net>
>> Cc: Ying Huang <ying.huang@linux.alibaba.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
>> Cc: Nico Pache <npache@redhat.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Dev Jain <dev.jain@arm.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Mika Penttilä <mpenttil@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Francois Dugast <francois.dugast@intel.com>
>> ---
>>   include/linux/huge_mm.h |  7 +++++++
>>   mm/huge_memory.c        | 36 ++++++++++++++++++++++++++++++++++++
>>   mm/memory.c             |  5 +++--
>>   3 files changed, 46 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index f327d62fc985..2d669be7f1c8 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -496,6 +496,8 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
>>     vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
>>   +vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf);
>> +
>>   extern struct folio *huge_zero_folio;
>>   extern unsigned long huge_zero_pfn;
>>   @@ -671,6 +673,11 @@ static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>>       return 0;
>>   }
>>   +static inline vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
>> +{
>> +    return 0;
>> +}
>> +
>>   static inline bool is_huge_zero_folio(const struct folio *folio)
>>   {
>>       return false;
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 5291ee155a02..90a1939455dd 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1287,6 +1287,42 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>     }
>>   +vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
>> +{
>> +    struct vm_area_struct *vma = vmf->vma;
>> +    vm_fault_t ret = 0;
>> +    spinlock_t *ptl;
>> +    swp_entry_t swp_entry;
>> +    struct page *page;
>> +
>> +    if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
>> +        vma_end_read(vma);
>> +        return VM_FAULT_RETRY;
>> +    }
>> +
>> +    ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>> +    if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd))) {
>> +        spin_unlock(ptl);
>> +        return 0;
>> +    }
>> +
>> +    swp_entry = pmd_to_swp_entry(vmf->orig_pmd);
>> +    page = pfn_swap_entry_to_page(swp_entry);
>> +    vmf->page = page;
>> +    vmf->pte = NULL;
>> +    if (trylock_page(vmf->page)) {
> 
> We should be operating on a folio here. folio_trylock() + folio_get() + folio_unlock() + folio_put().
> 
>> +        get_page(page);
>> +        spin_unlock(ptl);
>> +        ret = page_pgmap(page)->ops->migrate_to_ram(vmf);
> 
> BTW, I was wondering whether it is really the right design to pass the vmf here. Likely the const vma+addr+folio could be sufficient. I did not look into all callbacks, though.
> 

The vmf is used for the address and other bits. FYI, this is no different from PTE fault handling and migrate_to_ram(). I can do the folio conversions.

Balbir
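
A minimal sketch of the folio conversion suggested above, against the hunk
quoted earlier; the unlock/put tail and the contended-lock fallback are
assumptions here, since that part of the function is not visible in the quote.

	/*
	 * Excerpt only: swp_entry, page, ret, ptl and vmf come from the
	 * surrounding do_huge_pmd_device_private(), as in the quoted hunk.
	 */
	struct folio *folio;

	swp_entry = pmd_to_swp_entry(vmf->orig_pmd);
	page = pfn_swap_entry_to_page(swp_entry);
	folio = page_folio(page);
	vmf->page = page;
	vmf->pte = NULL;
	if (folio_trylock(folio)) {
		folio_get(folio);
		spin_unlock(ptl);
		ret = page_pgmap(page)->ops->migrate_to_ram(vmf);
		folio_unlock(folio);	/* assumed: release after the callback */
		folio_put(folio);
	} else {
		spin_unlock(ptl);
		ret = VM_FAULT_RETRY;	/* assumed fallback when the folio lock is contended */
	}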


^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2025-09-30 12:00 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-16 12:21 [v6 00/15] mm: support device-private THP Balbir Singh
2025-09-16 12:21 ` [v6 01/15] mm/zone_device: support large zone device private folios Balbir Singh
2025-09-18  2:49   ` Zi Yan
2025-09-19  5:01     ` Balbir Singh
2025-09-19 13:26       ` Zi Yan
2025-09-23  3:47         ` Balbir Singh
2025-09-24 11:04           ` David Hildenbrand
2025-09-24 17:49             ` Zi Yan
2025-09-24 23:45               ` Alistair Popple
2025-09-25 15:27                 ` Zi Yan
2025-09-26  1:44                   ` Alistair Popple
2025-09-24 10:55     ` David Hildenbrand
2025-09-24 17:36       ` Zi Yan
2025-09-24 23:58         ` Alistair Popple
2025-09-25  0:05           ` Balbir Singh
2025-09-25 15:32             ` Zi Yan
2025-09-25  9:43           ` David Hildenbrand
2025-09-25 12:02             ` Balbir Singh
2025-09-26  1:50               ` Alistair Popple
2025-09-16 12:21 ` [v6 02/15] mm/huge_memory: add device-private THP support to PMD operations Balbir Singh
2025-09-18 18:45   ` Zi Yan
2025-09-19  4:51     ` Balbir Singh
2025-09-23  8:37       ` David Hildenbrand
2025-09-25  0:25   ` Alistair Popple
2025-09-25  9:53     ` David Hildenbrand
2025-09-26  1:53       ` Alistair Popple
2025-09-16 12:21 ` [v6 03/15] mm/rmap: extend rmap and migration support device-private entries Balbir Singh
2025-09-22 20:13   ` Zi Yan
2025-09-23  3:39     ` Balbir Singh
2025-09-24 10:46       ` David Hildenbrand
2025-09-16 12:21 ` [v6 04/15] mm/huge_memory: implement device-private THP splitting Balbir Singh
2025-09-22 21:09   ` Zi Yan
2025-09-23  1:50     ` Balbir Singh
2025-09-23  2:09       ` Zi Yan
2025-09-23  4:04         ` Balbir Singh
2025-09-23 16:08           ` Zi Yan
2025-09-25 10:06             ` David Hildenbrand
2025-09-25 10:01   ` David Hildenbrand
2025-09-25 11:13     ` Balbir Singh
2025-09-16 12:21 ` [v6 05/15] mm/migrate_device: handle partially mapped folios during collection Balbir Singh
2025-09-23  2:23   ` Zi Yan
2025-09-23  3:44     ` Balbir Singh
2025-09-23 15:56       ` Karim Manaouil
2025-09-24  4:47         ` Balbir Singh
2025-09-30 11:58         ` Balbir Singh
2025-09-16 12:21 ` [v6 06/15] mm/migrate_device: implement THP migration of zone device pages Balbir Singh
2025-09-16 12:21 ` [v6 07/15] mm/memory/fault: add THP fault handling for zone device private pages Balbir Singh
2025-09-25 10:11   ` David Hildenbrand
2025-09-30 12:00     ` Balbir Singh
2025-09-16 12:21 ` [v6 08/15] lib/test_hmm: add zone device private THP test infrastructure Balbir Singh
2025-09-16 12:21 ` [v6 09/15] mm/memremap: add driver callback support for folio splitting Balbir Singh
2025-09-16 12:21 ` [v6 10/15] mm/migrate_device: add THP splitting during migration Balbir Singh
2025-09-16 12:21 ` [v6 11/15] lib/test_hmm: add large page allocation failure testing Balbir Singh
2025-09-16 12:21 ` [v6 12/15] selftests/mm/hmm-tests: new tests for zone device THP migration Balbir Singh
2025-09-16 12:21 ` [v6 13/15] selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests Balbir Singh
2025-09-16 12:21 ` [v6 14/15] selftests/mm/hmm-tests: new throughput tests including THP Balbir Singh
2025-09-16 12:21 ` [v6 15/15] gpu/drm/nouveau: enable THP support for GPU memory migration Balbir Singh
