From: Matthew Brost <matthew.brost@intel.com>
To: "Mika Penttilä" <mpenttil@redhat.com>
Cc: Balbir Singh <balbirs@nvidia.com>,
	<dri-devel@lists.freedesktop.org>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>, Zi Yan <ziy@nvidia.com>,
	Joshua Hahn <joshua.hahnjy@gmail.com>,
	Rakie Kim <rakie.kim@sk.com>, Byungchul Park <byungchul@sk.com>,
	Gregory Price <gourry@gourry.net>,
	Ying Huang <ying.huang@linux.alibaba.com>,
	"Alistair Popple" <apopple@nvidia.com>,
	Oscar Salvador <osalvador@suse.de>,
	"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Nico Pache <npache@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	Barry Song <baohua@kernel.org>, Lyude Paul <lyude@redhat.com>,
	Danilo Krummrich <dakr@kernel.org>,
	David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
	Ralph Campbell <rcampbell@nvidia.com>,
	Francois Dugast <francois.dugast@intel.com>
Subject: Re: [v3 03/11] mm/migrate_device: THP migration of zone device pages
Date: Mon, 11 Aug 2025 23:18:38 -0700
Message-ID: <aJrcvpx5tiSKD88x@lstrano-desk.jf.intel.com>
In-Reply-To: <aJrW/JUBhdlL2Kur@lstrano-desk.jf.intel.com>

On Mon, Aug 11, 2025 at 10:54:04PM -0700, Matthew Brost wrote:
> On Tue, Aug 12, 2025 at 08:35:49AM +0300, Mika Penttilä wrote:
> > Hi,
> > 
> > On 8/12/25 05:40, Balbir Singh wrote:
> > 
> > > MIGRATE_VMA_SELECT_COMPOUND will be used to select THP pages during
> > > migrate_vma_setup(), and MIGRATE_PFN_COMPOUND will cause device pages
> > > to be migrated as compound pages during device pfn migration.
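
As a quick illustration of the driver-side opt-in (my sketch, not code
from this patch; the field values are stand-ins):

	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src_pfns,
		.dst		= dst_pfns,
		.pgmap_owner	= pgmap_owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM |
				  MIGRATE_VMA_SELECT_COMPOUND,
	};
	int ret = migrate_vma_setup(&args);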
> > >
> > > migrate_device code paths go through the collect, setup
> > > and finalize phases of migration.
> > >
> > > The entries in the src and dst arrays passed to these functions still
> > > remain at PAGE_SIZE granularity. When a compound page is passed, the
> > > first entry has the PFN along with MIGRATE_PFN_COMPOUND and the other
> > > flags set (MIGRATE_PFN_MIGRATE, MIGRATE_PFN_VALID); the remaining
> > > (HPAGE_PMD_NR - 1) entries are filled with 0's. This representation
> > > allows the compound page to be split into smaller page sizes.
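
To make that layout concrete (again my sketch, not code from the patch;
'pfn' and 'write' are stand-ins matching the collection hunk further down):

	src[0] = migrate_pfn(pfn) | write | MIGRATE_PFN_MIGRATE |
		 MIGRATE_PFN_COMPOUND;
	for (unsigned long i = 1; i < HPAGE_PMD_NR; i++)
		src[i] = 0;	/* per-page slots, reused if the folio is split */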
> > >
> > > migrate_vma_collect_hole(), migrate_vma_collect_pmd() are now THP
> > > page aware. Two new helper functions migrate_vma_collect_huge_pmd()
> > > and migrate_vma_insert_huge_pmd_page() have been added.
> > >
> > > migrate_vma_collect_huge_pmd() can collect THP pages, but if for
> > > some reason this fails, there is fallback support to split the folio
> > > and migrate it.
> > >
> > > migrate_vma_insert_huge_pmd_page() closely follows the logic of
> > > migrate_vma_insert_page().
> > >
> > > Support for splitting pages as needed for migration will follow in
> > > later patches in this series.
> > >
> > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > Cc: David Hildenbrand <david@redhat.com>
> > > Cc: Zi Yan <ziy@nvidia.com>
> > > Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> > > Cc: Rakie Kim <rakie.kim@sk.com>
> > > Cc: Byungchul Park <byungchul@sk.com>
> > > Cc: Gregory Price <gourry@gourry.net>
> > > Cc: Ying Huang <ying.huang@linux.alibaba.com>
> > > Cc: Alistair Popple <apopple@nvidia.com>
> > > Cc: Oscar Salvador <osalvador@suse.de>
> > > Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > > Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> > > Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> > > Cc: Nico Pache <npache@redhat.com>
> > > Cc: Ryan Roberts <ryan.roberts@arm.com>
> > > Cc: Dev Jain <dev.jain@arm.com>
> > > Cc: Barry Song <baohua@kernel.org>
> > > Cc: Lyude Paul <lyude@redhat.com>
> > > Cc: Danilo Krummrich <dakr@kernel.org>
> > > Cc: David Airlie <airlied@gmail.com>
> > > Cc: Simona Vetter <simona@ffwll.ch>
> > > Cc: Ralph Campbell <rcampbell@nvidia.com>
> > > Cc: Mika Penttilä <mpenttil@redhat.com>
> > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > Cc: Francois Dugast <francois.dugast@intel.com>
> > >
> > > Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> > > ---
> > >  include/linux/migrate.h |   2 +
> > >  mm/migrate_device.c     | 457 ++++++++++++++++++++++++++++++++++------
> > >  2 files changed, 396 insertions(+), 63 deletions(-)
> > >
> > > diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> > > index acadd41e0b5c..d9cef0819f91 100644
> > > --- a/include/linux/migrate.h
> > > +++ b/include/linux/migrate.h
> > > @@ -129,6 +129,7 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
> > >  #define MIGRATE_PFN_VALID	(1UL << 0)
> > >  #define MIGRATE_PFN_MIGRATE	(1UL << 1)
> > >  #define MIGRATE_PFN_WRITE	(1UL << 3)
> > > +#define MIGRATE_PFN_COMPOUND	(1UL << 4)
> > >  #define MIGRATE_PFN_SHIFT	6
> > >  
> > >  static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
> > > @@ -147,6 +148,7 @@ enum migrate_vma_direction {
> > >  	MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
> > >  	MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
> > >  	MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
> > > +	MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
> > >  };
> > >  
> > >  struct migrate_vma {
> > > diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> > > index 0ed337f94fcd..6621bba62710 100644
> > > --- a/mm/migrate_device.c
> > > +++ b/mm/migrate_device.c
> > > @@ -14,6 +14,7 @@
> > >  #include <linux/pagewalk.h>
> > >  #include <linux/rmap.h>
> > >  #include <linux/swapops.h>
> > > +#include <asm/pgalloc.h>
> > >  #include <asm/tlbflush.h>
> > >  #include "internal.h"
> > >  
> > > @@ -44,6 +45,23 @@ static int migrate_vma_collect_hole(unsigned long start,
> > >  	if (!vma_is_anonymous(walk->vma))
> > >  		return migrate_vma_collect_skip(start, end, walk);
> > >  
> > > +	if (thp_migration_supported() &&
> > > +		(migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
> > > +		(IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
> > > +		 IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
> > > +		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
> > > +						MIGRATE_PFN_COMPOUND;
> > > +		migrate->dst[migrate->npages] = 0;
> > > +		migrate->npages++;
> > > +		migrate->cpages++;
> > > +
> > > +		/*
> > > +		 * Collect the remaining entries as holes, in case we
> > > +		 * need to split later
> > > +		 */
> > > +		return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
> > > +	}
> > > +
> > >  	for (addr = start; addr < end; addr += PAGE_SIZE) {
> > >  		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
> > >  		migrate->dst[migrate->npages] = 0;
> > > @@ -54,57 +72,151 @@ static int migrate_vma_collect_hole(unsigned long start,
> > >  	return 0;
> > >  }
> > >  
> > > -static int migrate_vma_collect_pmd(pmd_t *pmdp,
> > > -				   unsigned long start,
> > > -				   unsigned long end,
> > > -				   struct mm_walk *walk)
> > > +/**
> > > + * migrate_vma_collect_huge_pmd - collect THP pages without splitting the
> > > + * folio for device private pages.
> > > + * @pmdp: pointer to pmd entry
> > > + * @start: start address of the range for migration
> > > + * @end: end address of the range for migration
> > > + * @walk: mm_walk callback structure
> > > + *
> > > + * Collect the huge pmd entry at @pmdp for migration and set the
> > > + * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that
> > > + * migration will occur at HPAGE_PMD granularity
> > > + */
> > > +static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
> > > +					unsigned long end, struct mm_walk *walk,
> > > +					struct folio *fault_folio)
> > >  {
> > > +	struct mm_struct *mm = walk->mm;
> > > +	struct folio *folio;
> > >  	struct migrate_vma *migrate = walk->private;
> > > -	struct folio *fault_folio = migrate->fault_page ?
> > > -		page_folio(migrate->fault_page) : NULL;
> > > -	struct vm_area_struct *vma = walk->vma;
> > > -	struct mm_struct *mm = vma->vm_mm;
> > > -	unsigned long addr = start, unmapped = 0;
> > >  	spinlock_t *ptl;
> > > -	pte_t *ptep;
> > > +	swp_entry_t entry;
> > > +	int ret;
> > > +	unsigned long write = 0;
> > >  
> > > -again:
> > > -	if (pmd_none(*pmdp))
> > > +	ptl = pmd_lock(mm, pmdp);
> > > +	if (pmd_none(*pmdp)) {
> > > +		spin_unlock(ptl);
> > >  		return migrate_vma_collect_hole(start, end, -1, walk);
> > > +	}
> > >  
> > >  	if (pmd_trans_huge(*pmdp)) {
> > > -		struct folio *folio;
> > > -
> > > -		ptl = pmd_lock(mm, pmdp);
> > > -		if (unlikely(!pmd_trans_huge(*pmdp))) {
> > > +		if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
> > >  			spin_unlock(ptl);
> > > -			goto again;
> > > +			return migrate_vma_collect_skip(start, end, walk);
> > >  		}
> > >  
> > >  		folio = pmd_folio(*pmdp);
> > >  		if (is_huge_zero_folio(folio)) {
> > >  			spin_unlock(ptl);
> > > -			split_huge_pmd(vma, pmdp, addr);
> > > -		} else {
> > > -			int ret;
> > > +			return migrate_vma_collect_hole(start, end, -1, walk);
> > > +		}
> > > +		if (pmd_write(*pmdp))
> > > +			write = MIGRATE_PFN_WRITE;
> > > +	} else if (!pmd_present(*pmdp)) {
> > > +		entry = pmd_to_swp_entry(*pmdp);
> > > +		folio = pfn_swap_entry_folio(entry);
> > > +
> > > +		if (!is_device_private_entry(entry) ||
> > > +			!(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
> > > +			(folio->pgmap->owner != migrate->pgmap_owner)) {
> > > +			spin_unlock(ptl);
> > > +			return migrate_vma_collect_skip(start, end, walk);
> > > +		}
> > >  
> > > -			folio_get(folio);
> > > +		if (is_migration_entry(entry)) {
> > > +			migration_entry_wait_on_locked(entry, ptl);
> > >  			spin_unlock(ptl);
> > > -			/* FIXME: we don't expect THP for fault_folio */
> > > -			if (WARN_ON_ONCE(fault_folio == folio))
> > > -				return migrate_vma_collect_skip(start, end,
> > > -								walk);
> > > -			if (unlikely(!folio_trylock(folio)))
> > > -				return migrate_vma_collect_skip(start, end,
> > > -								walk);
> > > -			ret = split_folio(folio);
> > > -			if (fault_folio != folio)
> > > -				folio_unlock(folio);
> > > -			folio_put(folio);
> > > -			if (ret)
> > > -				return migrate_vma_collect_skip(start, end,
> > > -								walk);
> > > +			return -EAGAIN;
> > >  		}
> > > +
> > > +		if (is_writable_device_private_entry(entry))
> > > +			write = MIGRATE_PFN_WRITE;
> > > +	} else {
> > > +		spin_unlock(ptl);
> > > +		return -EAGAIN;
> > > +	}
> > > +
> > > +	folio_get(folio);
> > > +	if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
> > > +		spin_unlock(ptl);
> > > +		folio_put(folio);
> > > +		return migrate_vma_collect_skip(start, end, walk);
> > > +	}
> > > +
> > > +	if (thp_migration_supported() &&
> > > +		(migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
> > > +		(IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
> > > +		 IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
> > > +
> > > +		struct page_vma_mapped_walk pvmw = {
> > > +			.ptl = ptl,
> > > +			.address = start,
> > > +			.pmd = pmdp,
> > > +			.vma = walk->vma,
> > > +		};
> > > +
> > > +		unsigned long pfn = page_to_pfn(folio_page(folio, 0));
> > > +
> > > +		migrate->src[migrate->npages] = migrate_pfn(pfn) | write
> > > +						| MIGRATE_PFN_MIGRATE
> > > +						| MIGRATE_PFN_COMPOUND;
> > > +		migrate->dst[migrate->npages++] = 0;
> > > +		migrate->cpages++;
> > > +		ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
> > > +		if (ret) {
> > > +			migrate->npages--;
> > > +			migrate->cpages--;
> > > +			migrate->src[migrate->npages] = 0;
> > > +			migrate->dst[migrate->npages] = 0;
> > > +			goto fallback;
> > > +		}
> > > +		migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
> > > +		spin_unlock(ptl);
> > > +		return 0;
> > > +	}
> > > +
> > > +fallback:
> > > +	spin_unlock(ptl);
> > > +	if (!folio_test_large(folio))
> > > +		goto done;
> > > +	ret = split_folio(folio);
> > > +	if (fault_folio != folio)
> > > +		folio_unlock(folio);
> > > +	folio_put(folio);
> > > +	if (ret)
> > > +		return migrate_vma_collect_skip(start, end, walk);
> > > +	if (pmd_none(pmdp_get_lockless(pmdp)))
> > > +		return migrate_vma_collect_hole(start, end, -1, walk);
> > > +
> > > +done:
> > > +	return -ENOENT;
> > > +}
> > > +
> > > +static int migrate_vma_collect_pmd(pmd_t *pmdp,
> > > +				   unsigned long start,
> > > +				   unsigned long end,
> > > +				   struct mm_walk *walk)
> > > +{
> > > +	struct migrate_vma *migrate = walk->private;
> > > +	struct vm_area_struct *vma = walk->vma;
> > > +	struct mm_struct *mm = vma->vm_mm;
> > > +	unsigned long addr = start, unmapped = 0;
> > > +	spinlock_t *ptl;
> > > +	struct folio *fault_folio = migrate->fault_page ?
> > > +		page_folio(migrate->fault_page) : NULL;
> > > +	pte_t *ptep;
> > > +
> > > +again:
> > > +	if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) {
> > > +		int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio);
> > > +
> > > +		if (ret == -EAGAIN)
> > > +			goto again;
> > > +		if (ret == 0)
> > > +			return 0;
> > >  	}
> > >  
> > >  	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
> > > @@ -222,8 +334,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
> > >  			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
> > >  		}
> > >  
> > > -		/* FIXME support THP */
> > > -		if (!page || !page->mapping || PageTransCompound(page)) {
> > > +		if (!page || !page->mapping) {
> > >  			mpfn = 0;
> > >  			goto next;
> > >  		}
> > > @@ -394,14 +505,6 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
> > >  	 */
> > >  	int extra = 1 + (page == fault_page);
> > >  
> > > -	/*
> > > -	 * FIXME support THP (transparent huge page), it is bit more complex to
> > > -	 * check them than regular pages, because they can be mapped with a pmd
> > > -	 * or with a pte (split pte mapping).
> > > -	 */
> > > -	if (folio_test_large(folio))
> > > -		return false;
> > > -
> > 
> > You cannot remove this check unless normal mTHP folios are supported for
> > migration to the device, which I think this series doesn't do, but maybe
> > should?

I also agree that, eventually, what I detail below (collecting mTHPs or
THPs mapped in PTEs) should be supported without a split, albeit enabled by
a different migrate_vma_direction flag than the one introduced in this
series.

Reasoning for a different flag: handling of mTHP/THP at the upper layers
(drivers) will have to be adjusted if the THP doesn't align to a PMD. A
fairly minor point, but the upper layers should retain control over whether
mTHP/THP that is not PMD-aligned gets collected.
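
Roughly like this; the first four values are from the hunk above, the
fifth is purely hypothetical:

	enum migrate_vma_direction {
		MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
		MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
		MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
		MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
		/* hypothetical: opt in to non-PMD-aligned mTHP/THP collection */
		MIGRATE_VMA_SELECT_MTHP = 1 << 4,
	};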

I also just noticed an interface problem: struct migrate_vma does not type
its flags field as enum migrate_vma_direction, but I digress.
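
That is, as I read include/linux/migrate.h (an elided sketch, not the full
definition):

	struct migrate_vma {
		...
		/* carries enum migrate_vma_direction bits, but is untyped */
		unsigned long		flags;
		...
	};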

Matt

> > 
> 
> Currently, mTHP should be split upon collection, right? The only way a
> THP should be collected is if it directly maps to a PMD. If a THP or
> mTHP is found in PTEs (i.e., in migrate_vma_collect_pmd outside of
> migrate_vma_collect_huge_pmd), it should be split there. I sent this
> logic to Balbir privately, but it appears to have been omitted.
> 
> I’m quite sure this missing split is actually an upstream bug, but it has
> been suppressed by PMDs being split upon device fault. I have a test that
> performs a ton of complete mremaps (nonsense no one would normally do, but
> which should work) that exposed this. I can rebase on this series and see
> if the bug appears, or try the same nonsense without the device faulting
> first and splitting the pages, to trigger the bug.
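
For concreteness, the split I mean would mirror the folio_get() /
folio_trylock() / split_folio() pattern this patch removes from the
huge-pmd path, applied in the PTE loop of migrate_vma_collect_pmd(). A
sketch of my reading (locking details approximate), not the omitted code
itself:

	struct folio *folio = page_folio(page);

	if (folio_test_large(folio)) {
		int ret;

		folio_get(folio);
		pte_unmap_unlock(ptep, ptl);	/* cannot split under the PTE lock */
		if (folio_trylock(folio)) {
			ret = split_folio(folio);
			folio_unlock(folio);
		} else {
			ret = -EBUSY;
		}
		folio_put(folio);
		if (ret)
			return migrate_vma_collect_skip(addr, end, walk);
		goto again;	/* re-walk now that the folio is split */
	}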
> 
> Matt
> 
> > --Mika
> > 


Thread overview: 36+ messages
2025-08-12  2:40 [v3 00/11] mm: support device-private THP Balbir Singh
2025-08-12  2:40 ` [v3 01/11] mm/zone_device: support large zone device private folios Balbir Singh
2025-08-26 14:22   ` David Hildenbrand
2025-08-12  2:40 ` [v3 02/11] mm/thp: zone_device awareness in THP handling code Balbir Singh
2025-08-12 14:47   ` kernel test robot
2025-08-26 15:19   ` David Hildenbrand
2025-08-27 10:14     ` Balbir Singh
2025-08-27 11:28       ` David Hildenbrand
2025-08-28 20:05   ` Matthew Brost
2025-08-28 20:12     ` David Hildenbrand
2025-08-28 20:17       ` Matthew Brost
2025-08-28 20:22         ` David Hildenbrand
2025-08-12  2:40 ` [v3 03/11] mm/migrate_device: THP migration of zone device pages Balbir Singh
2025-08-12  5:35   ` Mika Penttilä
2025-08-12  5:54     ` Matthew Brost
2025-08-12  6:18       ` Matthew Brost [this message]
2025-08-12  6:25       ` Mika Penttilä
2025-08-12  6:33         ` Matthew Brost
2025-08-12  6:37           ` Mika Penttilä
2025-08-12 23:36     ` Balbir Singh
2025-08-13  0:07       ` Mika Penttilä
2025-08-14 22:51         ` Balbir Singh
2025-08-15  0:04           ` Matthew Brost
2025-08-15 12:09             ` Balbir Singh
2025-08-21 10:24             ` Balbir Singh
2025-08-28 23:14               ` Matthew Brost
2025-08-12  2:40 ` [v3 04/11] mm/memory/fault: add support for zone device THP fault handling Balbir Singh
2025-08-12  2:40 ` [v3 05/11] lib/test_hmm: test cases and support for zone device private THP Balbir Singh
2025-08-12  2:40 ` [v3 06/11] mm/memremap: add folio_split support Balbir Singh
2025-08-12  2:40 ` [v3 07/11] mm/thp: add split during migration support Balbir Singh
2025-08-27 20:29   ` David Hildenbrand
2025-08-12  2:40 ` [v3 08/11] lib/test_hmm: add test case for split pages Balbir Singh
2025-08-12  2:40 ` [v3 09/11] selftests/mm/hmm-tests: new tests for zone device THP migration Balbir Singh
2025-08-12  2:40 ` [v3 10/11] gpu/drm/nouveau: add THP migration support Balbir Singh
2025-08-13  2:23   ` kernel test robot
2025-08-12  2:40 ` [v3 11/11] selftests/mm/hmm-tests: new throughput tests including THP Balbir Singh
