From: Lance Yang <lance.yang@linux.dev>
To: Balbir Singh <balbirs@nvidia.com>
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	"David Hildenbrand" <david@redhat.com>, "Zi Yan" <ziy@nvidia.com>,
	"Joshua Hahn" <joshua.hahnjy@gmail.com>,
	"Rakie Kim" <rakie.kim@sk.com>,
	"Byungchul Park" <byungchul@sk.com>,
	"Gregory Price" <gourry@gourry.net>,
	"Ying Huang" <ying.huang@linux.alibaba.com>,
	"Alistair Popple" <apopple@nvidia.com>,
	"Oscar Salvador" <osalvador@suse.de>,
	"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
	"Baolin Wang" <baolin.wang@linux.alibaba.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	"Nico Pache" <npache@redhat.com>,
	"Ryan Roberts" <ryan.roberts@arm.com>,
	"Dev Jain" <dev.jain@arm.com>, "Barry Song" <baohua@kernel.org>,
	"Lyude Paul" <lyude@redhat.com>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"David Airlie" <airlied@gmail.com>,
	"Simona Vetter" <simona@ffwll.ch>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Mika Penttilä" <mpenttil@redhat.com>,
	"Matthew Brost" <matthew.brost@intel.com>,
	"Francois Dugast" <francois.dugast@intel.com>,
	"SeongJae Park" <sj@kernel.org>
Subject: Re: [v7 04/16] mm/rmap: extend rmap and migration support for device-private entries
Date: Wed, 22 Oct 2025 19:54:28 +0800	[thread overview]
Message-ID: <CABzRoyZZ8QLF5PSeDCVxgcnQmF9kFQ3RZdNq0Deik3o9OrK+BQ@mail.gmail.com> (raw)
In-Reply-To: <20251001065707.920170-5-balbirs@nvidia.com>

On Wed, Oct 1, 2025 at 3:25 PM Balbir Singh <balbirs@nvidia.com> wrote:
>
> Add device-private THP support to the reverse mapping infrastructure, enabling
> proper handling during migration and walk operations.
>
> The key changes are:
> - set_pmd_migration_entry()/remove_migration_pmd(): Handle device-private
>   entries during folio migration and splitting
> - page_vma_mapped_walk(): Recognize device-private THP entries during
>   VMA traversal operations
>
> This change supports folio splitting and migration operations on
> device-private entries.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
> Acked-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> Reviewed-by: SeongJae Park <sj@kernel.org>
> ---
>  mm/damon/ops-common.c | 20 +++++++++++++++++---
>  mm/huge_memory.c      | 16 +++++++++++++++-
>  mm/page_idle.c        |  7 +++++--
>  mm/page_vma_mapped.c  |  7 +++++++
>  mm/rmap.c             | 24 ++++++++++++++++++++----
>  5 files changed, 64 insertions(+), 10 deletions(-)
>
> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
> index 998c5180a603..ac54bf5b2623 100644
> --- a/mm/damon/ops-common.c
> +++ b/mm/damon/ops-common.c
> @@ -75,12 +75,24 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
>  void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -       struct folio *folio = damon_get_folio(pmd_pfn(pmdp_get(pmd)));
> +       pmd_t pmdval = pmdp_get(pmd);
> +       struct folio *folio;
> +       bool young = false;
> +       unsigned long pfn;
> +
> +       if (likely(pmd_present(pmdval)))
> +               pfn = pmd_pfn(pmdval);
> +       else
> +               pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
>
> +       folio = damon_get_folio(pfn);
>         if (!folio)
>                 return;
>
> -       if (pmdp_clear_young_notify(vma, addr, pmd))
> +       if (likely(pmd_present(pmdval)))
> +               young |= pmdp_clear_young_notify(vma, addr, pmd);
> +       young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + HPAGE_PMD_SIZE);
> +       if (young)
>                 folio_set_young(folio);
>
>         folio_set_idle(folio);
> @@ -203,7 +215,9 @@ static bool damon_folio_young_one(struct folio *folio,
>                                 mmu_notifier_test_young(vma->vm_mm, addr);
>                 } else {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -                       *accessed = pmd_young(pmdp_get(pvmw.pmd)) ||
> +                       pmd_t pmd = pmdp_get(pvmw.pmd);
> +
> +                       *accessed = (pmd_present(pmd) && pmd_young(pmd)) ||
>                                 !folio_test_idle(folio) ||
>                                 mmu_notifier_test_young(vma->vm_mm, addr);
>  #else
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 8e0a1747762d..483b8341ce22 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -4628,7 +4628,10 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>                 return 0;
>
>         flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
> -       pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
> +       if (unlikely(!pmd_present(*pvmw->pmd)))
> +               pmdval = pmdp_huge_get_and_clear(vma->vm_mm, address, pvmw->pmd);
> +       else
> +               pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
>
>         /* See folio_try_share_anon_rmap_pmd(): invalidate PMD first. */
>         anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(page);
> @@ -4678,6 +4681,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
>         entry = pmd_to_swp_entry(*pvmw->pmd);
>         folio_get(folio);
>         pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
> +
> +       if (folio_is_device_private(folio)) {
> +               if (pmd_write(pmde))
> +                       entry = make_writable_device_private_entry(
> +                                                       page_to_pfn(new));
> +               else
> +                       entry = make_readable_device_private_entry(
> +                                                       page_to_pfn(new));
> +               pmde = swp_entry_to_pmd(entry);
> +       }
> +
>         if (pmd_swp_soft_dirty(*pvmw->pmd))
>                 pmde = pmd_mksoft_dirty(pmde);
>         if (is_writable_migration_entry(entry))
> diff --git a/mm/page_idle.c b/mm/page_idle.c
> index a82b340dc204..d4299de81031 100644
> --- a/mm/page_idle.c
> +++ b/mm/page_idle.c
> @@ -71,8 +71,11 @@ static bool page_idle_clear_pte_refs_one(struct folio *folio,
>                                 referenced |= ptep_test_and_clear_young(vma, addr, pvmw.pte);
>                         referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
>                 } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> -                       if (pmdp_clear_young_notify(vma, addr, pvmw.pmd))
> -                               referenced = true;
> +                       pmd_t pmdval = pmdp_get(pvmw.pmd);
> +
> +                       if (likely(pmd_present(pmdval)))
> +                               referenced |= pmdp_clear_young_notify(vma, addr, pvmw.pmd);
> +                       referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PMD_SIZE);
>                 } else {
>                         /* unexpected pmd-mapped page? */
>                         WARN_ON_ONCE(1);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index c498a91b6706..137ce27ff68c 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -277,6 +277,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>                          * cannot return prematurely, while zap_huge_pmd() has
>                          * cleared *pmd but not decremented compound_mapcount().
>                          */
> +                       swp_entry_t entry = pmd_to_swp_entry(pmde);
> +
> +                       if (is_device_private_entry(entry)) {
> +                               pvmw->ptl = pmd_lock(mm, pvmw->pmd);
> +                               return true;
> +                       }
> +

We could make this simpler:

                        if (is_device_private_entry(pmd_to_swp_entry(pmde))) {
                                pvmw->ptl = pmd_lock(mm, pvmw->pmd);
                                return true;
                        }
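
That also drops the local "entry", which has no other use in this
branch.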

Thanks,
Lance

>                         if ((pvmw->flags & PVMW_SYNC) &&
>                             thp_vma_suitable_order(vma, pvmw->address,
>                                                    PMD_ORDER) &&
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 9bab13429975..c3fc30cf3636 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1046,9 +1046,16 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>                 } else {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>                         pmd_t *pmd = pvmw->pmd;
> -                       pmd_t entry;
> +                       pmd_t entry = pmdp_get(pmd);
>
> -                       if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
> +                       /*
> +                        * Please see the comment above (!pte_present).
> +                        * A non present PMD is not writable from a CPU
> +                        * perspective.
> +                        */
> +                       if (!pmd_present(entry))
> +                               continue;
> +                       if (!pmd_dirty(entry) && !pmd_write(entry))
>                                 continue;
>
>                         flush_cache_range(vma, address,
> @@ -2343,6 +2350,9 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>         while (page_vma_mapped_walk(&pvmw)) {
>                 /* PMD-mapped THP migration entry */
>                 if (!pvmw.pte) {
> +                       __maybe_unused unsigned long pfn;
> +                       __maybe_unused pmd_t pmdval;
> +
>                         if (flags & TTU_SPLIT_HUGE_PMD) {
>                                 split_huge_pmd_locked(vma, pvmw.address,
>                                                       pvmw.pmd, true);
> @@ -2351,8 +2361,14 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>                                 break;
>                         }
>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> -                       subpage = folio_page(folio,
> -                               pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
> +                       pmdval = pmdp_get(pvmw.pmd);
> +                       if (likely(pmd_present(pmdval)))
> +                               pfn = pmd_pfn(pmdval);
> +                       else
> +                               pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
> +
> +                       subpage = folio_page(folio, pfn - folio_pfn(folio));
> +
>                         VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
>                                         !folio_test_pmd_mappable(folio), folio);
>
> --
> 2.51.0
>
>

