From: David Hildenbrand <david@redhat.com>
To: Balbir Singh <balbirs@nvidia.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org,
"Andrew Morton" <akpm@linux-foundation.org>,
"Zi Yan" <ziy@nvidia.com>,
"Joshua Hahn" <joshua.hahnjy@gmail.com>,
"Rakie Kim" <rakie.kim@sk.com>,
"Byungchul Park" <byungchul@sk.com>,
"Gregory Price" <gourry@gourry.net>,
"Ying Huang" <ying.huang@linux.alibaba.com>,
"Alistair Popple" <apopple@nvidia.com>,
"Oscar Salvador" <osalvador@suse.de>,
"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
"Baolin Wang" <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
"Nico Pache" <npache@redhat.com>,
"Ryan Roberts" <ryan.roberts@arm.com>,
"Dev Jain" <dev.jain@arm.com>, "Barry Song" <baohua@kernel.org>,
"Lyude Paul" <lyude@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Ralph Campbell" <rcampbell@nvidia.com>,
"Mika Penttilä" <mpenttil@redhat.com>,
"Matthew Brost" <matthew.brost@intel.com>,
"Francois Dugast" <francois.dugast@intel.com>
Subject: Re: [v5 04/15] mm/huge_memory: implement device-private THP splitting
Date: Thu, 11 Sep 2025 14:31:52 +0200
Message-ID: <ca592d4a-3e91-48ea-970e-a5ff12f215be@redhat.com>
In-Reply-To: <20250908000448.180088-5-balbirs@nvidia.com>

On 08.09.25 02:04, Balbir Singh wrote:
> Add support for splitting device-private THP folios, enabling fallback
> to smaller page sizes when large page allocation or migration fails.
>
> Key changes:
> - split_huge_pmd(): Handle device-private PMD entries during splitting
> - Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
> - Skip RMP_USE_SHARED_ZEROPAGE for device-private entries as they
> don't support shared zero page semantics
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
> Cc: Rakie Kim <rakie.kim@sk.com>
> Cc: Byungchul Park <byungchul@sk.com>
> Cc: Gregory Price <gourry@gourry.net>
> Cc: Ying Huang <ying.huang@linux.alibaba.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Mika Penttilä <mpenttil@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Francois Dugast <francois.dugast@intel.com>
>
> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
> ---
> mm/huge_memory.c | 129 +++++++++++++++++++++++++++++++++--------------
> 1 file changed, 91 insertions(+), 38 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 337d8e3dd837..b720870c04b2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2880,16 +2880,19 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> struct page *page;
> pgtable_t pgtable;
> pmd_t old_pmd, _pmd;
> - bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
> - bool anon_exclusive = false, dirty = false;
> + bool young, write, soft_dirty, uffd_wp = false;
> + bool anon_exclusive = false, dirty = false, present = false;
> unsigned long addr;
> pte_t *pte;
> int i;
> + swp_entry_t swp_entry;
>
> VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
> VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
> VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
> - VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
> +
> + VM_WARN_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd) &&
> + !is_pmd_device_private_entry(*pmd));
>
Indentation. But I do wonder if we want a helper to do a more efficient

	is_pmd_migration_entry() || is_pmd_device_private_entry()

If only I could come up with a good name ... any ideas?
is_non_present_folio_entry() maybe?

Well, there is device-exclusive ... but that would not be reachable on
these paths yet, if ever.
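Something like the following is what I have in mind -- just a rough
sketch, assuming the name above and only the existing swapops.h
helpers, decoding the swap entry once instead of twice:

	/*
	 * True for non-present PMD entries that still reference a folio,
	 * i.e., migration and device-private entries.
	 */
	static inline bool is_pmd_non_present_folio_entry(pmd_t pmd)
	{
		swp_entry_t entry;

		if (!is_swap_pmd(pmd))
			return false;

		entry = pmd_to_swp_entry(pmd);
		return is_migration_entry(entry) ||
		       is_device_private_entry(entry);
	}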
> count_vm_event(THP_SPLIT_PMD);
>
> @@ -2937,18 +2940,43 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> return __split_huge_zero_page_pmd(vma, haddr, pmd);
> }
>
> - pmd_migration = is_pmd_migration_entry(*pmd);
> - if (unlikely(pmd_migration)) {
> - swp_entry_t entry;
>
> + present = pmd_present(*pmd);
> + if (unlikely(!present)) {
I hate this whole function. But maybe in this case it's better to just
have here

	if (is_pmd_migration_entry(old_pmd)) {
		...
	} else if (is_pmd_device_private_entry(old_pmd)) {
		...
	}

There is not much shared code, and that helps reduce the indentation
level.
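Fleshed out a bit, as a rough sketch only (reusing the locals from the
patch, with the branch bodies elided):

	old_pmd = *pmd;
	if (is_pmd_migration_entry(old_pmd)) {
		swp_entry = pmd_to_swp_entry(old_pmd);
		page = pfn_swap_entry_to_page(swp_entry);
		folio = pfn_swap_entry_folio(swp_entry);
		/* write/anon_exclusive/young/dirty from the migration entry */
	} else if (is_pmd_device_private_entry(old_pmd)) {
		swp_entry = pmd_to_swp_entry(old_pmd);
		page = pfn_swap_entry_to_page(swp_entry);
		folio = pfn_swap_entry_folio(swp_entry);
		/* device-private flags + anon rmap handling for !freeze */
	} else {
		/* present PMD, unchanged */
	}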
> + swp_entry = pmd_to_swp_entry(*pmd);
> old_pmd = *pmd;
> - entry = pmd_to_swp_entry(old_pmd);
> - page = pfn_swap_entry_to_page(entry);
> - write = is_writable_migration_entry(entry);
> - if (PageAnon(page))
> - anon_exclusive = is_readable_exclusive_migration_entry(entry);
> - young = is_migration_entry_young(entry);
> - dirty = is_migration_entry_dirty(entry);
> +
> + folio = pfn_swap_entry_folio(swp_entry);
> + VM_WARN_ON(!is_migration_entry(swp_entry) &&
> + !is_device_private_entry(swp_entry));
Indentation.
> + page = pfn_swap_entry_to_page(swp_entry);
> +
> + if (is_pmd_migration_entry(old_pmd)) {
> + write = is_writable_migration_entry(swp_entry);
> + if (PageAnon(page))
> + anon_exclusive =
> + is_readable_exclusive_migration_entry(
> + swp_entry);
Single line please, this is unreadable.
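I.e., just

	anon_exclusive = is_readable_exclusive_migration_entry(swp_entry);

Going over 80 columns here is the lesser evil, and still within the
100-column limit.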
> + young = is_migration_entry_young(swp_entry);
> + dirty = is_migration_entry_dirty(swp_entry);
> + } else if (is_pmd_device_private_entry(old_pmd)) {
> + write = is_writable_device_private_entry(swp_entry);
> + anon_exclusive = PageAnonExclusive(page);
> + if (freeze && anon_exclusive &&
> + folio_try_share_anon_rmap_pmd(folio, page))
> + freeze = false;
> + if (!freeze) {
> + rmap_t rmap_flags = RMAP_NONE;
> +
> + folio_ref_add(folio, HPAGE_PMD_NR - 1);
> + if (anon_exclusive)
> + rmap_flags |= RMAP_EXCLUSIVE;
> +
> + folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
> + vma, haddr, rmap_flags);
> + }
> + }
> +
> soft_dirty = pmd_swp_soft_dirty(old_pmd);
> uffd_wp = pmd_swp_uffd_wp(old_pmd);
> } else {
> @@ -3034,30 +3062,49 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> * Note that NUMA hinting access restrictions are not transferred to
> * avoid any possibility of altering permissions across VMAs.
> */
> - if (freeze || pmd_migration) {
> + if (freeze || !present) {
Here too, I wonder if we should just handle device-private completely
separately for now.
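As a sketch of what I mean (assuming the freeze case keeps converting
device-private entries to migration entries, as this patch does):

	if (!freeze && !present && is_device_private_entry(swp_entry)) {
		/* dedicated loop: only install device-private PTEs */
		for (i = 0, addr = haddr; i < HPAGE_PMD_NR;
		     i++, addr += PAGE_SIZE) {
			pte_t entry;

			if (write)
				swp_entry = make_writable_device_private_entry(
						page_to_pfn(page + i));
			else
				swp_entry = make_readable_device_private_entry(
						page_to_pfn(page + i));
			entry = swp_entry_to_pte(swp_entry);
			if (soft_dirty)
				entry = pte_swp_mksoft_dirty(entry);
			if (uffd_wp)
				entry = pte_swp_mkuffd_wp(entry);
			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
			set_pte_at(mm, addr, pte + i, entry);
		}
	} else if (freeze || !present) {
		/* migration-entry loop, essentially as before */
	}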
> for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
> pte_t entry;
> - swp_entry_t swp_entry;
> -
> - if (write)
> - swp_entry = make_writable_migration_entry(
> - page_to_pfn(page + i));
> - else if (anon_exclusive)
> - swp_entry = make_readable_exclusive_migration_entry(
> - page_to_pfn(page + i));
> - else
> - swp_entry = make_readable_migration_entry(
> - page_to_pfn(page + i));
> - if (young)
> - swp_entry = make_migration_entry_young(swp_entry);
> - if (dirty)
> - swp_entry = make_migration_entry_dirty(swp_entry);
> - entry = swp_entry_to_pte(swp_entry);
> - if (soft_dirty)
> - entry = pte_swp_mksoft_dirty(entry);
> - if (uffd_wp)
> - entry = pte_swp_mkuffd_wp(entry);
> -
> + if (freeze || is_migration_entry(swp_entry)) {
> + if (write)
> + swp_entry = make_writable_migration_entry(
> + page_to_pfn(page + i));
> + else if (anon_exclusive)
> + swp_entry = make_readable_exclusive_migration_entry(
> + page_to_pfn(page + i));
> + else
> + swp_entry = make_readable_migration_entry(
> + page_to_pfn(page + i));
> + if (young)
> + swp_entry = make_migration_entry_young(swp_entry);
> + if (dirty)
> + swp_entry = make_migration_entry_dirty(swp_entry);
> + entry = swp_entry_to_pte(swp_entry);
> + if (soft_dirty)
> + entry = pte_swp_mksoft_dirty(entry);
> + if (uffd_wp)
> + entry = pte_swp_mkuffd_wp(entry);
> + } else {
> + /*
> + * anon_exclusive was already propagated to the relevant
> + * pages corresponding to the pte entries when freeze
> + * is false.
> + */
> + if (write)
> + swp_entry = make_writable_device_private_entry(
> + page_to_pfn(page + i));
> + else
> + swp_entry = make_readable_device_private_entry(
> + page_to_pfn(page + i));
> + /*
> + * Young and dirty bits are not propagated via swp_entry
> + */
> + entry = swp_entry_to_pte(swp_entry);
> + if (soft_dirty)
> + entry = pte_swp_mksoft_dirty(entry);
> + if (uffd_wp)
> + entry = pte_swp_mkuffd_wp(entry);
> + }
> VM_WARN_ON(!pte_none(ptep_get(pte + i)));
> set_pte_at(mm, addr, pte + i, entry);
> }
> @@ -3084,7 +3131,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> }
> pte_unmap(pte);
>
> - if (!pmd_migration)
> + if (!is_pmd_migration_entry(*pmd))
> folio_remove_rmap_pmd(folio, page, vma);
> if (freeze)
> put_page(page);
> @@ -3096,8 +3143,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
> pmd_t *pmd, bool freeze)
> {
> +
Unrelated change.
--
Cheers
David / dhildenb