From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, Zi Yan <ziy@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
Liam.Howlett@oracle.com,
Sidhartha Kumar <sidhartha.kumar@oracle.com>
Subject: Re: [PATCH v4 4/4] mm: huge_memory: use folio_needs_prot_numa() for pmd folio
Date: Tue, 21 Oct 2025 16:29:46 +0100
Message-ID: <8ed25b3a-58a0-4e8b-84c3-2d787f9ae636@lucifer.local>
In-Reply-To: <2436956a-d5d2-476d-9117-a06fae5d788d@huawei.com>
On Mon, Oct 20, 2025 at 11:18:27PM +0800, Kefeng Wang wrote:
>
>
> On 2025/10/20 21:23, David Hildenbrand wrote:
> >
> > > >  		/*
> > > >  		 * Avoid trapping faults against the zero page. The read-only
> > > >  		 * data is likely to be read-cached on the local CPU and
> > > > @@ -2490,19 +2490,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > > >  		if (pmd_protnone(*pmd))
> > > >  			goto unlock;
> > > >
> > > > -		folio = pmd_folio(*pmd);
> > > > -		toptier = node_is_toptier(folio_nid(folio));
> > > > -		/*
> > > > -		 * Skip scanning top tier node if normal numa
> > > > -		 * balancing is disabled
> > > > -		 */
> > > > -		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
> > > > -		    toptier)
> > > > -			goto unlock;
> > > > +		/* Get target node for single threaded private VMAs */
> > > > +		if (!(vma->vm_flags & VM_SHARED) &&
> > > > +		    atomic_read(&vma->vm_mm->mm_users) == 1)
> > > > +			target_node = numa_node_id();
> > >
> > > This is duplicated in both callers, and only used by
> > > folio_needs_prot_numa(),
> > > why not abstract this to the function also?
> >
> > There was a discussion on that in v3 I think where I asked the same
> > question.
> >
>
> Yes, this was discussed in v1: for the PTE case we can avoid doing the check
> (and the numa_node_id() call) 512 times, so we leave it as is.
>
OK so what you're saying is that in change_pte_range() you only need to do the
check once (per PMD) rather than for each PTE entry.
It might be worth having this as a separate helper function as it sucks to
duplicate that.
I actually really don't like that we pass in a 'target node' but only when it's
single-threaded. That's a bit silly.
E.g.:

static inline bool vma_is_single_threaded_private(struct vm_area_struct *vma)
{
	if (vma->vm_flags & VM_SHARED)
		return false;

	return atomic_read(&vma->vm_mm->mm_users) == 1;
}
Then you can pass in a boolean and change:

	/*
	 * Don't mess with PTEs if page is already on the node
	 * a single-threaded process is running on.
	 */
	nid = folio_nid(folio);
	if (target_node == nid)
		return false;

To:

	/*
	 * Don't mess with PTEs if page is already on the node
	 * a single-threaded process is running on.
	 */
	if (is_private_single_threaded && folio_nid(folio) == numa_node_id())
		return false;
Which makes a lot more sense than passing in NUMA_NO_NODE for shared VMAs.
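To be concrete about what I mean, roughly the below - note that the
folio_needs_prot_numa() parameter list here is just my guess at how it might
end up, not what's in your series:

	/* In change_huge_pmd(), sketch only: */
	if (pmd_protnone(*pmd))
		goto unlock;

	folio = pmd_folio(*pmd);
	if (!folio_needs_prot_numa(folio, vma,
				   vma_is_single_threaded_private(vma)))
		goto unlock;

	/*
	 * In change_pte_range(), compute the flag once before the per-PTE
	 * loop rather than for every entry:
	 */
	is_private_single_threaded = vma_is_single_threaded_private(vma);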
Now that we're sharing this, I really don't know why the function is still in
mprotect.c, by the way. Shouldn't it be in mempolicy.c, with a stub
for !CONFIG_NUMA_BALANCING?
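Something like the below in the header is what I have in mind - again just a
sketch, with the parameters being whatever folio_needs_prot_numa() actually
ends up taking:

#ifdef CONFIG_NUMA_BALANCING
bool folio_needs_prot_numa(struct folio *folio, struct vm_area_struct *vma,
			   bool is_private_single_threaded);
#else
static inline bool folio_needs_prot_numa(struct folio *folio,
					 struct vm_area_struct *vma,
					 bool is_private_single_threaded)
{
	return false;
}
#endif /* CONFIG_NUMA_BALANCING */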
Thanks
Thread overview: 32+ messages
2025-10-20 6:18 [PATCH v4 0/4] mm: some optimizations for prot numa Kefeng Wang
2025-10-20 6:18 ` [PATCH v4 1/4] mm: mprotect: always skip dma pinned folio in prot_numa_skip() Kefeng Wang
2025-10-21 3:06 ` Barry Song
2025-10-20 6:18 ` [PATCH v4 2/4] mm: mprotect: avoid unnecessary struct page accessing if pte_protnone() Kefeng Wang
2025-10-20 12:53 ` Lorenzo Stoakes
2025-10-20 13:08 ` David Hildenbrand
2025-10-20 6:18 ` [PATCH v4 3/4] mm: mprotect: convert to folio_needs_prot_numa() Kefeng Wang
2025-10-20 13:09 ` David Hildenbrand
2025-10-20 13:10 ` Lorenzo Stoakes
2025-10-20 15:14 ` Kefeng Wang
2025-10-20 17:34 ` David Hildenbrand
2025-10-20 17:49 ` Lorenzo Stoakes
2025-10-20 18:12 ` David Hildenbrand
2025-10-20 18:56 ` Lorenzo Stoakes
2025-10-21 8:41 ` Kefeng Wang
2025-10-21 9:13 ` David Hildenbrand
2025-10-21 9:25 ` Lorenzo Stoakes
2025-10-21 9:45 ` Lorenzo Stoakes
2025-10-21 12:54 ` Kefeng Wang
2025-10-21 14:56 ` Lorenzo Stoakes
2025-10-21 15:01 ` David Hildenbrand
2025-10-21 16:36 ` Lorenzo Stoakes
2025-10-22 0:51 ` Kefeng Wang
2025-10-20 6:18 ` [PATCH v4 4/4] mm: huge_memory: use folio_needs_prot_numa() for pmd folio Kefeng Wang
2025-10-20 13:11 ` David Hildenbrand
2025-10-20 13:15 ` Lorenzo Stoakes
2025-10-20 13:23 ` David Hildenbrand
2025-10-20 15:18 ` Kefeng Wang
2025-10-21 13:37 ` Kefeng Wang
2025-10-21 15:35 ` Lorenzo Stoakes
2025-10-21 15:29 ` Lorenzo Stoakes [this message]
2025-10-22 1:33 ` Kefeng Wang