From: David Hildenbrand <david@redhat.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>,
	"Huang, Ying" <ying.huang@intel.com>
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net,
	wangkefeng.wang@huawei.com, jhubbard@nvidia.com,
	21cnbao@gmail.com, ryan.roberts@arm.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v2] mm: support multi-size THP numa balancing
Date: Mon, 18 Mar 2024 10:48:01 +0100	[thread overview]
Message-ID: <ca6cca00-8a1b-48c8-b93a-99a772103b8e@redhat.com> (raw)
In-Reply-To: <f00316da-e45d-46c1-99b8-ee160303eaaa@linux.alibaba.com>

On 18.03.24 10:42, Baolin Wang wrote:
> 
> 
> On 2024/3/18 14:16, Huang, Ying wrote:
>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>
>>> Anonymous page allocation now supports multi-size THP (mTHP), but NUMA
>>> balancing still prohibits mTHP migration even for exclusive mappings,
>>> which is unreasonable. Thus, let's start by supporting NUMA balancing
>>> for exclusively mapped mTHP.
>>>
>>> Allow scanning mTHP:
>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>> pages") skips NUMA migration of shared CoW pages to avoid migrating shared
>>> data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>> NUMA-migrate COW pages that have other uses") changed the code to use
>>> page_count() to avoid migrating GUP pages, which also skips mTHP NUMA
>>> scanning. In theory we can use folio_maybe_dma_pinned() to detect the GUP
>>> case; although a GUP race remains, that issue seems to have been resolved
>>> by commit 80d47f5de5e3. Meanwhile, use folio_estimated_sharers() to skip
>>> shared CoW pages, even though it is not a precise sharer count. To check
>>> whether a folio is shared, ideally we would want to make sure every page is
>>> mapped by the same process, but doing that seems expensive; using the
>>> estimated mapcount seems to work when running the autonuma benchmark.
>>>
>>> Allow migrating mTHP:
>>> As mentioned in the previous thread[1], large folios are more susceptible
>>> to false-sharing issues, leading to pages ping-ponging back and forth during
>>> NUMA balancing, which is currently hard to resolve. Therefore, as a first
>>> step towards mTHP NUMA balancing, only exclusive mappings are allowed to
>>> perform NUMA migration, to avoid the false-sharing issues with large folios.
>>> Similarly, use the estimated mapcount to skip shared mappings, which seems
>>> to work in most cases (?); we already use folio_estimated_sharers() to skip
>>> shared mappings in migrate_misplaced_folio() for NUMA balancing, and there
>>> seem to be no real complaints so far.
>>
>> IIUC, folio_estimated_sharers() cannot identify multi-threaded
>> applications.  If some mTHP is shared by multiple threads in one
> 
> Right.
> 

Wasn't this "false sharing" previously raised/described by Mel in this 
context?

>> process, how do we deal with that?
> 
> IMHO, it seems should_numa_migrate_memory() already does something to help?
> 
> ......
> 	if (!cpupid_pid_unset(last_cpupid) &&
> 				cpupid_to_nid(last_cpupid) != dst_nid)
> 		return false;
> 
> 	/* Always allow migrate on private faults */
> 	if (cpupid_match_pid(p, last_cpupid))
> 		return true;
> ......
> 
> If the node of the CPU that accessed the mTHP last time differs from the
> current one, that means there is some contention for that mTHP among
> threads, so migration will not be allowed.
> 
> If contention for the mTHP among threads is light, or the access pattern
> is relatively stable, then we can allow migration?
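
To make that concrete, here is a tiny userspace model of those two checks (not
kernel code; the (nid, pid) pair and the values below are invented purely for
illustration), showing how a folio ping-ponged between threads on two nodes has
its migration denied:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for last_cpupid: last accessing node and thread. */
struct last_access { int nid; int pid; };	/* pid == -1 means unset */

static bool model_should_migrate(struct last_access last, int this_pid, int dst_nid)
{
	/* Last access came from another node: looks contended, stay put. */
	if (last.pid != -1 && last.nid != dst_nid)
		return false;
	/* Same thread faulting again: private access, allow migration. */
	if (last.pid == this_pid)
		return true;
	return true;	/* the real function has further checks, omitted here */
}

int main(void)
{
	/* Thread 100 on node 0 touched the folio last; thread 200 now faults on node 1. */
	struct last_access last = { .nid = 0, .pid = 100 };
	printf("migrate? %d\n", model_should_migrate(last, 200, 1));	/* prints 0 */
	return 0;
}
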
> 
>> For example, I think that we should avoid migrating on the first fault
>> for an mTHP in should_numa_migrate_memory().
>>
>> More thoughts?  Can we add a field in struct folio for mTHP to count
>> hint page faults from the same node?
> 
> IIUC, we do not need to add a new field to the folio; it seems we can reuse
> the ->_flags_2a field. But how would we use it? If there are multiple
> consecutive NUMA faults from the same node, then allow migration?

_flags_2a cannot be used. You could place something after _deferred_list 
IIRC. But only for folios with order>1.

But I also wonder how one could achieve that using a new field.
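
Purely as a thought experiment, one hypothetical shape for such a field, placed
after _deferred_list so it would only exist for order > 1 folios; all names,
widths and the threshold below are invented, and races/serialization are
ignored for simplicity:

/*
 * Hypothetical fields living in the tail-page area after _deferred_list,
 * i.e. only valid for folios with order > 1:
 *
 *	int _numa_hint_nid;	node of the last NUMA hint fault
 *	int _numa_hint_streak;	consecutive hint faults seen from that node
 */

/* Hypothetical: called on a NUMA hint fault from node @nid. */
static bool mthp_hint_fault_should_migrate(struct folio *folio, int nid)
{
	if (folio->_numa_hint_nid != nid) {
		/* First fault, or the faulting node changed: do not migrate yet. */
		folio->_numa_hint_nid = nid;
		folio->_numa_hint_streak = 1;
		return false;
	}
	/* Only migrate after a few consecutive faults from the same node. */
	return ++folio->_numa_hint_streak >= 3;
}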

-- 
Cheers,

David / dhildenb


