linux-mm.kvack.org archive mirror
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: akpm@linux-foundation.org, david@redhat.com,
	mgorman@techsingularity.net, wangkefeng.wang@huawei.com,
	jhubbard@nvidia.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v2] mm: support multi-size THP numa balancing
Date: Thu, 21 Mar 2024 15:12:43 +0800	[thread overview]
Message-ID: <dc8ecc32-54c8-4931-9a13-64a3c3b41d35@linux.alibaba.com> (raw)
In-Reply-To: <87sf0mvg1c.fsf@yhuang6-desk2.ccr.corp.intel.com>

(Sorry for the late reply.)

On 2024/3/19 15:26, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> 
>> On 2024/3/18 14:16, Huang, Ying wrote:
>>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>>
>>>> Now anonymous page allocation already supports multi-size THP (mTHP),
>>>> but NUMA balancing still prohibits mTHP migration even for exclusive
>>>> mappings, which is unreasonable. Thus, let's support NUMA balancing
>>>> for exclusive mTHP first.
>>>>
>>>> Allow scanning mTHP:
>>>> Commit 859d4adc3415 ("mm: numa: do not trap faults on shared data section
>>>> pages") skips NUMA migration of shared CoW pages to avoid migrating
>>>> shared data segments. In addition, commit 80d47f5de5e3 ("mm: don't try to
>>>> NUMA-migrate COW pages that have other uses") changed to use page_count()
>>>> to avoid migrating GUP pages, which also skips mTHP NUMA scanning.
>>>> Theoretically, we can use folio_maybe_dma_pinned() to detect the GUP
>>>> case; although there is still a GUP race, that issue seems to have been
>>>> resolved by commit 80d47f5de5e3. Meanwhile, use folio_estimated_sharers()
>>>> to skip shared CoW pages, though it is not a precise sharers count. To
>>>> check if a folio is shared, ideally we want to make sure every page is
>>>> mapped by the same process, but doing that seems expensive, and using
>>>> the estimated mapcount seems to work when running the autonuma benchmark.
>>>>
>>>> Allow migrating mTHP:
>>>> As mentioned in the previous thread[1], large folios are more susceptible
>>>> to false-sharing issues, leading to pages ping-ponging back and forth
>>>> during NUMA balancing, which is currently hard to resolve. Therefore, as
>>>> a first step toward mTHP NUMA balancing, only exclusive mappings are
>>>> allowed to perform NUMA migration, to avoid false-sharing issues with
>>>> large folios. Similarly, use the estimated mapcount to skip shared
>>>> mappings, which seems to work in most cases; we already use
>>>> folio_estimated_sharers() to skip shared mappings in
>>>> migrate_misplaced_folio() for NUMA balancing, with no real complaints
>>>> so far.
>>> IIUC, folio_estimated_sharers() cannot identify multi-thread
>>> applications.  If some mTHP is shared by multiple threads in one
>>
>> Right.
>>
>>> process, how to deal with that?
>>
>> IMHO, it seems should_numa_migrate_memory() already does something to help:
>>
>> ......
>> 	if (!cpupid_pid_unset(last_cpupid) &&
>> 				cpupid_to_nid(last_cpupid) != dst_nid)
>> 		return false;
>>
>> 	/* Always allow migrate on private faults */
>> 	if (cpupid_match_pid(p, last_cpupid))
>> 		return true;
>> ......
>>
>> If the node of the CPU that last accessed the mTHP is different from
>> the current one, that means there is some contention for the mTHP
>> among threads, so migration will not be allowed.
> 
> Yes.  The two-stage filter in should_numa_migrate_memory() helps to some
> degree.
> 
> But the situation is somewhat different after your change.  Previously,
> in one round of NUMA balancing page table scanning, the number of the
> hint page fault for one process and one folio is 1.  After your change,
> the number may become folio_nr_pages().  So we need to evaluate the

Yes, this follows the same strategy as THP.

> original algorithm in the new situation and revise.  For example, use a
> N-stage filter for mTHP.

Yes, let me try whether an N-stage filter for mTHP can help.

> 
> Anyway, the NUMA balancing algorithm adjustment needs to be based on
> test results.
> 
> Another possibility is to emulate the original behavior as much as
> possible to try to reuse the original algorithm.  For example, we can
> restore all PTE maps upon the first hint page fault of a folio.  Then,
> the behavior is almost same as the original PMD mapped THP.  Personally,
> I prefer to use this as the first step.  Then, try to adjust the
> algorithm to take advantage of more information available.

OK, sounds reasonable, I will try.

>>
>> If the contention for the mTHP among threads is light, or the access
>> pattern is relatively stable, then can we allow migration?
>>
>>> For example, I think that we should avoid to migrate on the first fault
>>> for mTHP in should_numa_migrate_memory().
> 
> I am referring to the following code in should_numa_migrate_memory().
> 
> 	/*
> 	 * Allow first faults or private faults to migrate immediately early in
> 	 * the lifetime of a task. The magic number 4 is based on waiting for
> 	 * two full passes of the "multi-stage node selection" test that is
> 	 * executed below.
> 	 */
> 	if ((p->numa_preferred_nid == NUMA_NO_NODE || p->numa_scan_seq <= 4) &&
> 	    (cpupid_pid_unset(last_cpupid) || cpupid_match_pid(p, last_cpupid)))
> 		return true;
> 
> But, after thinking about this again, I realized that the original
> PMD-mapped THP may be migrated on the first fault sometimes.  So this
> isn't a new problem.  We may "optimize" it, but it needn't be part of
> this series.

Makes sense. Thanks for your input.



Thread overview: 9+ messages
2024-03-15  9:18 Baolin Wang
2024-03-18  6:16 ` Huang, Ying
2024-03-18  9:42   ` Baolin Wang
2024-03-18  9:48     ` David Hildenbrand
2024-03-18 10:13       ` Baolin Wang
2024-03-18 10:15         ` David Hildenbrand
2024-03-18 10:31           ` Baolin Wang
2024-03-19  7:26     ` Huang, Ying
2024-03-21  7:12       ` Baolin Wang [this message]
