From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Cc: linux-mm@kvack.org,  linux-kernel@vger.kernel.org,
	 lkp@intel.com, akpm@linux-foundation.org,  y-goto@fujitsu.com,
	 mingo@redhat.com, peterz@infradead.org,  juri.lelli@redhat.com,
	vincent.guittot@linaro.org,  dietmar.eggemann@arm.com,
	rostedt@goodmis.org,  mgorman@suse.de,  vschneid@redhat.com,
	 Li Zhijian <lizhijian@fujitsu.com>,
	 Ben Segall <bsegall@google.com>
Subject: Re: [PATCH RFC v3] mm: memory-tiering: Fix PGPROMOTE_CANDIDATE counting
Date: Thu, 24 Jul 2025 15:36:48 +0800	[thread overview]
Message-ID: <87wm7y3ur3.fsf@DESKTOP-5N7EMDA> (raw)
In-Reply-To: <85d83be2-02f8-4ef6-91c7-ff920e47d834@fujitsu.com> (Shiyang Ruan's message of "Thu, 24 Jul 2025 10:39:40 +0800")

Shiyang Ruan <ruansy.fnst@fujitsu.com> writes:

> On 2025/7/23 11:09, Huang, Ying wrote:
>> Ruan Shiyang <ruansy.fnst@fujitsu.com> writes:
>> 
>>> From: Li Zhijian <lizhijian@fujitsu.com>
>>>
>>> ===
>>> Changes since v2:
>>>    1. Per Huang's suggestion, add a new stat so that these pages are
>>>    not counted in PGPROMOTE_CANDIDATE, avoiding any change to the
>>>    rate-limit mechanism.
>>> ===
>> This isn't the usual place for the changelog; please refer to other
>> patch emails.
>
> OK.  I'll move this part down below.
>
>>> Goto-san reported confusing pgpromote statistics where the
>>> pgpromote_success count significantly exceeded pgpromote_candidate.
>>>
>>> On a system with three nodes (nodes 0-1: DRAM 4GB, node 2: NVDIMM 4GB):
>>>   # Enable demotion only
>>>   echo 1 > /sys/kernel/mm/numa/demotion_enabled
>>>   numactl -m 0-1 memhog -r200 3500M >/dev/null &
>>>   pid=$!
>>>   sleep 2
>>>   numactl memhog -r100 2500M >/dev/null &
>>>   sleep 10
>>>   kill -9 $pid # terminate the 1st memhog
>>>   # Enable promotion
>>>   echo 2 > /proc/sys/kernel/numa_balancing
>>>
>>> After a few seconds, we observed `pgpromote_candidate < pgpromote_success`:
>>> $ grep -e pgpromote /proc/vmstat
>>> pgpromote_success 2579
>>> pgpromote_candidate 0
>>>
>>> In this scenario, after terminating the first memhog, the conditions for
>>> pgdat_free_space_enough() are quickly met, which triggers promotion.
>>> However, these migrated pages are only counted in PGPROMOTE_SUCCESS,
>>> not in PGPROMOTE_CANDIDATE.
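
To make the accounting gap concrete, here is a simplified, paraphrased
sketch of the pre-patch flow in should_numa_migrate_memory() (not the
exact upstream code; see the diff below for the real context):

	pgdat = NODE_DATA(dst_nid);
	if (pgdat_free_space_enough(pgdat)) {
		/*
		 * Fast path: the destination node has enough free memory,
		 * so the folio is promoted right away.  It is later counted
		 * in PGPROMOTE_SUCCESS, but PGPROMOTE_CANDIDATE is never
		 * updated on this path.
		 */
		pgdat->nbp_threshold = 0;
		return true;
	}
	...
	/*
	 * Slow path: numa_promotion_rate_limit() is what calls
	 * mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr), so only
	 * pages that go through the rate limiter are counted as
	 * candidates.
	 */
	return !numa_promotion_rate_limit(pgdat, rate_limit,
					  folio_nr_pages(folio));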
>>>
>>> To fix these confusing statistics, introduce PGPROMOTE_CANDIDATE_NOLIMIT
>>> to count these otherwise-missed promoted pages.  These pages are
>>> deliberately not counted in PGPROMOTE_CANDIDATE so that the existing
>>> promotion rate-limit algorithm and its performance remain unchanged.
>>>
>>> Perhaps PGPROMOTE_CANDIDATE_NOLIMIT is not well named; please comment if
>>> you have a better idea.
>> Yes.  Naming is hard.  I guess that the name comes from the promotion
>> that isn't rate limited.  I asked DeepSeek for a good abbreviation for
>> "not rate limited"; its answer is "NRL".  I don't know whether that's
>> good, but "NOT_RATE_LIMITED" seems too long.
>
> "NRL" Sounds good to me.
>
> I'm thinking of another one: since it's not rate limited, it could be
> migrated quickly/fast.  How about PGPROMOTE_CANDIDATE_FAST?

This sounds good to me.  Thanks!

---
Best Regards,
Huang, Ying

>
>> 
>>>
>>>
>> The empty line is unnecessary.
>
> OK.
>
>>> Cc: Huang Ying <ying.huang@linux.alibaba.com>
>> Suggested-by: Huang Ying <ying.huang@linux.alibaba.com>
>
> OK.
>
>
> --
> Thanks,
> Ruan.
>
>> 
>>> Cc: Ingo Molnar <mingo@redhat.com>
>>> Cc: Peter Zijlstra <peterz@infradead.org>
>>> Cc: Juri Lelli <juri.lelli@redhat.com>
>>> Cc: Vincent Guittot <vincent.guittot@linaro.org>
>>> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
>>> Cc: Steven Rostedt <rostedt@goodmis.org>
>>> Cc: Ben Segall <bsegall@google.com>
>>> Cc: Mel Gorman <mgorman@suse.de>
>>> Cc: Valentin Schneider <vschneid@redhat.com>
>>> Reported-by: Yasunori Gotou (Fujitsu) <y-goto@fujitsu.com>
>>> Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
>>> Signed-off-by: Ruan Shiyang <ruansy.fnst@fujitsu.com>
>>> ---
>>>   include/linux/mmzone.h | 2 ++
>>>   kernel/sched/fair.c    | 6 ++++--
>>>   mm/vmstat.c            | 1 +
>>>   3 files changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>> index 283913d42d7b..6216e2eecf3b 100644
>>> --- a/include/linux/mmzone.h
>>> +++ b/include/linux/mmzone.h
>>> @@ -231,6 +231,8 @@ enum node_stat_item {
>>>   #ifdef CONFIG_NUMA_BALANCING
>>>   	PGPROMOTE_SUCCESS,	/* promote successfully */
>>>   	PGPROMOTE_CANDIDATE,	/* candidate pages to promote */
>>> +	PGPROMOTE_CANDIDATE_NOLIMIT,	/* candidate pages without considering
>>> +					 * hot threshold */
>>>   #endif
>>>   	/* PGDEMOTE_*: pages demoted */
>>>   	PGDEMOTE_KSWAPD,
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 7a14da5396fb..12dac3519c49 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -1940,11 +1940,14 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
>>>   		struct pglist_data *pgdat;
>>>   		unsigned long rate_limit;
>>>   		unsigned int latency, th, def_th;
>>> +		long nr = folio_nr_pages(folio);
>>>
>>>  		pgdat = NODE_DATA(dst_nid);
>>>   		if (pgdat_free_space_enough(pgdat)) {
>>>   			/* workload changed, reset hot threshold */
>>>   			pgdat->nbp_threshold = 0;
>>> +			mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE_NOLIMIT,
>>> +					    nr);
>>>   			return true;
>>>   		}
>>>
>>> @@ -1958,8 +1961,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct folio *folio,
>>>   		if (latency >= th)
>>>   			return false;
>>>   -		return !numa_promotion_rate_limit(pgdat, rate_limit,
>>> -						  folio_nr_pages(folio));
>>> +		return !numa_promotion_rate_limit(pgdat, rate_limit, nr);
>>>   	}
>>>
>>>  	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
>>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>>> index a78d70ddeacd..ca44a2dd5497 100644
>>> --- a/mm/vmstat.c
>>> +++ b/mm/vmstat.c
>>> @@ -1272,6 +1272,7 @@ const char * const vmstat_text[] = {
>>>   #ifdef CONFIG_NUMA_BALANCING
>>>   	"pgpromote_success",
>>>   	"pgpromote_candidate",
>>> +	"pgpromote_candidate_nolimit",
>>>   #endif
>>>   	"pgdemote_kswapd",
>>>   	"pgdemote_direct",
>> ---
>> Best Regards,
>> Huang, Ying

