From: "Huang, Ying" <ying.huang@intel.com>
To: "Mika Penttilä" <mika.penttila@nextfour.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Yang Shi <shy828301@gmail.com>, Zi Yan <ziy@nvidia.com>,
Michal Hocko <mhocko@suse.com>, Wei Xu <weixugc@google.com>,
Oscar Salvador <osalvador@suse.de>,
David Rientjes <rientjes@google.com>,
Dan Williams <dan.j.williams@intel.com>,
"David Hildenbrand" <david@redhat.com>,
Greg Thelen <gthelen@google.com>,
"Keith Busch" <kbusch@kernel.org>
Subject: Re: [PATCH] mm/migrate: fix CPUHP state to update node demotion order
Date: Sat, 18 Sep 2021 14:56:58 +0800
Message-ID: <87bl4qcqx1.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <ccf79d4d-4dd6-14c3-bab6-4fac034d8e22@nextfour.com> (Mika Penttilä's message of "Sat, 18 Sep 2021 07:04:41 +0300")
Mika Penttilä <mika.penttila@nextfour.com> writes:
> Hi!
>
> On 18.9.2021 5.58, Huang Ying wrote:
>> The node demotion order needs to be updated during CPU hotplug,
>> because whether a NUMA node has CPUs may influence the demotion
>> order. The update function should be called during CPU
>> online/offline after node_states[N_CPU] has been updated, which
>> happens in CPUHP_AP_ONLINE_DYN during CPU online and in
>> CPUHP_MM_VMSTAT_DEAD during CPU offline. But in commit 884a6e5d1f93
>> ("mm/migrate: update node demotion order on hotplug events"), the
>> function that updates the node demotion order is called in
>> CPUHP_AP_ONLINE_DYN for both CPU online and offline, which doesn't
>> satisfy the ordering requirement. So this patch adds
>> CPUHP_AP_MM_DEMOTION_ONLINE and CPUHP_MM_DEMOTION_OFFLINE, which run
>> after CPUHP_AP_ONLINE_DYN and CPUHP_MM_VMSTAT_DEAD during CPU
>> online/offline respectively, and registers the update function on
>> them.
>>
>> Fixes: 884a6e5d1f93 ("mm/migrate: update node demotion order on hotplug events")
>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>> Cc: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: Yang Shi <shy828301@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Wei Xu <weixugc@google.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: David Rientjes <rientjes@google.com>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Greg Thelen <gthelen@google.com>
>> Cc: Keith Busch <kbusch@kernel.org>
>> ---
>>   include/linux/cpuhotplug.h | 2 ++
>>   mm/migrate.c               | 8 +++++---
>>   2 files changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
>> index 832d8a74fa59..5a92ea56f21b 100644
>> --- a/include/linux/cpuhotplug.h
>> +++ b/include/linux/cpuhotplug.h
>> @@ -72,6 +72,7 @@ enum cpuhp_state {
>> CPUHP_SLUB_DEAD,
>> CPUHP_DEBUG_OBJ_DEAD,
>> CPUHP_MM_WRITEBACK_DEAD,
>> + CPUHP_MM_DEMOTION_OFFLINE,
>> CPUHP_MM_VMSTAT_DEAD,
>> CPUHP_SOFTIRQ_DEAD,
>> CPUHP_NET_MVNETA_DEAD,
>> @@ -240,6 +241,7 @@ enum cpuhp_state {
>> CPUHP_AP_BASE_CACHEINFO_ONLINE,
>> CPUHP_AP_ONLINE_DYN,
>> CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30,
>> + CPUHP_AP_MM_DEMOTION_ONLINE,
>> CPUHP_AP_X86_HPET_ONLINE,
>> CPUHP_AP_X86_KVM_CLK_ONLINE,
>> CPUHP_AP_DTPM_CPU_ONLINE,
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index a6a7743ee98f..77d107a4577f 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -3278,9 +3278,8 @@ static int __init migrate_on_reclaim_init(void)
>> {
>> int ret;
>> -	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
>> -				migration_online_cpu,
>> -				migration_offline_cpu);
>> + ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_OFFLINE, "mm/demotion:offline",
>> + NULL, migration_offline_cpu);
>> /*
>> * In the unlikely case that this fails, the automatic
>> * migration targets may become suboptimal for nodes
>> @@ -3288,6 +3287,9 @@ static int __init migrate_on_reclaim_init(void)
>> * rare case, do not bother trying to do anything special.
>> */
>> WARN_ON(ret < 0);
>> + ret = cpuhp_setup_state_nocalls(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
>> + migration_online_cpu, NULL);
>>
>
> You changed to the _nocalls variant; how does this handle
> initialization for CPUs present at boot?
You are right! Thanks!
There is some discussion about CPUHP in another thread, as follows,

https://lore.kernel.org/lkml/CAAPL-u_Tig1jK=mv_r=j-A-hR3Kpu7txiSFbPR3a8O1qhM1s-Q@mail.gmail.com/

I will wait for the discussion in that thread too before taking the
next step.
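
Just to illustrate the difference (a rough sketch only, not a tested
fix; function and state names as in the patch above):
cpuhp_setup_state() invokes the startup callback for every CPU that is
already online when the state is registered, while the _nocalls variant
only installs the callbacks, so the CPUs present at boot would be missed
unless they are handled explicitly, e.g. something like

static int __init migrate_on_reclaim_init(void)
{
	int ret;

	/*
	 * The offline state can stay _nocalls: its startup callback is
	 * NULL, so there is nothing to invoke at registration time.
	 */
	ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_OFFLINE,
					"mm/demotion:offline",
					NULL, migration_offline_cpu);
	WARN_ON(ret < 0);

	/*
	 * Use the calling variant here so that migration_online_cpu()
	 * also runs for the CPUs that are already online at boot,
	 * instead of only for future hotplug events.
	 */
	ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE,
				"mm/demotion:online",
				migration_online_cpu, NULL);
	WARN_ON(ret < 0);

	return 0;
}

That would keep the ordering fix while restoring the boot-time
initialization that the original CPUHP_AP_ONLINE_DYN registration
provided.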
Best Regards,
Huang, Ying