From: "Huang, Ying"
To: Mika Penttilä
Cc: Andrew Morton, Dave Hansen, Yang Shi, Zi Yan, Michal Hocko, Wei Xu, Oscar Salvador, David Rientjes, Dan Williams, David Hildenbrand, Greg Thelen, Keith Busch
Subject: Re: [PATCH] mm/migrate: fix CPUHP state to update node demotion order
References: <20210918025849.88901-1-ying.huang@intel.com>
In-Reply-To: Mika Penttilä's message of Sat, 18 Sep 2021 07:04:41 +0300
Date: Sat, 18 Sep 2021 14:56:58 +0800
Message-ID: <87bl4qcqx1.fsf@yhuang6-desk2.ccr.corp.intel.com>

Mika Penttilä writes:

> Hi!
>
> On 18.9.2021 5.58, Huang Ying wrote:
>> The node demotion order needs to be updated during CPU hotplug.
>> This is because whether a NUMA node has CPUs influences the demotion
>> order.  The update function must run during CPU online/offline after
>> node_states[N_CPU] has been updated, which happens in
>> CPUHP_AP_ONLINE_DYN during CPU online and in CPUHP_MM_VMSTAT_DEAD
>> during CPU offline.  But commit 884a6e5d1f93 ("mm/migrate: update
>> node demotion order on hotplug events") calls the update function in
>> CPUHP_AP_ONLINE_DYN for both online and offline, which does not
>> satisfy the ordering requirement.  So this patch adds
>> CPUHP_AP_MM_DEMOTION_ONLINE and CPUHP_MM_DEMOTION_OFFLINE, which are
>> called after CPUHP_AP_ONLINE_DYN and CPUHP_MM_VMSTAT_DEAD during CPU
>> online/offline respectively, and registers the update function on
>> them.
>>
>> Fixes: 884a6e5d1f93 ("mm/migrate: update node demotion order on hotplug events")
>> Signed-off-by: "Huang, Ying"
>> Cc: Dave Hansen
>> Cc: Yang Shi
>> Cc: Zi Yan
>> Cc: Michal Hocko
>> Cc: Wei Xu
>> Cc: Oscar Salvador
>> Cc: David Rientjes
>> Cc: Dan Williams
>> Cc: David Hildenbrand
>> Cc: Greg Thelen
>> Cc: Keith Busch
>> ---
>>  include/linux/cpuhotplug.h | 2 ++
>>  mm/migrate.c               | 8 +++++---
>>  2 files changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
>> index 832d8a74fa59..5a92ea56f21b 100644
>> --- a/include/linux/cpuhotplug.h
>> +++ b/include/linux/cpuhotplug.h
>> @@ -72,6 +72,7 @@ enum cpuhp_state {
>>  	CPUHP_SLUB_DEAD,
>>  	CPUHP_DEBUG_OBJ_DEAD,
>>  	CPUHP_MM_WRITEBACK_DEAD,
>> +	CPUHP_MM_DEMOTION_OFFLINE,
>>  	CPUHP_MM_VMSTAT_DEAD,
>>  	CPUHP_SOFTIRQ_DEAD,
>>  	CPUHP_NET_MVNETA_DEAD,
>> @@ -240,6 +241,7 @@ enum cpuhp_state {
>>  	CPUHP_AP_BASE_CACHEINFO_ONLINE,
>>  	CPUHP_AP_ONLINE_DYN,
>>  	CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30,
>> +	CPUHP_AP_MM_DEMOTION_ONLINE,
>>  	CPUHP_AP_X86_HPET_ONLINE,
>>  	CPUHP_AP_X86_KVM_CLK_ONLINE,
>>  	CPUHP_AP_DTPM_CPU_ONLINE,
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index a6a7743ee98f..77d107a4577f 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -3278,9 +3278,8 @@ static int __init migrate_on_reclaim_init(void)
>>  {
>>  	int ret;
>>
>> -	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
>> -				migration_online_cpu,
>> -				migration_offline_cpu);
>> +	ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_OFFLINE, "mm/demotion:offline",
>> +					NULL, migration_offline_cpu);
>>  	/*
>>  	 * In the unlikely case that this fails, the automatic
>>  	 * migration targets may become suboptimal for nodes
>> @@ -3288,6 +3287,9 @@ static int __init migrate_on_reclaim_init(void)
>>  	 * rare case, do not bother trying to do anything special.
>>  	 */
>>  	WARN_ON(ret < 0);
>> +	ret = cpuhp_setup_state_nocalls(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
>> +					migration_online_cpu, NULL);
>
> You changed to the _nocalls variant; how does this handle
> initialization for the CPUs already present at boot?

You are right!  Thanks!

There is some discussion about CPUHP in another thread:

  https://lore.kernel.org/lkml/CAAPL-u_Tig1jK=mv_r=j-A-hR3Kpu7txiSFbPR3a8O1qhM1s-Q@mail.gmail.com/

I will wait for the discussion in that thread too before the next step.

Best Regards,
Huang, Ying