Message-ID: <9f05470b3188c2a81696841a3a3e007e99caecea.camel@intel.com>
Subject: Re: [PATCH v5 9/9] mm/demotion: Update node_is_toptier to work with memory tiers
From: Ying Huang <ying.huang@intel.com>
To: "Aneesh Kumar K.V", linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Wei Xu, Greg Thelen, Yang Shi, Davidlohr Bueso, Tim C Chen, Brice Goglin, Michal Hocko, Linux Kernel Mailing List, Hesham Almatary, Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams, Feng Tang, Jagdish Gediya, Baolin Wang, David Rientjes
Date: Fri, 10 Jun 2022 14:04:26 +0800
In-Reply-To: <87sfoffcfz.fsf@linux.ibm.com>
References: <20220603134237.131362-1-aneesh.kumar@linux.ibm.com> <20220603134237.131362-10-aneesh.kumar@linux.ibm.com> <6e94b7e2a6192e4cacba1db3676b5b5cf9b98eac.camel@intel.com> <11f94e0c50f17f4a6a2f974cb69a1ae72853e2be.camel@intel.com> <232817e0-24fd-e022-6c92-c260f7f01f8a@linux.ibm.com> <87sfoffcfz.fsf@linux.ibm.com>
On Wed, 2022-06-08 at 20:07 +0530, Aneesh Kumar K.V wrote:
> Ying Huang writes:
>
> ....
>
> > > > > is this good (not tested)?
> > > > >
> > > > > 	/*
> > > > > 	 * Build the allowed promotion mask.  Promotion is allowed
> > > > > 	 * from a higher memory tier to a lower memory tier only if
> > > > > 	 * the lower memory tier doesn't include compute.  We want to
> > > > > 	 * skip promotion from a memory tier if any node that is
> > > > > 	 * part of that memory tier has CPUs.  Once we detect such
> > > > > 	 * a memory tier, we consider that tier as a top tier from
> > > > > 	 * which promotion is not allowed.
> > > > > 	 */
> > > > > 	list_for_each_entry_reverse(memtier, &memory_tiers, list) {
> > > > > 		nodes_and(allowed, node_states[N_CPU], memtier->nodelist);
> > > > > 		if (nodes_empty(allowed))
> > > > > 			nodes_or(promotion_mask, promotion_mask, memtier->nodelist);
> > > > > 		else
> > > > > 			break;
> > > > > 	}
> > > > >
> > > > > and then
> > > > >
> > > > > static inline bool node_is_toptier(int node)
> > > > > {
> > > > > 	return !node_isset(node, promotion_mask);
> > > > > }
> > > >
> > > > This should work.  But it appears unnatural.  So, I don't think we
> > > > should keep adding more and more node masks to mitigate the design
> > > > decision that we cannot access memory tier information directly.  All
> > > > of this becomes simple and natural if we can access memory tier
> > > > information directly.
> > >
> > > How do we derive whether a node is toptier if we have memtier
> > > details in pgdat?
> >
> > pgdat -> memory tier -> rank
> >
> > Then we can compare this rank with the fast memory rank.  The fast
> > memory rank can be calculated dynamically at appropriate places.
>
> This is what I am testing now.  We still need to closely audit the
> lock-free access to NODE_DATA()->memtier.  For v6 I will keep this as a
> separate patch and once we all agree that it is safe, I will fold it
> back.

Thanks for doing this.  We finally have a way to access memory_tier in
the hot path.

[snip]

> +/*
> + * Called with memory_tier_lock.  Hence the device references cannot
> + * be dropped during this function.
> + */
> +static void memtier_node_clear(int node, struct memory_tier *memtier)
> +{
> +	pg_data_t *pgdat;
> +
> +	pgdat = NODE_DATA(node);
> +	if (!pgdat)
> +		return;
> +
> +	rcu_assign_pointer(pgdat->memtier, NULL);
> +	/*
> +	 * Make sure the read side sees the NULL value before we clear the
> +	 * node from the nodelist.
> +	 */
> +	synchronize_rcu();
> +	node_clear(node, memtier->nodelist);
> +}
> +
> +static void memtier_node_set(int node, struct memory_tier *memtier)
> +{
> +	pg_data_t *pgdat;
> +
> +	pgdat = NODE_DATA(node);
> +	if (!pgdat)
> +		return;
> +	/*
> +	 * Make sure we mark the memtier NULL before we assign the new
> +	 * memory tier to the NUMA node.  This makes sure that anybody
> +	 * looking at NODE_DATA finds a NULL memtier or one that is
> +	 * still valid.
> +	 */
> +	rcu_assign_pointer(pgdat->memtier, NULL);
> +	synchronize_rcu();

Per my understanding, in your code, when we change pgdat->memtier, we
call synchronize_rcu() twice.  IMHO, once should be OK.  That is,
something like below,

	rcu_assign_pointer(pgdat->memtier, NULL);
	node_clear(node, memtier->nodelist);
	synchronize_rcu();
	node_set(node, new_memtier->nodelist);
	rcu_assign_pointer(pgdat->memtier, new_memtier);

In this way, there will be 3 states:

1. prev

	pgdat->memtier == old_memtier
	node_isset(node, old_memtier->node_list)
	!node_isset(node, new_memtier->node_list)

2. transitioning

	pgdat->memtier == NULL
	!node_isset(node, old_memtier->node_list)
	!node_isset(node, new_memtier->node_list)

3. after

	pgdat->memtier == new_memtier
	!node_isset(node, old_memtier->node_list)
	node_isset(node, new_memtier->node_list)

The state observed by a reader may be 1, 2, 3, 1+2, or 2+3, but it will
never be 1+3.  I think that satisfies our requirements.
[snip]

> +	node_set(node, memtier->nodelist);
> +	rcu_assign_pointer(pgdat->memtier, memtier);
> +}
> +
> +bool node_is_toptier(int node)
> +{
> +	bool toptier;
> +	pg_data_t *pgdat;
> +	struct memory_tier *memtier;
> +
> +	pgdat = NODE_DATA(node);
> +	if (!pgdat)
> +		return false;
> +
> +	rcu_read_lock();
> +	memtier = rcu_dereference(pgdat->memtier);
> +	if (!memtier) {
> +		toptier = true;
> +		goto out;
> +	}
> +	if (memtier->rank >= top_tier_rank)
> +		toptier = true;
> +	else
> +		toptier = false;
> +out:
> +	rcu_read_unlock();
> +	return toptier;
> +}

[snip]

>  static int __node_create_and_set_memory_tier(int node, int tier)
>  {
>  	int ret = 0;
> @@ -253,7 +318,7 @@ static int __node_create_and_set_memory_tier(int node, int tier)
>  			goto out;
>  		}
>  	}
> -	node_set(node, memtier->nodelist);
> +	memtier_node_set(node, memtier);
>  out:
>  	return ret;
>  }
> @@ -275,12 +340,12 @@ int node_create_and_set_memory_tier(int node, int tier)
>  	if (current_tier->dev.id == tier)
>  		goto out;
>
> -	node_clear(node, current_tier->nodelist);
> +	memtier_node_clear(node, current_tier);
>
>  	ret = __node_create_and_set_memory_tier(node, tier);
>  	if (ret) {
>  		/* reset it back to older tier */
> -		node_set(node, current_tier->nodelist);
> +		memtier_node_set(node, current_tier);
>  		goto out;
>  	}

[snip]

>  static int __init memory_tier_init(void)
>  {
> -	int ret;
> +	int ret, node;
>  	struct memory_tier *memtier;
>
>  	ret = subsys_system_register(&memory_tier_subsys, memory_tier_attr_groups);
> @@ -766,7 +829,13 @@ static int __init memory_tier_init(void)
>  		panic("%s() failed to register memory tier: %d\n", __func__, ret);
>
>  	/* CPU only nodes are not part of memory tiers. */
> -	memtier->nodelist = node_states[N_MEMORY];
> +	for_each_node_state(node, N_MEMORY) {
> +		/*
> +		 * Should be safe to do this early in the boot.
> +		 */
> +		NODE_DATA(node)->memtier = memtier;

Not absolutely required.
But IMHO it's more consistent to use rcu_assign_pointer() here.

> +		node_set(node, memtier->nodelist);
> +	}
>
>  	migrate_on_reclaim_init();
>
>  	return 0;

Best Regards,
Huang, Ying