Subject: Re: [PATCH v5 4/9] mm/demotion: Build demotion targets based on explicit memory tiers
From: Ying Huang
To: Tim Chen, "Aneesh Kumar K.V", linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Wei Xu, Greg Thelen, Yang Shi, Davidlohr Bueso, Tim C Chen, Brice Goglin, Michal Hocko, Linux Kernel Mailing List, Hesham Almatary, Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams, Feng Tang, Jagdish Gediya, Baolin Wang, David Rientjes
Date: Wed, 08 Jun 2022 14:52:20 +0800
References: <20220603134237.131362-1-aneesh.kumar@linux.ibm.com> <20220603134237.131362-5-aneesh.kumar@linux.ibm.com>

On Tue, 2022-06-07 at 15:51 -0700, Tim Chen wrote:
> On Fri, 2022-06-03 at 19:12 +0530, Aneesh Kumar K.V wrote:
> > 
> > +int next_demotion_node(int node)
> > +{
> > +	struct demotion_nodes *nd;
> > +	int target, nnodes, i;
> > +
> > +	if (!node_demotion)
> > +		return NUMA_NO_NODE;
> > +
> > +	nd = &node_demotion[node];
> > +
> > +	/*
> > +	 * node_demotion[] is updated without excluding this
> > +	 * function from running.
> > +	 *
> > +	 * Make sure to use RCU over entire code blocks if
> > +	 * node_demotion[] reads need to be consistent.
> > +	 */
> > +	rcu_read_lock();
> > +
> > +	nnodes = nodes_weight(nd->preferred);
> > +	if (!nnodes)
> > +		return NUMA_NO_NODE;
> > +
> > +	/*
> > +	 * If there are multiple target nodes, just select one
> > +	 * target node randomly.
> > +	 *
> > +	 * In addition, we can also use round-robin to select
> > +	 * target node, but we should introduce another variable
> > +	 * for node_demotion[] to record last selected target node,
> > +	 * that may cause cache ping-pong due to the changing of
> > +	 * last target node. Or introducing per-cpu data to avoid
> > +	 * caching issue, which seems more complicated. So selecting
> > +	 * target node randomly seems better until now.
> > +	 */
> > +	nnodes = get_random_int() % nnodes;
> > +	target = first_node(nd->preferred);
> > +	for (i = 0; i < nnodes; i++)
> > +		target = next_node(target, nd->preferred);
> 
> We can simplify the above 4 lines.
> 
> 	target = node_random(nd->preferred);
> 
> There's still a loop overhead though :(

To avoid the loop overhead, we can use the original implementation of
next_demotion_node().  Its performance is much better for the most
common case, where the number of preferred nodes is 1.

Best Regards,
Huang, Ying

> > +
> > +	rcu_read_unlock();
> > +
> > +	return target;
> > +}
> > +
> > 
> > + */
> > +static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
> > +						 unsigned long action, void *_arg)
> > +{
> > +	struct memory_notify *arg = _arg;
> > +
> > +	/*
> > +	 * Only update the node migration order when a node is
> > +	 * changing status, like online->offline.
> > +	 */
> > +	if (arg->status_change_nid < 0)
> > +		return notifier_from_errno(0);
> > +
> > +	switch (action) {
> > +	case MEM_OFFLINE:
> > +		/*
> > +		 * In case we are moving out of N_MEMORY. Keep the node
> > +		 * in the memory tier so that when we bring memory online,
> > +		 * they appear in the right memory tier. We still need
> > +		 * to rebuild the demotion order.
> > +		 */
> > +		mutex_lock(&memory_tier_lock);
> > +		establish_migration_targets();
> > +		mutex_unlock(&memory_tier_lock);
> > +		break;
> > +	case MEM_ONLINE:
> > +		/*
> > +		 * We ignore the error here; if the node already has the tier
> > +		 * registered, we will continue to use that for the new memory
> > +		 * we are adding here.
> > +		 */
> > +		node_set_memory_tier(arg->status_change_nid, DEFAULT_MEMORY_TIER);
> 
> Should establish_migration_targets() be run here?  Otherwise what are the
> demotion targets for this newly onlined node?
> 
> > +		break;
> > +	}
> > +
> > +	return notifier_from_errno(0);
> > +}
> > +
> 
> Tim
> 