Date: Fri, 27 May 2022 16:03:52 +0100
From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
CC: linux-mm@kvack.org, Huang Ying, Greg Thelen, Yang Shi,
 Davidlohr Bueso, Tim C Chen, Brice Goglin, Michal Hocko,
 Linux Kernel Mailing List, Hesham Almatary, Dave Hansen,
 Alistair Popple, Dan Williams, Feng Tang, Jagdish Gediya,
 Baolin Wang, David Rientjes
Subject: Re: [RFC PATCH v4 7/7] mm/demotion: Demote pages according to
 allocation fallback order
Message-ID: <20220527160352.00006788@Huawei.com>
In-Reply-To: <20220527122528.129445-8-aneesh.kumar@linux.ibm.com>
References: <20220527122528.129445-1-aneesh.kumar@linux.ibm.com>
 <20220527122528.129445-8-aneesh.kumar@linux.ibm.com>
Organization: Huawei Technologies Research and Development (UK) Ltd.
On Fri, 27 May 2022 17:55:28 +0530
"Aneesh Kumar K.V" wrote:

> From: Jagdish Gediya
>
> Currently, a higher tier node can only be demoted to selected nodes
> on the next lower tier as defined by the demotion path, not to any
> other node from any lower tier. This strict, hard-coded demotion
> order does not work in all use cases (e.g. some use cases may want
> to allow cross-socket demotion to another node in the same demotion
> tier as a fallback when the preferred demotion node is out of
> space). This demotion order is also inconsistent with the page
> allocation fallback order when all the nodes in a higher tier are
> out of space: the page allocation can fall back to any node from any
> lower tier, whereas the demotion order doesn't currently allow that.
>
> This patch adds support for getting the mask of all allowed demotion
> targets for a node. demote_page_list() is also modified to use this
> allowed node mask by filling it in the migration_target_control
> structure before passing it to migrate_pages().
>
> Signed-off-by: Jagdish Gediya
> Signed-off-by: Aneesh Kumar K.V

Ah, OK. So this deals with the 'any tier with a higher rank' case.

Painful though it is, I would suggest the series needs recreating as
a single set of steps to reach the end goal, rather than introducing
one approach and then modifying it. What you have now might work, but
as a series it's very hard to review. If there is a good reason to
maintain this 'path to the answer' then it can be done, but it's
going to make this harder to get review + merge.

Superficially this looks like it addresses my earlier comments.
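
To check my understanding of the new second pass in
establish_migration_targets(): every node ends up allowed to demote
to the union of all strictly lower tiers. Here is a throwaway
userspace model of that loop (the tier layout, node count and
raw-bitmask representation are all invented for illustration; the
kernel side uses nodemask_t and the real tier list):

#include <stdio.h>

int main(void)
{
	/* Tiers listed top (fastest) first; each entry is a node bitmask. */
	unsigned int tiers[] = { 0x3 /* nodes 0,1 */, 0x4 /* node 2 */,
				 0x8 /* node 3 */ };
	unsigned int n_memory = 0xf;	/* all four nodes have memory */
	unsigned int allowed = 0;
	int i, node;

	for (i = 0; i < 3; i++)		/* union of every tier's nodes */
		allowed |= tiers[i];
	allowed &= n_memory;		/* drop nodes not yet in N_MEMORY */

	for (i = 0; i < 3; i++) {
		allowed &= ~tiers[i];	/* strip current tier and above */
		for (node = 0; node < 4; node++)
			if (tiers[i] & (1u << node))
				printf("node %d -> allowed 0x%x\n",
				       node, allowed);
	}
	return 0;
}

This prints allowed masks of 0xc for nodes 0 and 1, 0x8 for node 2
and 0x0 for node 3, i.e. demotion can fall back to any lower-tier
node, which matches the allocation fallback order the commit message
describes.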
Jonathan

> ---
>  include/linux/migrate.h |  5 ++++
>  mm/migrate.c            | 52 +++++++++++++++++++++++++++++++++++++----
>  mm/vmscan.c             | 38 ++++++++++++++----------------
>  3 files changed, 71 insertions(+), 24 deletions(-)
> 
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 77c581f47953..1f3cbd5185ca 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -182,6 +182,7 @@ void node_remove_from_memory_tier(int node);
>  int node_get_memory_tier_id(int node);
>  int node_set_memory_tier_rank(int node, int tier);
>  int node_reset_memory_tier(int node, int tier);
> +void node_get_allowed_targets(int node, nodemask_t *targets);
>  #else
>  #define numa_demotion_enabled	false
>  static inline int next_demotion_node(int node)
> @@ -189,6 +190,10 @@ static inline int next_demotion_node(int node)
>  	return NUMA_NO_NODE;
>  }
>  
> +static inline void node_get_allowed_targets(int node, nodemask_t *targets)
> +{
> +	*targets = NODE_MASK_NONE;
> +}
>  #endif	/* CONFIG_TIERED_MEMORY */
>  
>  #endif /* _LINUX_MIGRATE_H */
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 114c7428b9f3..84fac477538c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2129,6 +2129,7 @@ struct memory_tier {
>  
>  struct demotion_nodes {
>  	nodemask_t preferred;
> +	nodemask_t allowed;
>  };
>  
>  #define to_memory_tier(device) container_of(device, struct memory_tier, dev)
> @@ -2475,6 +2476,25 @@ int node_set_memory_tier_rank(int node, int rank)
>  }
>  EXPORT_SYMBOL_GPL(node_set_memory_tier_rank);
>  
> +void node_get_allowed_targets(int node, nodemask_t *targets)
> +{
> +	/*
> +	 * node_demotion[] is updated without excluding this
> +	 * function from running.
> +	 *
> +	 * If any node is moving to lower tiers, the modifications
> +	 * in node_demotion[] are still valid for this node; if any
> +	 * node is moving to a higher tier, the moving node may be
> +	 * used once more for demotion, which should be OK, so RCU
> +	 * should be enough here.
> +	 */
> +	rcu_read_lock();
> +
> +	*targets = node_demotion[node].allowed;
> +
> +	rcu_read_unlock();
> +}
> +
>  /**
>   * next_demotion_node() - Get the next node in the demotion path
>   * @node: The starting node to lookup the next node
> @@ -2534,8 +2554,10 @@ static void __disable_all_migrate_targets(void)
>  {
>  	int node;
>  
> -	for_each_node_mask(node, node_states[N_MEMORY])
> +	for_each_node_mask(node, node_states[N_MEMORY]) {
>  		node_demotion[node].preferred = NODE_MASK_NONE;
> +		node_demotion[node].allowed = NODE_MASK_NONE;
> +	}
>  }
>  
>  static void disable_all_migrate_targets(void)
> @@ -2558,12 +2580,11 @@ static void disable_all_migrate_targets(void)
>   */
>  static void establish_migration_targets(void)
>  {
> -	struct list_head *ent;
>  	struct memory_tier *memtier;
>  	struct demotion_nodes *nd;
> -	int tier, target = NUMA_NO_NODE, node;
> +	int target = NUMA_NO_NODE, node;
>  	int distance, best_distance;
> -	nodemask_t used;
> +	nodemask_t used, allowed = NODE_MASK_NONE;
>  
>  	if (!node_demotion)
>  		return;
> @@ -2603,6 +2624,29 @@ static void establish_migration_targets(void)
>  			}
>  		} while (1);
>  	}
> +	/*
> +	 * Now build the allowed mask for each node, collecting the node
> +	 * mask from all memory tiers below it. This allows us to fall
> +	 * back demotion page allocation to a set of nodes that is closer
> +	 * to the above selected preferred node.
> +	 */
> +	list_for_each_entry(memtier, &memory_tiers, list)
> +		nodes_or(allowed, allowed, memtier->nodelist);
> +	/*
> +	 * Remove nodes not yet in N_MEMORY.
> +	 */
> +	nodes_and(allowed, node_states[N_MEMORY], allowed);
> +
> +	list_for_each_entry(memtier, &memory_tiers, list) {
> +		/*
> +		 * Keep removing the current tier from the allowed nodes.
> +		 * This removes all nodes in the current and above memory
> +		 * tiers from the allowed mask.
> +		 */
> +		nodes_andnot(allowed, allowed, memtier->nodelist);
> +		for_each_node_mask(node, memtier->nodelist)
> +			node_demotion[node].allowed = allowed;
> +	}
>  }
>  
>  /*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1678802e03e7..feb994589481 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1454,23 +1454,6 @@ static void folio_check_dirty_writeback(struct folio *folio,
>  		mapping->a_ops->is_dirty_writeback(&folio->page, dirty, writeback);
>  }
>  
> -static struct page *alloc_demote_page(struct page *page, unsigned long node)
> -{
> -	struct migration_target_control mtc = {
> -		/*
> -		 * Allocate from 'node', or fail quickly and quietly.
> -		 * When this happens, 'page' will likely just be discarded
> -		 * instead of migrated.
> -		 */
> -		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> -			    __GFP_THISNODE | __GFP_NOWARN |
> -			    __GFP_NOMEMALLOC | GFP_NOWAIT,
> -		.nid = node
> -	};
> -
> -	return alloc_migration_target(page, (unsigned long)&mtc);
> -}
> -
>  /*
>   * Take pages on @demote_list and attempt to demote them to
>   * another node. Pages which are not demoted are left on
> @@ -1481,6 +1464,19 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
>  {
>  	int target_nid = next_demotion_node(pgdat->node_id);
>  	unsigned int nr_succeeded;
> +	nodemask_t allowed_mask;
> +
> +	struct migration_target_control mtc = {
> +		/*
> +		 * Allocate from 'node', or fail quickly and quietly.
> +		 * When this happens, 'page' will likely just be discarded
> +		 * instead of migrated.
> +		 */
> +		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | __GFP_NOWARN |
> +			__GFP_NOMEMALLOC | GFP_NOWAIT,
> +		.nid = target_nid,
> +		.nmask = &allowed_mask
> +	};
>  
>  	if (list_empty(demote_pages))
>  		return 0;
> @@ -1488,10 +1484,12 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
>  	if (target_nid == NUMA_NO_NODE)
>  		return 0;
>  
> +	node_get_allowed_targets(pgdat->node_id, &allowed_mask);
> +
>  	/* Demotion ignores all cpuset and mempolicy settings */
> -	migrate_pages(demote_pages, alloc_demote_page, NULL,
> -		      target_nid, MIGRATE_ASYNC, MR_DEMOTION,
> -		      &nr_succeeded);
> +	migrate_pages(demote_pages, alloc_migration_target, NULL,
> +		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
> +		      &nr_succeeded);
>  
>  	if (current_is_kswapd())
>  		__count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
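
One closing observation, mostly to confirm the intent of the last
hunk: with __GFP_THISNODE gone and .nmask filled in,
alloc_migration_target() is no longer forced to fail when target_nid
is out of space; it can satisfy the allocation from any other node in
allowed_mask. A toy model of that fallback (the node layout and the
alloc_target()/node_has_space[] helpers are invented here; the real
allocator walks target_nid's zonelist restricted by the mask):

#include <stdio.h>
#include <stdbool.h>

/* Invented stand-in for "this node can satisfy the allocation". */
static bool node_has_space[4] = { false, false, false, true };

/* Preferred nid first, then any other node set in nmask. */
static int alloc_target(int nid, unsigned int nmask)
{
	int node;

	if (node_has_space[nid])
		return nid;
	for (node = 0; node < 4; node++)
		if ((nmask & (1u << node)) && node_has_space[node])
			return node;
	return -1;	/* the old __GFP_THISNODE behaviour: fail */
}

int main(void)
{
	/* Demote from node 0: preferred target 2 full, allowed {2,3}. */
	printf("demotion lands on node %d\n", alloc_target(2, 0xc));
	return 0;
}

With the old alloc_demote_page() this situation produced no page and
the page was simply reclaimed; here the demotion lands on node 3.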