From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Huang, Ying"
To: "Aneesh Kumar K.V"
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, Wei Xu, Yang Shi,
	Davidlohr Bueso, Tim C Chen, Michal Hocko,
	Linux Kernel Mailing List, Hesham Almatary, Dave Hansen,
	Jonathan Cameron, Alistair Popple, Dan Williams, Johannes Weiner,
	jvgediya.oss@gmail.com, Jagdish Gediya
Subject: Re: [PATCH v10 7/8] mm/demotion: Demote pages according to
	allocation fallback order
References: <20220720025920.1373558-1-aneesh.kumar@linux.ibm.com>
	<20220720025920.1373558-8-aneesh.kumar@linux.ibm.com>
Date: Tue, 26 Jul 2022 16:24:54 +0800
In-Reply-To: <20220720025920.1373558-8-aneesh.kumar@linux.ibm.com>
	(Aneesh Kumar K.V.'s message of "Wed, 20 Jul 2022 08:29:19 +0530")
Message-ID: <87sfmouvqh.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

"Aneesh Kumar K.V" writes:

> From: Jagdish Gediya
>
> Currently, a higher tier node can only be demoted to selected nodes
> on the next lower tier as defined by the demotion path. This strict,
> hard-coded demotion order does not work in all use cases (e.g. some
> use cases may want to allow cross-socket demotion to another node in
> the same demotion tier as a fallback when the preferred demotion node
> is out of space). This demotion order is also inconsistent with the
> page allocation fallback order when all the nodes in a higher tier
> are out of space: The page allocation can fall back to any node from
> any lower tier, whereas the demotion order doesn't allow that
> currently.
>
> This patch adds support to get all the allowed demotion targets for a
> memory tier. The demote_page_list() function is now modified to
> utilize this allowed node mask as the fallback allocation mask.
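
The fallback policy described above can be sketched in a few lines of
plain C. This is only an illustration, not kernel code: the uint64_t
bitmasks stand in for nodemask_t, the topology is invented, and
pick_demotion_node() and node_is_full() are hypothetical helpers:

  #include <stdint.h>
  #include <stdio.h>

  #define NODE(n)  (UINT64_C(1) << (n))

  /* Pretend the preferred target is full; a real allocation would
     simply fail there and fall back. */
  static int node_is_full(int node)
  {
          return node == 2;
  }

  /* Try the preferred demotion node first, then any other node in the
     allowed lower-tier mask, mirroring the allocation fallback order. */
  static int pick_demotion_node(int preferred, uint64_t allowed)
  {
          int n;

          if ((allowed & NODE(preferred)) && !node_is_full(preferred))
                  return preferred;
          for (n = 0; n < 64; n++)
                  if ((allowed & NODE(n)) && !node_is_full(n))
                          return n;
          return -1;      /* no demotion target left */
  }

  int main(void)
  {
          /* Node 2 (preferred, same socket) and node 3 (cross socket)
             sit on lower tiers, so both are allowed targets. */
          uint64_t allowed = NODE(2) | NODE(3);

          printf("demote to node %d\n", pick_demotion_node(2, allowed));
          return 0;
  }

With node 2 full, this prints "demote to node 3", which is exactly the
cross-socket fallback that the old hard-coded demotion path forbids.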
>
> Signed-off-by: Jagdish Gediya
> Signed-off-by: Aneesh Kumar K.V
> ---
>  include/linux/memory-tiers.h | 11 +++++++
>  mm/memory-tiers.c            | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  mm/vmscan.c                  | 58 +++++++++++++++++++++++++++++++++++++++++++++---------------
>  3 files changed, 106 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> index 852e86bd0a23..0e58588fa066 100644
> --- a/include/linux/memory-tiers.h
> +++ b/include/linux/memory-tiers.h
> @@ -19,11 +19,17 @@
>  extern bool numa_demotion_enabled;
>  #ifdef CONFIG_MIGRATION
>  int next_demotion_node(int node);
> +void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
>  #else
>  static inline int next_demotion_node(int node)
>  {
>  	return NUMA_NO_NODE;
>  }
> +
> +static inline void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> +{
> +	*targets = NODE_MASK_NONE;
> +}
>  #endif
>
>  #else
> @@ -33,5 +39,10 @@ static inline int next_demotion_node(int node)
>  {
>  	return NUMA_NO_NODE;
>  }
> +
> +static inline void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> +{
> +	*targets = NODE_MASK_NONE;
> +}
>  #endif /* CONFIG_NUMA */
>  #endif /* _LINUX_MEMORY_TIERS_H */
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index 4715f9b96a44..4a96e4213d66 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c
> @@ -15,6 +15,7 @@ struct memory_tier {
>  	struct list_head list;
>  	int perf_level;
>  	nodemask_t nodelist;
> +	nodemask_t lower_tier_mask;
>  };
>
>  struct demotion_nodes {
> @@ -153,6 +154,24 @@ static struct memory_tier *__node_get_memory_tier(int node)
>  }
>
>  #ifdef CONFIG_MIGRATION
> +void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> +{
> +	struct memory_tier *memtier;
> +
> +	/*
> +	 * pg_data_t.memtier updates include a synchronize_rcu(),
> +	 * which ensures that we either find NULL or a valid memtier
> +	 * in NODE_DATA. Protect the access via rcu_read_lock().
> +	 */
> +	rcu_read_lock();
> +	memtier = rcu_dereference(pgdat->memtier);
> +	if (memtier)
> +		*targets = memtier->lower_tier_mask;
> +	else
> +		*targets = NODE_MASK_NONE;
> +	rcu_read_unlock();
> +}
> +
>  /**
>   * next_demotion_node() - Get the next node in the demotion path
>   * @node: The starting node to lookup the next node
> @@ -201,10 +220,19 @@ int next_demotion_node(int node)
>  /* Disable reclaim-based migration. */
>  static void __disable_all_migrate_targets(void)
>  {
> +	struct memory_tier *memtier;
>  	int node;
>
> -	for_each_node_state(node, N_MEMORY)
> +	for_each_node_state(node, N_MEMORY) {
>  		node_demotion[node].preferred = NODE_MASK_NONE;
> +		/*
> +		 * We are holding memory_tier_lock, so it is safe
> +		 * to access pgdat->memtier.
> +		 */
> +		memtier = rcu_dereference_check(NODE_DATA(node)->memtier,
> +						lockdep_is_held(&memory_tier_lock));
> +		memtier->lower_tier_mask = NODE_MASK_NONE;
> +	}
>  }
>
>  static void disable_all_migrate_targets(void)
> @@ -230,7 +258,7 @@ static void establish_migration_targets(void)
>  	struct demotion_nodes *nd;
>  	int target = NUMA_NO_NODE, node;
>  	int distance, best_distance;
> -	nodemask_t used;
> +	nodemask_t used, lower_tier = NODE_MASK_NONE;
>
>  	if (!node_demotion || !IS_ENABLED(CONFIG_MIGRATION))
>  		return;
> @@ -276,6 +304,28 @@ static void establish_migration_targets(void)
>  			}
>  		} while (1);
>  	}
> +	/*
> +	 * Now build the lower_tier mask for each node, collecting the
> +	 * node mask from all memory tiers below it. This allows us to
> +	 * fall back demotion page allocation to a set of nodes that is
> +	 * closer to the above selected preferred node.
> +	 */
> +	list_for_each_entry(memtier, &memory_tiers, list)
> +		nodes_or(lower_tier, lower_tier, memtier->nodelist);
> +	/*
> +	 * Removes nodes not yet in N_MEMORY.
> +	 */
> +	nodes_and(lower_tier, node_states[N_MEMORY], lower_tier);

The above code is equivalent to

  lower_tier = node_states[N_MEMORY];

?

> +
> +	list_for_each_entry(memtier, &memory_tiers, list) {
> +		/*
> +		 * Keep removing the current tier from the lower_tier
> +		 * nodes. This will remove all nodes in the current and
> +		 * above memory tiers from the lower_tier mask.
> +		 */
> +		nodes_andnot(lower_tier, lower_tier, memtier->nodelist);
> +		memtier->lower_tier_mask = lower_tier;
> +	}

This is per-memtier instead of per-node, so we need not run this code
for each node?  That is, move the above code out of the for_each_node()
loop?

Best Regards,
Huang, Ying
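
Concretely, the mask arithmetic above can be checked with a standalone
userspace sketch (a minimal illustration, not kernel code: uint64_t
bitmasks stand in for nodemask_t and the three-tier layout is invented):

  #include <stdint.h>
  #include <stdio.h>

  #define NR_TIERS 3

  int main(void)
  {
          /* Fastest tier first, as on the memory_tiers list:
             tier 0 = nodes 0-1, tier 1 = nodes 2-3, tier 2 = node 4. */
          uint64_t tier_nodes[NR_TIERS] = { 0x03, 0x0c, 0x10 };
          uint64_t n_memory = 0x1f;       /* node_states[N_MEMORY] */
          uint64_t lower_tier = 0;
          int t;

          /* OR of every tier's nodelist, masked by N_MEMORY.  When
             every memory node belongs to some tier, this is just
             n_memory, which is the equivalence questioned above. */
          for (t = 0; t < NR_TIERS; t++)
                  lower_tier |= tier_nodes[t];
          lower_tier &= n_memory;

          /* Strip each tier's own nodes before recording its mask, so
             each tier is left with exactly the nodes of the slower
             tiers below it. */
          for (t = 0; t < NR_TIERS; t++) {
                  lower_tier &= ~tier_nodes[t];
                  printf("tier %d: lower_tier_mask 0x%02llx\n",
                         t, (unsigned long long)lower_tier);
          }
          return 0;
  }

It prints 0x1c, 0x10 and 0x00 for tiers 0, 1 and 2, and nothing in it
depends on a particular node, which is why the computation can run once
per rebuild of the tier list rather than once per node.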
> }
> #else
> static inline void disable_all_migrate_targets(void) {}
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 3a8f78277f99..60a5235dd639 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1460,21 +1460,34 @@ static void folio_check_dirty_writeback(struct folio *folio,
>  	mapping->a_ops->is_dirty_writeback(folio, dirty, writeback);
>  }
>
> -static struct page *alloc_demote_page(struct page *page, unsigned long node)
> +static struct page *alloc_demote_page(struct page *page, unsigned long private)
>  {
> -	struct migration_target_control mtc = {
> -		/*
> -		 * Allocate from 'node', or fail quickly and quietly.
> -		 * When this happens, 'page' will likely just be discarded
> -		 * instead of migrated.
> -		 */
> -		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> -			    __GFP_THISNODE | __GFP_NOWARN |
> -			    __GFP_NOMEMALLOC | GFP_NOWAIT,
> -		.nid = node
> -	};
> +	struct page *target_page;
> +	nodemask_t *allowed_mask;
> +	struct migration_target_control *mtc;
> +
> +	mtc = (struct migration_target_control *)private;
> +
> +	allowed_mask = mtc->nmask;
> +	/*
> +	 * make sure we allocate from the target node first also trying to
> +	 * reclaim pages from the target node via kswapd if we are low on
           ~~~~~~~
           demote or reclaim
> +	 * free memory on target node. If we don't do this and if we have low
                                                           ~~~~~~~~~~~~~~~~~~
> +	 * free memory on the target memtier, we would start allocating pages
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           and if we have free memory on the slower(lower) memtier,
> +	 * from higher memory tiers without even forcing a demotion of cold
                ~~~~~~
                slower(lower)
> +	 * pages from the target memtier. This can result in the kernel placing
                                 ~~~~~~~
                                 node
> +	 * hotpages in higher memory tiers.
           ~~~~~~~~    ~~~~~~
           hot pages   slower(lower)

Best Regards,
Huang, Ying

> +	 */
> +	mtc->nmask = NULL;
> +	mtc->gfp_mask |= __GFP_THISNODE;
> +	target_page = alloc_migration_target(page, (unsigned long)mtc);
> +	if (target_page)
> +		return target_page;
>
> -	return alloc_migration_target(page, (unsigned long)&mtc);
> +	mtc->gfp_mask &= ~__GFP_THISNODE;
> +	mtc->nmask = allowed_mask;
> +
> +	return alloc_migration_target(page, (unsigned long)mtc);
>  }
>
>  /*
> @@ -1487,6 +1500,19 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
>  {
>  	int target_nid = next_demotion_node(pgdat->node_id);
>  	unsigned int nr_succeeded;
> +	nodemask_t allowed_mask;
> +
> +	struct migration_target_control mtc = {
> +		/*
> +		 * Allocate from 'node', or fail quickly and quietly.
> +		 * When this happens, 'page' will likely just be discarded
> +		 * instead of migrated.
> +		 */
> +		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | __GFP_NOWARN |
> +			__GFP_NOMEMALLOC | GFP_NOWAIT,
> +		.nid = target_nid,
> +		.nmask = &allowed_mask
> +	};
>
>  	if (list_empty(demote_pages))
>  		return 0;
>
> @@ -1494,10 +1520,12 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
>  	if (target_nid == NUMA_NO_NODE)
>  		return 0;
>
> +	node_get_allowed_targets(pgdat, &allowed_mask);
> +
>  	/* Demotion ignores all cpuset and mempolicy settings */
>  	migrate_pages(demote_pages, alloc_demote_page, NULL,
> -		      target_nid, MIGRATE_ASYNC, MR_DEMOTION,
> -		      &nr_succeeded);
> +		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
> +		      &nr_succeeded);
>
>  	if (current_is_kswapd())
>  		__count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
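
The two-pass strategy that alloc_demote_page() implements above can be
reduced to a standalone C sketch; try_alloc(), the lone GFP_THISNODE
flag bit, and struct target_control are invented stand-ins for
alloc_migration_target(), the real gfp flags, and struct
migration_target_control:

  #include <stdint.h>
  #include <stdio.h>

  #define GFP_THISNODE 0x1u

  struct target_control {
          unsigned int gfp;       /* allocation constraints */
          int nid;                /* preferred target node */
          uint64_t *nmask;        /* allowed fallback nodes, or NULL */
  };

  /* Pretend allocator: the preferred node 2 is full, anything else
     succeeds. */
  static void *try_alloc(const struct target_control *tc)
  {
          if (tc->gfp & GFP_THISNODE)
                  return tc->nid == 2 ? NULL : (void *)1;
          return (tc->nmask && *tc->nmask) ? (void *)1 : NULL;
  }

  static void *alloc_demote(struct target_control *tc)
  {
          uint64_t *allowed = tc->nmask;
          void *page;

          /* Pass 1: the preferred node only, so a shortage there is
             felt on that node instead of silently spilling away. */
          tc->nmask = NULL;
          tc->gfp |= GFP_THISNODE;
          page = try_alloc(tc);
          if (page)
                  return page;

          /* Pass 2: any node in the allowed lower-tier mask. */
          tc->gfp &= ~GFP_THISNODE;
          tc->nmask = allowed;
          return try_alloc(tc);
  }

  int main(void)
  {
          uint64_t allowed = 0x8;         /* node 3 */
          struct target_control tc = { .gfp = 0, .nid = 2, .nmask = &allowed };

          printf("allocation %s\n",
                 alloc_demote(&tc) ? "succeeded via fallback" : "failed");
          return 0;
  }

As the quoted comment explains, keeping the first pass node-local lets a
full preferred node trigger reclaim or demotion there, rather than
having every demotion quietly land on some other lower-tier node.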