From: "Huang, Ying" <ying.huang@intel.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: linux-mm@kvack.org,  akpm@linux-foundation.org,
	 Wei Xu <weixugc@google.com>,  Yang Shi <shy828301@gmail.com>,
	 Davidlohr Bueso <dave@stgolabs.net>,
	 Tim C Chen <tim.c.chen@intel.com>,
	 Michal Hocko <mhocko@kernel.org>,
	 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	 Hesham Almatary <hesham.almatary@huawei.com>,
	 Dave Hansen <dave.hansen@intel.com>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	 Alistair Popple <apopple@nvidia.com>,
	 Dan Williams <dan.j.williams@intel.com>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	 jvgediya.oss@gmail.com,  Jagdish Gediya <jvgediya@linux.ibm.com>
Subject: Re: [PATCH v10 7/8] mm/demotion: Demote pages according to allocation fallback order
Date: Tue, 26 Jul 2022 16:24:54 +0800
Message-ID: <87sfmouvqh.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <20220720025920.1373558-8-aneesh.kumar@linux.ibm.com> (Aneesh Kumar K. V.'s message of "Wed, 20 Jul 2022 08:29:19 +0530")

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:

> From: Jagdish Gediya <jvgediya.oss@gmail.com>
>
> Currently, a higher tier node can only be demoted to selected
> nodes on the next lower tier as defined by the demotion path.
> This strict, hard-coded demotion order does not work in all
> use cases (e.g. some use cases may want to allow cross-socket
> demotion to another node in the same demotion tier as a fallback
> when the preferred demotion node is out of space). This demotion
> order is also inconsistent with the page allocation fallback order
> when all the nodes in a higher tier are out of space: The page
> allocation can fall back to any node from any lower tier, whereas
> the demotion order doesn't allow that currently.
>
> This patch adds support to get all the allowed demotion targets
> for a memory tier. The demote_page_list() function is now modified
> to utilize this allowed node mask as the fallback allocation mask.
>
> Signed-off-by: Jagdish Gediya <jvgediya@linux.ibm.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>  include/linux/memory-tiers.h | 11 +++++++
>  mm/memory-tiers.c            | 54 +++++++++++++++++++++++++++++++--
>  mm/vmscan.c                  | 58 ++++++++++++++++++++++++++----------
>  3 files changed, 106 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> index 852e86bd0a23..0e58588fa066 100644
> --- a/include/linux/memory-tiers.h
> +++ b/include/linux/memory-tiers.h
> @@ -19,11 +19,17 @@
>  extern bool numa_demotion_enabled;
>  #ifdef CONFIG_MIGRATION
>  int next_demotion_node(int node);
> +void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
>  #else
>  static inline int next_demotion_node(int node)
>  {
>  	return NUMA_NO_NODE;
>  }
> +
> +static inline void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> +{
> +	*targets = NODE_MASK_NONE;
> +}
>  #endif
>  
>  #else
> @@ -33,5 +39,10 @@ static inline int next_demotion_node(int node)
>  {
>  	return NUMA_NO_NODE;
>  }
> +
> +static inline void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> +{
> +	*targets = NODE_MASK_NONE;
> +}
>  #endif	/* CONFIG_NUMA */
>  #endif  /* _LINUX_MEMORY_TIERS_H */
> diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> index 4715f9b96a44..4a96e4213d66 100644
> --- a/mm/memory-tiers.c
> +++ b/mm/memory-tiers.c
> @@ -15,6 +15,7 @@ struct memory_tier {
>  	struct list_head list;
>  	int perf_level;
>  	nodemask_t nodelist;
> +	nodemask_t lower_tier_mask;
>  };
>  
>  struct demotion_nodes {
> @@ -153,6 +154,24 @@ static struct memory_tier *__node_get_memory_tier(int node)
>  }
>  
>  #ifdef CONFIG_MIGRATION
> +void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> +{
> +	struct memory_tier *memtier;
> +
> +	/*
> +	 * pg_data_t.memtier updates include a synchronize_rcu(),
> +	 * which ensures that we either find NULL or a valid memtier
> +	 * in NODE_DATA. Protect the access via rcu_read_lock().
> +	 */
> +	rcu_read_lock();
> +	memtier = rcu_dereference(pgdat->memtier);
> +	if (memtier)
> +		*targets = memtier->lower_tier_mask;
> +	else
> +		*targets = NODE_MASK_NONE;
> +	rcu_read_unlock();
> +}
> +
>  /**
>   * next_demotion_node() - Get the next node in the demotion path
>   * @node: The starting node to lookup the next node
> @@ -201,10 +220,19 @@ int next_demotion_node(int node)
>  /* Disable reclaim-based migration. */
>  static void __disable_all_migrate_targets(void)
>  {
> +	struct memory_tier *memtier;
>  	int node;
>  
> -	for_each_node_state(node, N_MEMORY)
> +	for_each_node_state(node, N_MEMORY) {
>  		node_demotion[node].preferred = NODE_MASK_NONE;
> +		/*
> +		 * We are holding memory_tier_lock, so it is safe
> +		 * to access pgdat->memtier.
> +		 */
> +		memtier = rcu_dereference_check(NODE_DATA(node)->memtier,
> +						lockdep_is_held(&memory_tier_lock));
> +		memtier->lower_tier_mask = NODE_MASK_NONE;
> +	}
>  }
>  
>  static void disable_all_migrate_targets(void)
> @@ -230,7 +258,7 @@ static void establish_migration_targets(void)
>  	struct demotion_nodes *nd;
>  	int target = NUMA_NO_NODE, node;
>  	int distance, best_distance;
> -	nodemask_t used;
> +	nodemask_t used, lower_tier = NODE_MASK_NONE;
>  
>  	if (!node_demotion || !IS_ENABLED(CONFIG_MIGRATION))
>  		return;
> @@ -276,6 +304,28 @@ static void establish_migration_targets(void)
>  			}
>  		} while (1);
>  	}
> +	/*
> +	 * Now build the lower_tier mask for each node, collecting the node
> +	 * mask from all memory tiers below it. This allows us to fall back
> +	 * demotion page allocation to a set of nodes that is closer to the
> +	 * above selected preferred node.
> +	 */
> +	list_for_each_entry(memtier, &memory_tiers, list)
> +		nodes_or(lower_tier, lower_tier, memtier->nodelist);
> +	/*
> +	 * Removes nodes not yet in N_MEMORY.
> +	 */
> +	nodes_and(lower_tier, node_states[N_MEMORY], lower_tier);

The above code is equivalent to

        lower_tier = node_states[N_MEMORY];

?
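
For reference, a minimal sketch of the reasoning behind that question,
assuming every N_MEMORY node has already been assigned to some memory
tier at this point:

        /*
         * The union of all memtier nodelists then already covers every
         * node in N_MEMORY, so AND-ing it with node_states[N_MEMORY]
         * just yields node_states[N_MEMORY] back:
         */
        lower_tier = node_states[N_MEMORY];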

> +
> +	list_for_each_entry(memtier, &memory_tiers, list) {
> +		/*
> +		 * Keep removing the current tier from lower_tier nodes.
> +		 * This will remove all nodes in the current and above
> +		 * memory tiers from the lower_tier mask.
> +		 */
> +		nodes_andnot(lower_tier, lower_tier, memtier->nodelist);
> +		memtier->lower_tier_mask = lower_tier;
> +	}

This is per-memtier instead of per-node.  So we need not run this code
for each node?  That is, move the above code out of the
for_each_node() loop?
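
A rough sketch of that restructuring, keeping the existing variable
names and locking (illustrative only, not tested):

        for_each_node_state(node, N_MEMORY) {
                /* existing per-node preferred-target selection ... */
        }

        /*
         * lower_tier_mask only depends on per-memtier state, so one
         * pass over memory_tiers after the node loop is enough.
         */
        lower_tier = node_states[N_MEMORY];
        list_for_each_entry(memtier, &memory_tiers, list) {
                nodes_andnot(lower_tier, lower_tier, memtier->nodelist);
                memtier->lower_tier_mask = lower_tier;
        }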

Best Regards,
Huang, Ying

>  }
>  #else
>  static inline void disable_all_migrate_targets(void) {}
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 3a8f78277f99..60a5235dd639 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1460,21 +1460,34 @@ static void folio_check_dirty_writeback(struct folio *folio,
>  		mapping->a_ops->is_dirty_writeback(folio, dirty, writeback);
>  }
>  
> -static struct page *alloc_demote_page(struct page *page, unsigned long node)
> +static struct page *alloc_demote_page(struct page *page, unsigned long private)
>  {
> -	struct migration_target_control mtc = {
> -		/*
> -		 * Allocate from 'node', or fail quickly and quietly.
> -		 * When this happens, 'page' will likely just be discarded
> -		 * instead of migrated.
> -		 */
> -		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> -			    __GFP_THISNODE  | __GFP_NOWARN |
> -			    __GFP_NOMEMALLOC | GFP_NOWAIT,
> -		.nid = node
> -	};
> +	struct page *target_page;
> +	nodemask_t *allowed_mask;
> +	struct migration_target_control *mtc;
> +
> +	mtc = (struct migration_target_control *)private;
> +
> +	allowed_mask = mtc->nmask;
> +	/*
> +	 * make sure we allocate from the target node first also trying to
> +	 * reclaim pages from the target node via kswapd if we are low on
           ~~~~~~~

demote or reclaim

> +	 * free memory on target node. If we don't do this and if we have low
                                                           ~~~~~~~~~~~~~~~~~~
> +	 * free memory on the target memtier, we would start allocating pages
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

and if we have free memory on the slower(lower) memtier,

> +	 * from higher memory tiers without even forcing a demotion of cold
                ~~~~~~

slower(lower)

> +	 * pages from the target memtier. This can result in the kernel placing
                                 ~~~~~~~

node

> +	 * hotpages in higher memory tiers.
           ~~~~~~~~    ~~~~~~

hot pages

slower(lower)
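
Putting these wording suggestions together, the comment could read
roughly as follows (a sketch only, the exact phrasing is up to you):

        /*
         * Make sure we allocate from the target node first, also trying
         * to demote or reclaim pages from the target node via kswapd if
         * we are low on free memory on the target node.  If we don't do
         * this and if we have free memory on the slower(lower) memtier,
         * we would start allocating pages from slower(lower) memory
         * tiers without even forcing a demotion of cold pages from the
         * target node.  This can result in the kernel placing hot pages
         * in slower(lower) memory tiers.
         */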

Best Regards,
Huang, Ying

> +	 */
> +	mtc->nmask = NULL;
> +	mtc->gfp_mask |= __GFP_THISNODE;
> +	target_page = alloc_migration_target(page, (unsigned long)mtc);
> +	if (target_page)
> +		return target_page;
>  
> -	return alloc_migration_target(page, (unsigned long)&mtc);
> +	mtc->gfp_mask &= ~__GFP_THISNODE;
> +	mtc->nmask = allowed_mask;
> +
> +	return alloc_migration_target(page, (unsigned long)mtc);
>  }
>  
>  /*
> @@ -1487,6 +1500,19 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
>  {
>  	int target_nid = next_demotion_node(pgdat->node_id);
>  	unsigned int nr_succeeded;
> +	nodemask_t allowed_mask;
> +
> +	struct migration_target_control mtc = {
> +		/*
> +		 * Allocate from 'node', or fail quickly and quietly.
> +		 * When this happens, 'page' will likely just be discarded
> +		 * instead of migrated.
> +		 */
> +		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | __GFP_NOWARN |
> +			__GFP_NOMEMALLOC | GFP_NOWAIT,
> +		.nid = target_nid,
> +		.nmask = &allowed_mask
> +	};
>  
>  	if (list_empty(demote_pages))
>  		return 0;
> @@ -1494,10 +1520,12 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
>  	if (target_nid == NUMA_NO_NODE)
>  		return 0;
>  
> +	node_get_allowed_targets(pgdat, &allowed_mask);
> +
>  	/* Demotion ignores all cpuset and mempolicy settings */
>  	migrate_pages(demote_pages, alloc_demote_page, NULL,
> -			    target_nid, MIGRATE_ASYNC, MR_DEMOTION,
> -			    &nr_succeeded);
> +		      (unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
> +		      &nr_succeeded);
>  
>  	if (current_is_kswapd())
>  		__count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);

