From: Bharata B Rao <bharata@amd.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>, akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, shy828301@gmail.com,
	david@redhat.com, ying.huang@intel.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/4] mm: migrate: move the numamigrate_isolate_page() into do_numa_page()
Date: Tue, 22 Aug 2023 14:32:25 +0530	[thread overview]
Message-ID: <95d72acc-c4fa-cd71-a27f-113f0c2a8649@amd.com> (raw)
In-Reply-To: <9ff2a9e3e644103a08b9b84b76b39bbd4c60020b.1692665449.git.baolin.wang@linux.alibaba.com>

On 22-Aug-23 6:23 AM, Baolin Wang wrote:
> Move the numamigrate_isolate_page() into do_numa_page() to simplify the
> migrate_misplaced_page(), which now only focuses on page migration, and
> it also serves as a preparation for supporting batch migration for
> migrate_misplaced_page().
> 
> While we are at it, change the numamigrate_isolate_page() to boolean
> type to make the return value more clear.
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  include/linux/migrate.h |  6 ++++++
>  mm/huge_memory.c        |  7 +++++++
>  mm/memory.c             |  7 +++++++
>  mm/migrate.c            | 22 +++++++---------------
>  4 files changed, 27 insertions(+), 15 deletions(-)
> 
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 711dd9412561..ddcd62ec2c12 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -144,12 +144,18 @@ const struct movable_operations *page_movable_ops(struct page *page)
>  #ifdef CONFIG_NUMA_BALANCING
>  int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  			   int node);
> +bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page);
>  #else
>  static inline int migrate_misplaced_page(struct page *page,
>  					 struct vm_area_struct *vma, int node)
>  {
>  	return -EAGAIN; /* can't migrate now */
>  }
> +
> +static inline bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> +{
> +	return false;
> +}
>  #endif /* CONFIG_NUMA_BALANCING */
>  
>  #ifdef CONFIG_MIGRATION
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4a9b34a89854..07149ead11e4 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1496,6 +1496,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
>  	bool migrated = false, writable = false;
>  	int flags = 0;
> +	pg_data_t *pgdat;
>  
>  	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>  	if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
> @@ -1545,6 +1546,12 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  		goto migrate_fail;
>  	}
>  
> +	pgdat = NODE_DATA(target_nid);
> +	if (!numamigrate_isolate_page(pgdat, page)) {
> +		put_page(page);
> +		goto migrate_fail;
> +	}
> +
>  	migrated = migrate_misplaced_page(page, vma, target_nid);
>  	if (migrated) {
>  		flags |= TNF_MIGRATED;
> diff --git a/mm/memory.c b/mm/memory.c
> index fc6f6b7a70e1..4e451b041488 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4769,6 +4769,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	int target_nid;
>  	pte_t pte, old_pte;
>  	int flags = 0;
> +	pg_data_t *pgdat;
>  
>  	/*
>  	 * The "pte" at this point cannot be used safely without
> @@ -4844,6 +4845,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  		goto migrate_fail;
>  	}
>  
> +	pgdat = NODE_DATA(target_nid);
> +	if (!numamigrate_isolate_page(pgdat, page)) {
> +		put_page(page);
> +		goto migrate_fail;
> +	}
> +
>  	/* Migrate to the requested node */
>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>  		page_nid = target_nid;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 9cc98fb1d6ec..0b2b69a2a7ab 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2478,7 +2478,7 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
>  	return __folio_alloc_node(gfp, order, nid);
>  }
>  
> -static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> +bool numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  {
>  	int nr_pages = thp_nr_pages(page);
>  	int order = compound_order(page);
> @@ -2496,11 +2496,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  				break;
>  		}

There is another s/return 0/return false/ change required here, for this chunk:

	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
		return 0;
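
i.e., with the bool conversion that would become:

	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
		return false;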

>  		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
> -		return 0;
> +		return false;
>  	}

It looks like this whole section under the "Avoid migrating to a node that is
nearly full" check could be moved to numa_page_can_migrate(), since it can be
considered one more check (or action) to see whether the page can be migrated.
After that, numamigrate_isolate_page() would truly be about isolating the page.
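
Untested sketch of what I mean (assuming the numa_page_can_migrate() from
patch 1/4 can be passed the target nid, and modulo where
migrate_balanced_pgdat() ends up living):

static bool numa_page_can_migrate(struct vm_area_struct *vma,
				  struct page *page, int target_nid)
{
	pg_data_t *pgdat = NODE_DATA(target_nid);

	/* ... existing validation checks factored out in patch 1/4 ... */

	/* Avoid migrating to a node that is nearly full */
	if (!migrate_balanced_pgdat(pgdat, thp_nr_pages(page))) {
		int z;

		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
			return false;
		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
			if (managed_zone(pgdat->node_zones + z))
				break;
		}
		wakeup_kswapd(pgdat->node_zones + z, 0,
			      compound_order(page), ZONE_MOVABLE);
		return false;
	}

	return true;
}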

Regards,
Bharata.


Thread overview: 11+ messages
2023-08-22  0:53 [PATCH v2 0/4] Extend migrate_misplaced_page() to support batch migration Baolin Wang
2023-08-22  0:53 ` [PATCH v2 1/4] mm: migrate: factor out migration validation into numa_page_can_migrate() Baolin Wang
2023-08-22  0:53 ` [PATCH v2 2/4] mm: migrate: move the numamigrate_isolate_page() into do_numa_page() Baolin Wang
2023-08-22  9:02   ` Bharata B Rao [this message]
2023-08-24  3:14     ` Baolin Wang
2023-08-22  0:53 ` [PATCH v2 3/4] mm: migrate: change migrate_misplaced_page() to support multiple pages migration Baolin Wang
2023-08-22  0:53 ` [PATCH v2 4/4] mm: migrate: change to return the number of pages migrated successfully Baolin Wang
2023-08-22  2:47 ` [PATCH v2 0/4] Extend migrate_misplaced_page() to support batch migration Huang, Ying
2023-08-24  3:13   ` Baolin Wang
2023-08-24  4:51     ` Huang, Ying
2023-08-24  6:26       ` Baolin Wang
