linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: <akpm@linux-foundation.org>,  <mgorman@techsingularity.net>,
	<shy828301@gmail.com>,  <david@redhat.com>,  <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/4] mm: migrate: move migration validation into numa_migrate_prep()
Date: Mon, 21 Aug 2023 10:20:01 +0800	[thread overview]
Message-ID: <87h6otdtm6.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <a37b13dd91bd3eadcd56a08cb3c839616f8457e7.1692440586.git.baolin.wang@linux.alibaba.com> (Baolin Wang's message of "Sat, 19 Aug 2023 18:52:34 +0800")

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> There are now 3 places that validate whether a page can be migrated, and
> some of these validations are performed late, which wastes CPU cycles on
> calling numa_migrate_prep().
>
> Thus, move all of the migration validation into numa_migrate_prep(),
> which is more maintainable and also saves some CPU cycles. Another
> benefit is that it serves as preparation for supporting batch migration
> in do_numa_page() in the future.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/memory.c  | 19 +++++++++++++++++++
>  mm/migrate.c | 19 -------------------
>  2 files changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index d003076b218d..bee9b1e86ef0 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4747,6 +4747,25 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
>  		*flags |= TNF_FAULT_LOCAL;
>  	}
>  
> +	/*
> +	 * Don't migrate file pages that are mapped in multiple processes
> +	 * with execute permissions as they are probably shared libraries.
> +	 */
> +	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
> +	    (vma->vm_flags & VM_EXEC))
> +		return NUMA_NO_NODE;
> +
> +	/*
> +	 * Also do not migrate dirty pages as not all filesystems can move
> +	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
> +	 */
> +	if (page_is_file_lru(page) && PageDirty(page))
> +		return NUMA_NO_NODE;
> +
> +	/* Do not migrate THP mapped by multiple processes */
> +	if (PageTransHuge(page) && total_mapcount(page) > 1)
> +		return NUMA_NO_NODE;
> +
>  	return mpol_misplaced(page, vma, addr);

In mpol_misplaced()->should_numa_migrate_memory(), the accessing CPU and
PID are recorded.  So the code change above introduces a behavior
change: pages rejected by these early checks no longer have their access
history recorded.

How about moving these checks into a separate function that is called
between numa_migrate_prep() and migrate_misplaced_page(), after
unlocking the PTL?

--
Best Regards,
Huang, Ying

>  }
>  
> diff --git a/mm/migrate.c b/mm/migrate.c
> index e21d5a7e7447..9cc98fb1d6ec 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2485,10 +2485,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  
>  	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>  
> -	/* Do not migrate THP mapped by multiple processes */
> -	if (PageTransHuge(page) && total_mapcount(page) > 1)
> -		return 0;
> -
>  	/* Avoid migrating to a node that is nearly full */
>  	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>  		int z;
> @@ -2533,21 +2529,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  	LIST_HEAD(migratepages);
>  	int nr_pages = thp_nr_pages(page);
>  
> -	/*
> -	 * Don't migrate file pages that are mapped in multiple processes
> -	 * with execute permissions as they are probably shared libraries.
> -	 */
> -	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
> -	    (vma->vm_flags & VM_EXEC))
> -		goto out;
> -
> -	/*
> -	 * Also do not migrate dirty pages as not all filesystems can move
> -	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
> -	 */
> -	if (page_is_file_lru(page) && PageDirty(page))
> -		goto out;
> -
>  	isolated = numamigrate_isolate_page(pgdat, page);
>  	if (!isolated)
>  		goto out;



Thread overview: 11+ messages
2023-08-19 10:52 [PATCH 0/4] Extend migrate_misplaced_page() to support batch migration Baolin Wang
2023-08-19 10:52 ` [PATCH 1/4] mm: migrate: move migration validation into numa_migrate_prep() Baolin Wang
2023-08-21  2:20   ` Huang, Ying [this message]
2023-08-21  7:52     ` Baolin Wang
2023-08-19 10:52 ` [PATCH 2/4] mm: migrate: move the numamigrate_isolate_page() into do_numa_page() Baolin Wang
2023-08-19 10:52 ` [PATCH 3/4] mm: migrate: change migrate_misplaced_page() to support multiple pages migration Baolin Wang
2023-08-19 10:52 ` [PATCH 4/4] mm: migrate: change to return the number of pages migrated successfully Baolin Wang
2023-08-21  2:29 ` [PATCH 0/4] Extend migrate_misplaced_page() to support batch migration Huang, Ying
2023-08-21  8:10   ` Baolin Wang
2023-08-21  8:41     ` Huang, Ying
2023-08-21  8:50       ` Baolin Wang
