From: Zi Yan <ziy@nvidia.com>
To: Huang Ying <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Yang Shi <shy828301@gmail.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Oscar Salvador <osalvador@suse.de>,
	Matthew Wilcox <willy@infradead.org>,
	Bharata B Rao <bharata@amd.com>,
	Alistair Popple <apopple@nvidia.com>,
	haoxin <xhao@linux.alibaba.com>
Subject: Re: [PATCH 3/8] migrate_pages: restrict number of pages to migrate in batch
Date: Tue, 03 Jan 2023 13:40:00 -0500
Message-ID: <761F148B-555B-4C51-8A1E-F17ABA85D014@nvidia.com>
In-Reply-To: <20221227002859.27740-4-ying.huang@intel.com>


On 26 Dec 2022, at 19:28, Huang Ying wrote:

> This is a preparation patch to batch the folio unmapping and moving
> for non-hugetlb folios.
>
> Once the folio unmapping is batched, all folios to be migrated will be
> unmapped before their contents and flags are copied.  If the list of
> folios passed to migrate_pages() covers too many pages, the affected
> processes would be stopped for too long, causing excessive latency.
> For example, the migrate_pages() syscall calls migrate_pages() with
> all folios of a process.  To avoid this, this patch restricts the
> number of pages migrated in one batch to at most HPAGE_PMD_NR, so the
> impact is at the same level as a THP migration.
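A quick sanity check on the numbers, if I am reading the defaults right:
with 4KB base pages and 2MB PMDs, HPAGE_PMD_NR = 2MB / 4KB = 512, so one
call into the batched path handles at most 2MB worth of base pages before
control returns to the outer loop -- the same worst case as migrating a
single PMD-sized THP, which matches the stated intent.
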
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Bharata B Rao <bharata@amd.com>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: haoxin <xhao@linux.alibaba.com>
> ---
>  mm/migrate.c | 173 +++++++++++++++++++++++++++++++--------------------
>  1 file changed, 106 insertions(+), 67 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index bdbe73fe2eb7..97ea0737ab2b 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1485,40 +1485,15 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
>  	return rc;
>  }
>
> -/*
> - * migrate_pages - migrate the folios specified in a list, to the free folios
> - *		   supplied as the target for the page migration
> - *
> - * @from:		The list of folios to be migrated.
> - * @get_new_page:	The function used to allocate free folios to be used
> - *			as the target of the folio migration.
> - * @put_new_page:	The function used to free target folios if migration
> - *			fails, or NULL if no special handling is necessary.
> - * @private:		Private data to be passed on to get_new_page()
> - * @mode:		The migration mode that specifies the constraints for
> - *			folio migration, if any.
> - * @reason:		The reason for folio migration.
> - * @ret_succeeded:	Set to the number of folios migrated successfully if
> - *			the caller passes a non-NULL pointer.
> - *
> - * The function returns after 10 attempts or if no folios are movable any more
> - * because the list has become empty or no retryable folios exist any more.
> - * It is caller's responsibility to call putback_movable_pages() to return folios
> - * to the LRU or free list only if ret != 0.
> - *
> - * Returns the number of {normal folio, large folio, hugetlb} that were not
> - * migrated, or an error code. The number of large folio splits will be
> - * considered as the number of non-migrated large folio, no matter how many
> - * split folios of the large folio are migrated successfully.
> - */
> -int migrate_pages(struct list_head *from, new_page_t get_new_page,
> +static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>  		free_page_t put_new_page, unsigned long private,
> -		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
> +		enum migrate_mode mode, int reason, struct list_head *ret_folios,
> +		struct migrate_pages_stats *stats)
>  {
>  	int retry = 1;
>  	int large_retry = 1;
>  	int thp_retry = 1;
> -	int nr_failed;
> +	int nr_failed = 0;
>  	int nr_retry_pages = 0;
>  	int nr_large_failed = 0;
>  	int pass = 0;
> @@ -1526,20 +1501,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	bool is_thp = false;
>  	struct folio *folio, *folio2;
>  	int rc, nr_pages;
> -	LIST_HEAD(ret_folios);
>  	LIST_HEAD(split_folios);
>  	bool nosplit = (reason == MR_NUMA_MISPLACED);
>  	bool no_split_folio_counting = false;
> -	struct migrate_pages_stats stats;
> -
> -	trace_mm_migrate_pages_start(mode, reason);
> -
> -	memset(&stats, 0, sizeof(stats));
> -	rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason,
> -			      &stats, &ret_folios);
> -	if (rc < 0)
> -		goto out;
> -	nr_failed = rc;
>
>  split_folio_migration:
>  	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
> @@ -1549,11 +1513,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  		nr_retry_pages = 0;
>
>  		list_for_each_entry_safe(folio, folio2, from, lru) {
> -			if (folio_test_hugetlb(folio)) {
> -				list_move_tail(&folio->lru, &ret_folios);
> -				continue;
> -			}
> -
>  			/*
>  			 * Large folio statistics is based on the source large
>  			 * folio. Capture required information that might get
> @@ -1567,15 +1526,14 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>
>  			rc = unmap_and_move(get_new_page, put_new_page,
>  					    private, folio, pass > 2, mode,
> -					    reason, &ret_folios);
> +					    reason, ret_folios);
>  			/*
>  			 * The rules are:
>  			 *	Success: folio will be freed
>  			 *	-EAGAIN: stay on the from list
>  			 *	-ENOMEM: stay on the from list
>  			 *	-ENOSYS: stay on the from list
> -			 *	Other errno: put on ret_folios list then splice to
> -			 *		     from list
> +			 *	Other errno: put on ret_folios list
>  			 */
>  			switch(rc) {
>  			/*
> @@ -1592,17 +1550,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				/* Large folio migration is unsupported */
>  				if (is_large) {
>  					nr_large_failed++;
> -					stats.nr_thp_failed += is_thp;
> +					stats->nr_thp_failed += is_thp;
>  					if (!try_split_folio(folio, &split_folios)) {
> -						stats.nr_thp_split += is_thp;
> +						stats->nr_thp_split += is_thp;
>  						break;
>  					}
>  				} else if (!no_split_folio_counting) {
>  					nr_failed++;
>  				}
>
> -				stats.nr_failed_pages += nr_pages;
> -				list_move_tail(&folio->lru, &ret_folios);
> +				stats->nr_failed_pages += nr_pages;
> +				list_move_tail(&folio->lru, ret_folios);
>  				break;
>  			case -ENOMEM:
>  				/*
> @@ -1611,13 +1569,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				 */
>  				if (is_large) {
>  					nr_large_failed++;
> -					stats.nr_thp_failed += is_thp;
> +					stats->nr_thp_failed += is_thp;
>  					/* Large folio NUMA faulting doesn't split to retry. */
>  					if (!nosplit) {
>  						int ret = try_split_folio(folio, &split_folios);
>
>  						if (!ret) {
> -							stats.nr_thp_split += is_thp;
> +							stats->nr_thp_split += is_thp;
>  							break;
>  						} else if (reason == MR_LONGTERM_PIN &&
>  							   ret == -EAGAIN) {
> @@ -1635,17 +1593,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  					nr_failed++;
>  				}
>
> -				stats.nr_failed_pages += nr_pages + nr_retry_pages;
> +				stats->nr_failed_pages += nr_pages + nr_retry_pages;
>  				/*
>  				 * There might be some split folios of fail-to-migrate large
> -				 * folios left in split_folios list. Move them back to migration
> +				 * folios left in split_folios list. Move them to ret_folios
>  				 * list so that they could be put back to the right list by
>  				 * the caller otherwise the folio refcnt will be leaked.
>  				 */
> -				list_splice_init(&split_folios, from);
> +				list_splice_init(&split_folios, ret_folios);
>  				/* nr_failed isn't updated for not used */
>  				nr_large_failed += large_retry;
> -				stats.nr_thp_failed += thp_retry;
> +				stats->nr_thp_failed += thp_retry;
>  				goto out;
>  			case -EAGAIN:
>  				if (is_large) {
> @@ -1657,8 +1615,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				nr_retry_pages += nr_pages;
>  				break;
>  			case MIGRATEPAGE_SUCCESS:
> -				stats.nr_succeeded += nr_pages;
> -				stats.nr_thp_succeeded += is_thp;
> +				stats->nr_succeeded += nr_pages;
> +				stats->nr_thp_succeeded += is_thp;
>  				break;
>  			default:
>  				/*
> @@ -1669,20 +1627,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  				 */
>  				if (is_large) {
>  					nr_large_failed++;
> -					stats.nr_thp_failed += is_thp;
> +					stats->nr_thp_failed += is_thp;
>  				} else if (!no_split_folio_counting) {
>  					nr_failed++;
>  				}
>
> -				stats.nr_failed_pages += nr_pages;
> +				stats->nr_failed_pages += nr_pages;
>  				break;
>  			}
>  		}
>  	}
>  	nr_failed += retry;
>  	nr_large_failed += large_retry;
> -	stats.nr_thp_failed += thp_retry;
> -	stats.nr_failed_pages += nr_retry_pages;
> +	stats->nr_thp_failed += thp_retry;
> +	stats->nr_failed_pages += nr_retry_pages;
>  	/*
>  	 * Try to migrate split folios of fail-to-migrate large folios, no
>  	 * nr_failed counting in this round, since all split folios of a
> @@ -1693,7 +1651,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  		 * Move non-migrated folios (after 10 retries) to ret_folios
>  		 * to avoid migrating them again.
>  		 */
> -		list_splice_init(from, &ret_folios);
> +		list_splice_init(from, ret_folios);
>  		list_splice_init(&split_folios, from);
>  		no_split_folio_counting = true;
>  		retry = 1;
> @@ -1701,6 +1659,87 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	}
>
>  	rc = nr_failed + nr_large_failed;
> +out:
> +	return rc;
> +}
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +#define NR_MAX_BATCHED_MIGRATION	HPAGE_PMD_NR
> +#else
> +#define NR_MAX_BATCHED_MIGRATION	512
> +#endif
> +
> +/*
> + * migrate_pages - migrate the folios specified in a list, to the free folios
> + *		   supplied as the target for the page migration
> + *
> + * @from:		The list of folios to be migrated.
> + * @get_new_page:	The function used to allocate free folios to be used
> + *			as the target of the folio migration.
> + * @put_new_page:	The function used to free target folios if migration
> + *			fails, or NULL if no special handling is necessary.
> + * @private:		Private data to be passed on to get_new_page()
> + * @mode:		The migration mode that specifies the constraints for
> + *			folio migration, if any.
> + * @reason:		The reason for folio migration.
> + * @ret_succeeded:	Set to the number of folios migrated successfully if
> + *			the caller passes a non-NULL pointer.
> + *
> + * The function returns after 10 attempts or if no folios are movable any more
> + * because the list has become empty or no retryable folios exist any more.
> + * It is caller's responsibility to call putback_movable_pages() to return folios
> + * to the LRU or free list only if ret != 0.
> + *
> + * Returns the number of {normal folio, large folio, hugetlb} that were not
> + * migrated, or an error code. The number of large folio splits will be
> + * considered as the number of non-migrated large folio, no matter how many
> + * split folios of the large folio are migrated successfully.
> + */
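Not really a comment on this patch since the kernel-doc block just moved,
but for readers following along, the contract described above looks
roughly like this on the caller side.  This is my own sketch modeled on
how do_migrate_range() uses the API today, not code from this series;
target_nid is a placeholder and the migration_target_control setup is
mostly elided:

	struct migration_target_control mtc = {
		.nid = target_nid,	/* hypothetical destination node */
		/* .gfp_mask etc. elided */
	};
	LIST_HEAD(pagelist);
	unsigned int nr_succeeded = 0;
	int err;

	/* ... isolate the folios to be migrated onto pagelist ... */

	err = migrate_pages(&pagelist, alloc_migration_target, NULL,
			    (unsigned long)&mtc, MIGRATE_SYNC,
			    MR_MEMORY_HOTPLUG, &nr_succeeded);
	if (err)
		/* ret != 0: some folios were not migrated, put them back */
		putback_movable_pages(&pagelist);
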
> +int migrate_pages(struct list_head *from, new_page_t get_new_page,
> +		free_page_t put_new_page, unsigned long private,
> +		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
> +{
> +	int rc, rc_gether;

rc_gether -> rc_gather?

> +	int nr_pages;
> +	struct folio *folio, *folio2;
> +	LIST_HEAD(folios);
> +	LIST_HEAD(ret_folios);
> +	struct migrate_pages_stats stats;
> +
> +	trace_mm_migrate_pages_start(mode, reason);
> +
> +	memset(&stats, 0, sizeof(stats));
> +
> +	rc_gether = migrate_hugetlbs(from, get_new_page, put_new_page, private,
> +				     mode, reason, &stats, &ret_folios);
> +	if (rc_gether < 0)
> +		goto out;
> +again:
> +	nr_pages = 0;
> +	list_for_each_entry_safe(folio, folio2, from, lru) {
> +		if (folio_test_hugetlb(folio)) {
> +			list_move_tail(&folio->lru, &ret_folios);
> +			continue;
> +		}
> +
> +		nr_pages += folio_nr_pages(folio);
> +		if (nr_pages > NR_MAX_BATCHED_MIGRATION)
> +			break;
> +	}
> +	if (nr_pages > NR_MAX_BATCHED_MIGRATION)
> +		list_cut_before(&folios, from, &folio->lru);
> +	else
> +		list_splice_init(from, &folios);
> +	rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
> +				 mode, reason, &ret_folios, &stats);
> +	list_splice_tail_init(&folios, &ret_folios);
> +	if (rc < 0) {
> +		rc_gether = rc;
> +		goto out;
> +	}
> +	rc_gether += rc;
> +	if (!list_empty(from))
> +		goto again;
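Just to double-check my reading of the new batching loop: since
list_cut_before() moves everything up to but excluding &folio->lru, the
folio that pushes nr_pages past NR_MAX_BATCHED_MIGRATION stays on "from"
and starts the next batch, so no batch exceeds the limit as long as no
single folio does (true with the HPAGE_PMD_NR cap).  A throwaway
userspace model of just that accumulation, with made-up folio sizes and
nothing taken from the kernel API:

	#include <stdio.h>

	#define NR_MAX_BATCHED_MIGRATION	512

	int main(void)
	{
		/* hypothetical folio sizes, in base pages, on the "from" list */
		int folio_pages[] = { 1, 512, 4, 1, 256, 512 };
		int n = sizeof(folio_pages) / sizeof(folio_pages[0]);
		int i = 0;

		while (i < n) {
			int nr_pages = 0, start = i, batch_pages;

			/* mirrors the "again:" accumulation in migrate_pages() */
			while (i < n) {
				nr_pages += folio_pages[i];
				if (nr_pages > NR_MAX_BATCHED_MIGRATION)
					break;
				i++;
			}
			/* like list_cut_before(): the folio at i, if any,
			 * is excluded and will start the next batch */
			batch_pages = nr_pages > NR_MAX_BATCHED_MIGRATION ?
				      nr_pages - folio_pages[i] : nr_pages;
			printf("batch: folios %d..%d, %d pages\n",
			       start, i - 1, batch_pages);
		}
		return 0;
	}
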
>  out:
>  	/*
>  	 * Put the permanent failure folio back to migration list, they
> @@ -1713,7 +1752,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	 * are migrated successfully.
>  	 */
>  	if (list_empty(from))
> -		rc = 0;
> +		rc_gether = 0;
>
>  	count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded);
>  	count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages);
> @@ -1727,7 +1766,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	if (ret_succeeded)
>  		*ret_succeeded = stats.nr_succeeded;
>
> -	return rc;
> +	return rc_gether;
>  }
>
>  struct page *alloc_migration_target(struct page *page, unsigned long private)
> -- 
> 2.35.1


--
Best Regards,
Yan, Zi

