linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,  <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>,
	 Alistair Popple <apopple@nvidia.com>, Zi Yan <ziy@nvidia.com>,
	 Yang Shi <shy828301@gmail.com>,
	 Oscar Salvador <osalvador@suse.de>,
	 Matthew Wilcox <willy@infradead.org>,
	 Bharata B Rao <bharata@amd.com>,
	 haoxin <xhao@linux.alibaba.com>
Subject: Re: [PATCH -v2 1/9] migrate_pages: organize stats with struct migrate_pages_stats
Date: Wed, 11 Jan 2023 10:15:40 +0800	[thread overview]
Message-ID: <87a62p4y0j.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <9d27d10c-ba9a-045d-c170-fa3c4d6ea416@linux.alibaba.com> (Baolin Wang's message of "Tue, 10 Jan 2023 18:03:09 +0800")

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> On 1/10/2023 3:53 PM, Huang Ying wrote:
>> Define struct migrate_pages_stats to organize the various statistics
>> in migrate_pages().  This makes it easier to collect and consume the
>> statistics in multiple functions.  This will be needed in the
>> following patches in the series.
>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Cc: Yang Shi <shy828301@gmail.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Bharata B Rao <bharata@amd.com>
>> Cc: haoxin <xhao@linux.alibaba.com>
>> ---
>>   mm/migrate.c | 60 +++++++++++++++++++++++++++++-----------------------
>>   1 file changed, 34 insertions(+), 26 deletions(-)
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index a4d3fc65085f..d21de40861a0 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1396,6 +1396,16 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>>   	return rc;
>>   }
>>   +struct migrate_pages_stats {
>> +	int nr_succeeded;	/* Normal pages and THP migrated successfully, in units
>> +				   of base pages */
>> +	int nr_failed_pages;	/* Normal pages and THP failed to be migrated, in units
>> +				   of base pages.  Untried pages aren't counted */
>
> I think the above two fields also include the number of hugetlb
> pages. Otherwise this looks good to me.

Yes.  Good catch, will update the patch.
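
For illustration, the updated comments could read roughly as follows (a
sketch only, the exact wording in the next revision may differ):

/* Illustrative sketch, not the final patch text */
struct migrate_pages_stats {
	int nr_succeeded;	/* Normal and large folios (THP, hugetlb)
				   migrated successfully, in units of base
				   pages */
	int nr_failed_pages;	/* Normal and large folios (THP, hugetlb) that
				   failed to be migrated, in units of base
				   pages.  Untried pages aren't counted */
	int nr_thp_succeeded;	/* THP migrated successfully */
	int nr_thp_failed;	/* THP failed to be migrated */
	int nr_thp_split;	/* THP split before migrating */
};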

> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Thanks!

Best Regards,
Huang, Ying

>
>> +	int nr_thp_succeeded;	/* THP migrated successfully */
>> +	int nr_thp_failed;	/* THP failed to be migrated */
>> +	int nr_thp_split;	/* THP split before migrating */
>> +};
>> +
>>   /*
>>    * migrate_pages - migrate the folios specified in a list, to the free folios
>>    *		   supplied as the target for the page migration
>> @@ -1430,13 +1440,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   	int large_retry = 1;
>>   	int thp_retry = 1;
>>   	int nr_failed = 0;
>> -	int nr_failed_pages = 0;
>>   	int nr_retry_pages = 0;
>> -	int nr_succeeded = 0;
>> -	int nr_thp_succeeded = 0;
>>   	int nr_large_failed = 0;
>> -	int nr_thp_failed = 0;
>> -	int nr_thp_split = 0;
>>   	int pass = 0;
>>   	bool is_large = false;
>>   	bool is_thp = false;
>> @@ -1446,9 +1451,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   	LIST_HEAD(split_folios);
>>   	bool nosplit = (reason == MR_NUMA_MISPLACED);
>>   	bool no_split_folio_counting = false;
>> +	struct migrate_pages_stats stats;
>>     	trace_mm_migrate_pages_start(mode, reason);
>>   +	memset(&stats, 0, sizeof(stats));
>>   split_folio_migration:
>>   	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
>>   		retry = 0;
>> @@ -1502,9 +1509,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   				/* Large folio migration is unsupported */
>>   				if (is_large) {
>>   					nr_large_failed++;
>> -					nr_thp_failed += is_thp;
>> +					stats.nr_thp_failed += is_thp;
>>   					if (!try_split_folio(folio, &split_folios)) {
>> -						nr_thp_split += is_thp;
>> +						stats.nr_thp_split += is_thp;
>>   						break;
>>   					}
>>   				/* Hugetlb migration is unsupported */
>> @@ -1512,7 +1519,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   					nr_failed++;
>>   				}
>>   -				nr_failed_pages += nr_pages;
>> +				stats.nr_failed_pages += nr_pages;
>>   				list_move_tail(&folio->lru, &ret_folios);
>>   				break;
>>   			case -ENOMEM:
>> @@ -1522,13 +1529,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   				 */
>>   				if (is_large) {
>>   					nr_large_failed++;
>> -					nr_thp_failed += is_thp;
>> +					stats.nr_thp_failed += is_thp;
>>   					/* Large folio NUMA faulting doesn't split to retry. */
>>   					if (!nosplit) {
>>   						int ret = try_split_folio(folio, &split_folios);
>>     						if (!ret) {
>> -							nr_thp_split += is_thp;
>> +							stats.nr_thp_split += is_thp;
>>   							break;
>>   						} else if (reason == MR_LONGTERM_PIN &&
>>   							   ret == -EAGAIN) {
>> @@ -1546,7 +1553,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   					nr_failed++;
>>   				}
>>   -				nr_failed_pages += nr_pages + nr_retry_pages;
>> +				stats.nr_failed_pages += nr_pages + nr_retry_pages;
>>   				/*
>>   				 * There might be some split folios of fail-to-migrate large
>>   				 * folios left in split_folios list. Move them back to migration
>> @@ -1556,7 +1563,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   				list_splice_init(&split_folios, from);
>>   				/* nr_failed isn't updated for not used */
>>   				nr_large_failed += large_retry;
>> -				nr_thp_failed += thp_retry;
>> +				stats.nr_thp_failed += thp_retry;
>>   				goto out;
>>   			case -EAGAIN:
>>   				if (is_large) {
>> @@ -1568,8 +1575,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   				nr_retry_pages += nr_pages;
>>   				break;
>>   			case MIGRATEPAGE_SUCCESS:
>> -				nr_succeeded += nr_pages;
>> -				nr_thp_succeeded += is_thp;
>> +				stats.nr_succeeded += nr_pages;
>> +				stats.nr_thp_succeeded += is_thp;
>>   				break;
>>   			default:
>>   				/*
>> @@ -1580,20 +1587,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   				 */
>>   				if (is_large) {
>>   					nr_large_failed++;
>> -					nr_thp_failed += is_thp;
>> +					stats.nr_thp_failed += is_thp;
>>   				} else if (!no_split_folio_counting) {
>>   					nr_failed++;
>>   				}
>>   -				nr_failed_pages += nr_pages;
>> +				stats.nr_failed_pages += nr_pages;
>>   				break;
>>   			}
>>   		}
>>   	}
>>   	nr_failed += retry;
>>   	nr_large_failed += large_retry;
>> -	nr_thp_failed += thp_retry;
>> -	nr_failed_pages += nr_retry_pages;
>> +	stats.nr_thp_failed += thp_retry;
>> +	stats.nr_failed_pages += nr_retry_pages;
>>   	/*
>>   	 * Try to migrate split folios of fail-to-migrate large folios, no
>>   	 * nr_failed counting in this round, since all split folios of a
>> @@ -1626,16 +1633,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   	if (list_empty(from))
>>   		rc = 0;
>>   -	count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
>> -	count_vm_events(PGMIGRATE_FAIL, nr_failed_pages);
>> -	count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
>> -	count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
>> -	count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
>> -	trace_mm_migrate_pages(nr_succeeded, nr_failed_pages, nr_thp_succeeded,
>> -			       nr_thp_failed, nr_thp_split, mode, reason);
>> +	count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded);
>> +	count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages);
>> +	count_vm_events(THP_MIGRATION_SUCCESS, stats.nr_thp_succeeded);
>> +	count_vm_events(THP_MIGRATION_FAIL, stats.nr_thp_failed);
>> +	count_vm_events(THP_MIGRATION_SPLIT, stats.nr_thp_split);
>> +	trace_mm_migrate_pages(stats.nr_succeeded, stats.nr_failed_pages,
>> +			       stats.nr_thp_succeeded, stats.nr_thp_failed,
>> +			       stats.nr_thp_split, mode, reason);
>>     	if (ret_succeeded)
>> -		*ret_succeeded = nr_succeeded;
>> +		*ret_succeeded = stats.nr_succeeded;
>>     	return rc;
>>   }


Thread overview: 27+ messages
2023-01-10  7:53 [PATCH -v2 0/9] migrate_pages(): batch TLB flushing Huang Ying
2023-01-10  7:53 ` [PATCH -v2 1/9] migrate_pages: organize stats with struct migrate_pages_stats Huang Ying
2023-01-10 10:03   ` Baolin Wang
2023-01-11  2:15     ` Huang, Ying [this message]
2023-01-10  7:53 ` [PATCH -v2 2/9] migrate_pages: separate hugetlb folios migration Huang Ying
2023-01-10 10:30   ` Baolin Wang
2023-01-11  2:04     ` Huang, Ying
2023-01-10  7:53 ` [PATCH -v2 3/9] migrate_pages: restrict number of pages to migrate in batch Huang Ying
2023-01-11  3:19   ` Baolin Wang
2023-01-10  7:53 ` [PATCH -v2 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move() Huang Ying
2023-01-11  3:29   ` Baolin Wang
2023-01-10  7:53 ` [PATCH -v2 5/9] migrate_pages: batch _unmap and _move Huang Ying
2023-01-10  7:53 ` [PATCH -v2 6/9] migrate_pages: move migrate_folio_unmap() Huang Ying
2023-01-10  7:53 ` [PATCH -v2 7/9] migrate_pages: share more code between _unmap and _move Huang Ying
2023-01-10  7:53 ` [PATCH -v2 8/9] migrate_pages: batch flushing TLB Huang Ying
2023-01-10  7:53 ` [PATCH -v2 9/9] migrate_pages: move THP/hugetlb migration support check to simplify code Huang Ying
2023-01-11  1:53 ` [PATCH -v2 0/9] migrate_pages(): batch TLB flushing Mike Kravetz
2023-01-11 22:21   ` Mike Kravetz
2023-01-12  0:09     ` Huang, Ying
2023-01-12  2:15       ` Mike Kravetz
2023-01-12  7:17         ` Huang, Ying
2023-01-13  0:11           ` Mike Kravetz
2023-01-13  2:42             ` Huang, Ying
2023-01-13  4:49               ` Mike Kravetz
2023-01-13  5:39                 ` Huang, Ying
2023-01-16  4:35               ` Matthew Wilcox
2023-01-16  6:05                 ` Huang, Ying
