From: "Huang, Ying"
To: Alistair Popple
Cc: Andrew Morton, Zi Yan, Yang Shi, Baolin Wang, "Oscar Salvador",
	Matthew Wilcox, "Bharata B Rao", haoxin
Subject: Re: [PATCH 1/8] migrate_pages: organize stats with struct migrate_pages_stats
References: <20221227002859.27740-1-ying.huang@intel.com>
	<20221227002859.27740-2-ying.huang@intel.com>
	<87y1qhu0to.fsf@nvidia.com>
	<87lemheddk.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<87358psc99.fsf@nvidia.com>
Date: Thu, 05 Jan 2023 15:06:53 +0800
In-Reply-To: <87358psc99.fsf@nvidia.com> (Alistair Popple's message of
	"Thu, 05 Jan 2023 17:50:14 +1100")
Message-ID: <87o7rdbgtu.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Alistair Popple writes:

> "Huang, Ying" writes:
>
>> Alistair Popple writes:
>>
>>> Huang Ying writes:
>>>
>>>> Define struct migrate_pages_stats to organize the various statistics
>>>> in migrate_pages(). This makes it easier to collect and consume the
>>>> statistics in multiple functions. This will be needed in the
>>>> following patches in the series.
>>>>
>>>> Signed-off-by: "Huang, Ying"
>>>> Cc: Zi Yan
>>>> Cc: Yang Shi
>>>> Cc: Baolin Wang
>>>> Cc: Oscar Salvador
>>>> Cc: Matthew Wilcox
>>>> Cc: Bharata B Rao
>>>> Cc: Alistair Popple
>>>> Cc: haoxin
>>>> ---
>>>>  mm/migrate.c | 58 +++++++++++++++++++++++++++++-----------------------
>>>>  1 file changed, 32 insertions(+), 26 deletions(-)
>>>>
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index a4d3fc65085f..ec9263a33d38 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -1396,6 +1396,14 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>>>>      return rc;
>>>>  }
>>>>
>>>> +struct migrate_pages_stats {
>>>> +    int nr_succeeded;
>>>> +    int nr_failed_pages;
>>>> +    int nr_thp_succeeded;
>>>> +    int nr_thp_failed;
>>>> +    int nr_thp_split;
>>>
>>> I think some brief comments in the code for what each stat is tracking
>>> and their relationship to each other would be helpful (ie. does
>>> nr_succeeded include thp subpages, etc). Or at least a reference to
>>> where this is documented (ie. page_migration.rst), as I recall there has
>>> been some confusion in the past that has led to bugs.
>>
>> OK, will do that in the next version.

> You should add that nr_failed_pages doesn't count failures of migrations
> that weren't attempted because of e.g. allocation failure, as that was a
> surprising detail to me at least. Unless of course you decide to fix
> that :-)

nr_failed_pages is used for /proc/vmstat.  The move_pages() syscall cares
about how many pages were requested but not tried, but the system-wide
statistics don't care about that.  I think that is appropriate.  A rough
sketch of the commented struct I have in mind is appended at the end of
this mail.

Best Regards,
Huang, Ying

>>> Otherwise the patch looks good so:
>>>
>>> Reviewed-by: Alistair Popple
>>
>> Thanks!
>>
>> Best Regards,
>> Huang, Ying
>>
>>>> +};
>>>> +
>>>>  /*
>>>>   * migrate_pages - migrate the folios specified in a list, to the free folios
>>>>   * supplied as the target for the page migration
>>>> @@ -1430,13 +1438,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>      int large_retry = 1;
>>>>      int thp_retry = 1;
>>>>      int nr_failed = 0;
>>>> -    int nr_failed_pages = 0;
>>>>      int nr_retry_pages = 0;
>>>> -    int nr_succeeded = 0;
>>>> -    int nr_thp_succeeded = 0;
>>>>      int nr_large_failed = 0;
>>>> -    int nr_thp_failed = 0;
>>>> -    int nr_thp_split = 0;
>>>>      int pass = 0;
>>>>      bool is_large = false;
>>>>      bool is_thp = false;
>>>> @@ -1446,9 +1449,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>      LIST_HEAD(split_folios);
>>>>      bool nosplit = (reason == MR_NUMA_MISPLACED);
>>>>      bool no_split_folio_counting = false;
>>>> +    struct migrate_pages_stats stats;
>>>>
>>>>      trace_mm_migrate_pages_start(mode, reason);
>>>>
>>>> +    memset(&stats, 0, sizeof(stats));
>>>>  split_folio_migration:
>>>>      for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
>>>>          retry = 0;
>>>> @@ -1502,9 +1507,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                  /* Large folio migration is unsupported */
>>>>                  if (is_large) {
>>>>                      nr_large_failed++;
>>>> -                    nr_thp_failed += is_thp;
>>>> +                    stats.nr_thp_failed += is_thp;
>>>>                      if (!try_split_folio(folio, &split_folios)) {
>>>> -                        nr_thp_split += is_thp;
>>>> +                        stats.nr_thp_split += is_thp;
>>>>                          break;
>>>>                      }
>>>>                  /* Hugetlb migration is unsupported */
>>>> @@ -1512,7 +1517,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                      nr_failed++;
>>>>                  }
>>>>
>>>> -                nr_failed_pages += nr_pages;
>>>> +                stats.nr_failed_pages += nr_pages;
>>>>                  list_move_tail(&folio->lru, &ret_folios);
>>>>                  break;
>>>>              case -ENOMEM:
>>>> @@ -1522,13 +1527,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                   */
>>>>                  if (is_large) {
>>>>                      nr_large_failed++;
>>>> -                    nr_thp_failed += is_thp;
>>>> +                    stats.nr_thp_failed += is_thp;
>>>>                      /* Large folio NUMA faulting doesn't split to retry. */
>>>>                      if (!nosplit) {
>>>>                          int ret = try_split_folio(folio, &split_folios);
>>>>
>>>>                          if (!ret) {
>>>> -                            nr_thp_split += is_thp;
>>>> +                            stats.nr_thp_split += is_thp;
>>>>                              break;
>>>>                          } else if (reason == MR_LONGTERM_PIN &&
>>>>                                     ret == -EAGAIN) {
>>>> @@ -1546,7 +1551,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                      nr_failed++;
>>>>                  }
>>>>
>>>> -                nr_failed_pages += nr_pages + nr_retry_pages;
>>>> +                stats.nr_failed_pages += nr_pages + nr_retry_pages;
>>>>                  /*
>>>>                   * There might be some split folios of fail-to-migrate large
>>>>                   * folios left in split_folios list. Move them back to migration
>>>> @@ -1556,7 +1561,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                  list_splice_init(&split_folios, from);
>>>>                  /* nr_failed isn't updated for not used */
>>>>                  nr_large_failed += large_retry;
>>>> -                nr_thp_failed += thp_retry;
>>>> +                stats.nr_thp_failed += thp_retry;
>>>>                  goto out;
>>>>              case -EAGAIN:
>>>>                  if (is_large) {
>>>> @@ -1568,8 +1573,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                  nr_retry_pages += nr_pages;
>>>>                  break;
>>>>              case MIGRATEPAGE_SUCCESS:
>>>> -                nr_succeeded += nr_pages;
>>>> -                nr_thp_succeeded += is_thp;
>>>> +                stats.nr_succeeded += nr_pages;
>>>> +                stats.nr_thp_succeeded += is_thp;
>>>>                  break;
>>>>              default:
>>>>                  /*
>>>> @@ -1580,20 +1585,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                   */
>>>>                  if (is_large) {
>>>>                      nr_large_failed++;
>>>> -                    nr_thp_failed += is_thp;
>>>> +                    stats.nr_thp_failed += is_thp;
>>>>                  } else if (!no_split_folio_counting) {
>>>>                      nr_failed++;
>>>>                  }
>>>>
>>>> -                nr_failed_pages += nr_pages;
>>>> +                stats.nr_failed_pages += nr_pages;
>>>>                  break;
>>>>              }
>>>>          }
>>>>      }
>>>>      nr_failed += retry;
>>>>      nr_large_failed += large_retry;
>>>> -    nr_thp_failed += thp_retry;
>>>> -    nr_failed_pages += nr_retry_pages;
>>>> +    stats.nr_thp_failed += thp_retry;
>>>> +    stats.nr_failed_pages += nr_retry_pages;
>>>>      /*
>>>>       * Try to migrate split folios of fail-to-migrate large folios, no
>>>>       * nr_failed counting in this round, since all split folios of a
>>>> @@ -1626,16 +1631,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>      if (list_empty(from))
>>>>          rc = 0;
>>>>
>>>> -    count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
>>>> -    count_vm_events(PGMIGRATE_FAIL, nr_failed_pages);
>>>> -    count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
>>>> -    count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
>>>> -    count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
>>>> -    trace_mm_migrate_pages(nr_succeeded, nr_failed_pages, nr_thp_succeeded,
>>>> -                           nr_thp_failed, nr_thp_split, mode, reason);
>>>> +    count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded);
>>>> +    count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages);
>>>> +    count_vm_events(THP_MIGRATION_SUCCESS, stats.nr_thp_succeeded);
>>>> +    count_vm_events(THP_MIGRATION_FAIL, stats.nr_thp_failed);
>>>> +    count_vm_events(THP_MIGRATION_SPLIT, stats.nr_thp_split);
>>>> +    trace_mm_migrate_pages(stats.nr_succeeded, stats.nr_failed_pages,
>>>> +                           stats.nr_thp_succeeded, stats.nr_thp_failed,
>>>> +                           stats.nr_thp_split, mode, reason);
>>>>
>>>>      if (ret_succeeded)
>>>> -        *ret_succeeded = nr_succeeded;
>>>> +        *ret_succeeded = stats.nr_succeeded;
>>>>
>>>>      return rc;
>>>>  }
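
P.S.  Here is the rough, untested sketch of the commented struct mentioned
above.  The field names and types are taken from the patch as posted; the
comment wording is only my current reading of the code and is tentative, so
it may still change after checking it against page_migration.rst:

/*
 * Statistics of migrate_pages(), in units of base pages unless a field
 * says otherwise.
 */
struct migrate_pages_stats {
    int nr_succeeded;      /* normal pages and subpages of large folios
                            * (including THP) migrated successfully */
    int nr_failed_pages;   /* pages for which an attempted migration failed;
                            * folios that were never attempted (e.g. those
                            * left on the list after an early exit on
                            * allocation failure) are not counted */
    int nr_thp_succeeded;  /* THP migrated successfully */
    int nr_thp_failed;     /* THP that failed to be migrated */
    int nr_thp_split;      /* THP split before migrating */
};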