From: "Huang, Ying"
To: Zi Yan
Cc: , , Andrew Morton, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox
Subject: Re: [RFC 1/6] mm/migrate_pages: separate huge page and normal pages migration
References: <20220921060616.73086-1-ying.huang@intel.com> <20220921060616.73086-2-ying.huang@intel.com> <7192F7C6-CA9F-4184-832F-673D2ED5061D@nvidia.com>
Date: Thu, 22 Sep 2022 09:14:17 +0800
In-Reply-To: <7192F7C6-CA9F-4184-832F-673D2ED5061D@nvidia.com> (Zi Yan's message of "Wed, 21 Sep 2022 11:55:41 -0400")
Message-ID: <87mtasky6u.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii
Hi, Zi,

Thank you for your comments!

Zi Yan writes:

> On 21 Sep 2022, at 2:06, Huang Ying wrote:
>
>> This is a preparation patch to batch the page unmapping and moving for
>> the normal pages and THPs.  Based on that we can batch the TLB
>> shootdown during the page migration and make it possible to use some
>> hardware accelerator for the page copying.
>>
>> In this patch the huge page (PageHuge()) and normal page and THP
>> migration is separated in migrate_pages() to make it easy to change
>> the normal page and THP migration implementation.
>>
>> Signed-off-by: "Huang, Ying"
>> Cc: Zi Yan
>> Cc: Yang Shi
>> Cc: Baolin Wang
>> Cc: Oscar Salvador
>> Cc: Matthew Wilcox
>> ---
>>  mm/migrate.c | 73 +++++++++++++++++++++++++++++++++++++++++++++-------
>>  1 file changed, 64 insertions(+), 9 deletions(-)
>
> Maybe it would be better to have two subroutines for hugetlb migration
> and normal page migration respectively.  migrate_pages() becomes very
> large at this point.

Yes.
migrate_pages() becomes even larger with this patchset.  I will think
more about how to deal with that.  I will try the method from your
comments on [3/6] for that too.

Best Regards,
Huang, Ying

>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 571d8c9fd5bc..117134f1c6dc 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1414,6 +1414,66 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>
>>  	trace_mm_migrate_pages_start(mode, reason);
>>
>> +	for (pass = 0; pass < 10 && retry; pass++) {
>> +		retry = 0;
>> +
>> +		list_for_each_entry_safe(page, page2, from, lru) {
>> +			nr_subpages = compound_nr(page);
>> +			cond_resched();
>> +
>> +			if (!PageHuge(page))
>> +				continue;
>> +
>> +			rc = unmap_and_move_huge_page(get_new_page,
>> +						put_new_page, private, page,
>> +						pass > 2, mode, reason,
>> +						&ret_pages);
>> +			/*
>> +			 * The rules are:
>> +			 *	Success: hugetlb page will be put back
>> +			 *	-EAGAIN: stay on the from list
>> +			 *	-ENOMEM: stay on the from list
>> +			 *	-ENOSYS: stay on the from list
>> +			 *	Other errno: put on ret_pages list then splice to
>> +			 *		from list
>> +			 */
>> +			switch(rc) {
>> +			case -ENOSYS:
>> +				/* Hugetlb migration is unsupported */
>> +				nr_failed++;
>> +				nr_failed_pages += nr_subpages;
>> +				list_move_tail(&page->lru, &ret_pages);
>> +				break;
>> +			case -ENOMEM:
>> +				/*
>> +				 * When memory is low, don't bother to try to migrate
>> +				 * other pages, just exit.
>> +				 */
>> +				nr_failed++;
>> +				nr_failed_pages += nr_subpages + nr_retry_pages;
>> +				goto out;
>> +			case -EAGAIN:
>> +				retry++;
>> +				nr_retry_pages += nr_subpages;
>> +				break;
>> +			case MIGRATEPAGE_SUCCESS:
>> +				nr_succeeded += nr_subpages;
>> +				break;
>> +			default:
>> +				/*
>> +				 * Permanent failure (-EBUSY, etc.):
>> +				 * unlike -EAGAIN case, the failed page is
>> +				 * removed from migration page list and not
>> +				 * retried in the next outer loop.
>> +				 */
>> +				nr_failed++;
>> +				nr_failed_pages += nr_subpages;
>> +				break;
>> +			}
>> +		}
>> +	}
>> +	nr_failed += retry;
>> +	retry = 1;
>>  thp_subpage_migration:
>>  	for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
>>  		retry = 0;
>> @@ -1431,18 +1491,14 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  			cond_resched();
>>
>>  			if (PageHuge(page))
>> -				rc = unmap_and_move_huge_page(get_new_page,
>> -						put_new_page, private, page,
>> -						pass > 2, mode, reason,
>> -						&ret_pages);
>> -			else
>> -				rc = unmap_and_move(get_new_page, put_new_page,
>> +				continue;
>> +
>> +			rc = unmap_and_move(get_new_page, put_new_page,
>>  					private, page, pass > 2, mode,
>>  					reason, &ret_pages);
>>  			/*
>>  			 * The rules are:
>> -			 *	Success: non hugetlb page will be freed, hugetlb
>> -			 *	page will be put back
>> +			 *	Success: page will be freed
>>  			 *	-EAGAIN: stay on the from list
>>  			 *	-ENOMEM: stay on the from list
>>  			 *	-ENOSYS: stay on the from list
>> @@ -1468,7 +1524,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  					nr_thp_split++;
>>  					break;
>>  				}
>> -				/* Hugetlb migration is unsupported */
>>  			} else if (!no_subpage_counting) {
>>  				nr_failed++;
>>  			}
>> --
>> 2.35.1
>
>
> --
> Best Regards,
> Yan, Zi
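[Editor's note: for readers following the accounting rules quoted in the
hunk above, the sketch below restates them as a small, self-contained
userspace C function.  This is not kernel code; `struct mig_stats`,
`account_result()`, and the value of `MIGRATEPAGE_SUCCESS` are
assumptions made for illustration only.]

```c
#include <errno.h>

/* Stand-in for the kernel's MIGRATEPAGE_SUCCESS (assumed to be 0 here). */
#define MIGRATEPAGE_SUCCESS 0

/* Hypothetical bundle of the counters migrate_pages() maintains. */
struct mig_stats {
	int nr_failed;       /* entries that failed permanently           */
	int nr_failed_pages; /* failed subpage count                      */
	int nr_retry_pages;  /* subpages still queued for another pass    */
	int nr_succeeded;    /* subpages migrated successfully            */
	int retry;           /* entries to retry on the next pass         */
};

/*
 * Apply the accounting rules from the quoted comment block to one
 * migration result.  Returns 0 to continue the loop, or -ENOMEM to
 * tell the caller to stop migrating (the "goto out" case above).
 */
static int account_result(int rc, int nr_subpages, struct mig_stats *s)
{
	switch (rc) {
	case -ENOSYS:		/* migration unsupported: permanent failure */
		s->nr_failed++;
		s->nr_failed_pages += nr_subpages;
		break;
	case -ENOMEM:		/* low memory: give up on the whole list */
		s->nr_failed++;
		s->nr_failed_pages += nr_subpages + s->nr_retry_pages;
		return -ENOMEM;
	case -EAGAIN:		/* transient failure: retry on the next pass */
		s->retry++;
		s->nr_retry_pages += nr_subpages;
		break;
	case MIGRATEPAGE_SUCCESS:
		s->nr_succeeded += nr_subpages;
		break;
	default:		/* other errno (-EBUSY, ...): no retry */
		s->nr_failed++;
		s->nr_failed_pages += nr_subpages;
		break;
	}
	return 0;
}
```

Note how -ENOMEM folds the still-pending retry subpages into the failed
count before bailing out, matching the `nr_subpages + nr_retry_pages`
line in the patch.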