From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Huang, Ying" <ying.huang@intel.com>
To: Alistair Popple
Cc:
    Andrew Morton, Zi Yan, Yang Shi, Baolin Wang, "Oscar Salvador",
    Matthew Wilcox, "Bharata B Rao", haoxin
Subject: Re: [PATCH 2/8] migrate_pages: separate hugetlb folios migration
References: <20221227002859.27740-1-ying.huang@intel.com>
    <20221227002859.27740-3-ying.huang@intel.com>
    <87pmbttxmj.fsf@nvidia.com>
Date: Thu, 05 Jan 2023 13:51:54 +0800
In-Reply-To: <87pmbttxmj.fsf@nvidia.com> (Alistair Popple's message of
    "Thu, 05 Jan 2023 15:13:25 +1100")
Message-ID: <87pmbtedfp.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Alistair Popple writes:

> Huang Ying writes:
>
>> This is a preparation patch to batch the folio unmapping
>> and moving for the non-hugetlb folios. Based on that we can batch the TLB
>> shootdown during the folio migration and make it possible to use some
>> hardware accelerator for the folio copying.
>>
>> In this patch the hugetlb folios and non-hugetlb folios migration is
>> separated in migrate_pages() to make it easy to change the non-hugetlb
>> folios migration implementation.
>>
>> Signed-off-by: "Huang, Ying"
>> Cc: Zi Yan
>> Cc: Yang Shi
>> Cc: Baolin Wang
>> Cc: Oscar Salvador
>> Cc: Matthew Wilcox
>> Cc: Bharata B Rao
>> Cc: Alistair Popple
>> Cc: haoxin
>> ---
>>  mm/migrate.c | 114 ++++++++++++++++++++++++++++++++++++++++++++-------
>>  1 file changed, 99 insertions(+), 15 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index ec9263a33d38..bdbe73fe2eb7 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1404,6 +1404,87 @@ struct migrate_pages_stats {
>>  	int nr_thp_split;
>>  };
>>
>> +static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
>> +			    free_page_t put_new_page, unsigned long private,
>> +			    enum migrate_mode mode, int reason,
>> +			    struct migrate_pages_stats *stats,
>> +			    struct list_head *ret_folios)
>> +{
>> +	int retry = 1;
>> +	int nr_failed = 0;
>> +	int nr_retry_pages = 0;
>> +	int pass = 0;
>> +	struct folio *folio, *folio2;
>> +	int rc = 0, nr_pages;
>> +
>> +	for (pass = 0; pass < 10 && retry; pass++) {
>> +		retry = 0;
>> +		nr_retry_pages = 0;
>> +
>> +		list_for_each_entry_safe(folio, folio2, from, lru) {
>> +			if (!folio_test_hugetlb(folio))
>> +				continue;
>> +
>> +			nr_pages = folio_nr_pages(folio);
>> +
>> +			cond_resched();
>> +
>> +			rc = unmap_and_move_huge_page(get_new_page,
>> +						      put_new_page, private,
>> +						      &folio->page, pass > 2, mode,
>> +						      reason, ret_folios);
>> +			/*
>> +			 * The rules are:
>> +			 *	Success: hugetlb folio will be put back
>> +			 *	-EAGAIN: stay on the from list
>> +			 *	-ENOMEM: stay on the from list
>> +			 *	-ENOSYS: stay on the from list
>> +			 *	Other errno: put on ret_folios list
>> +			 */
>> +			switch(rc) {
>> +			case -ENOSYS:
>> +				/* Hugetlb migration is unsupported */
>> +				nr_failed++;
>> +				stats->nr_failed_pages += nr_pages;
>> +				list_move_tail(&folio->lru, ret_folios);
>> +				break;
>> +			case -ENOMEM:
>> +				/*
>> +				 * When memory is low, don't bother to try to
>> +				 * migrate other folios, just exit.
>> +				 */
>> +				nr_failed++;
>
> This currently isn't relevant for -ENOMEM and I think it would be
> clearer if it was dropped.

OK.

>> +				stats->nr_failed_pages += nr_pages;
>
> Makes sense not to continue migration with low memory, but shouldn't we
> add the remaining unmigrated hugetlb folios to stats->nr_failed_pages as
> well? Ie. don't we still have to continue the iteration to find and
> account for these?

I think nr_failed_pages only counts tried pages.  IIUC, that is the
original behavior, and the behavior for non-hugetlb pages too.

>> +				goto out;
>
> Given this is the only use of the out label, and that there is a special
> case for -ENOMEM there anyway, I think it would be clearer to return
> directly.

Sounds good.  Will do that in the next version.

>> +			case -EAGAIN:
>> +				retry++;
>> +				nr_retry_pages += nr_pages;
>> +				break;
>> +			case MIGRATEPAGE_SUCCESS:
>> +				stats->nr_succeeded += nr_pages;
>> +				break;
>> +			default:
>> +				/*
>> +				 * Permanent failure (-EBUSY, etc.):
>> +				 * unlike -EAGAIN case, the failed folio is
>> +				 * removed from migration folio list and not
>> +				 * retried in the next outer loop.
>> +				 */
>> +				nr_failed++;
>> +				stats->nr_failed_pages += nr_pages;
>> +				break;
>> +			}
>> +		}
>> +	}
>> +out:
>> +	nr_failed += retry;
>> +	stats->nr_failed_pages += nr_retry_pages;
>> +	if (rc != -ENOMEM)
>> +		rc = nr_failed;
>> +
>> +	return rc;
>> +}
>> +
>>  /*
>>   * migrate_pages - migrate the folios specified in a list, to the free folios
>>   * supplied as the target for the page migration
>> @@ -1437,7 +1518,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  	int retry = 1;
>>  	int large_retry = 1;
>>  	int thp_retry = 1;
>> -	int nr_failed = 0;
>> +	int nr_failed;
>>  	int nr_retry_pages = 0;
>>  	int nr_large_failed = 0;
>>  	int pass = 0;
>> @@ -1454,6 +1535,12 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  	trace_mm_migrate_pages_start(mode, reason);
>>
>>  	memset(&stats, 0, sizeof(stats));
>> +	rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason,
>> +			      &stats, &ret_folios);
>> +	if (rc < 0)
>> +		goto out;
>> +	nr_failed = rc;
>> +
>>  split_folio_migration:
>>  	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
>>  		retry = 0;
>> @@ -1462,30 +1549,28 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  		nr_retry_pages = 0;
>>
>>  		list_for_each_entry_safe(folio, folio2, from, lru) {
>> +			if (folio_test_hugetlb(folio)) {
>
> How do we hit this case? Shouldn't migrate_hugetlbs() have already moved
> any hugetlb folios off the from list?

Retried hugetlb folios will be kept in the from list.

>> +				list_move_tail(&folio->lru, &ret_folios);
>> +				continue;
>> +			}
>> +
>>  			/*
>>  			 * Large folio statistics is based on the source large
>>  			 * folio. Capture required information that might get
>>  			 * lost during migration.
>>  			 */
>> -			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
>> +			is_large = folio_test_large(folio);
>>  			is_thp = is_large && folio_test_pmd_mappable(folio);
>>  			nr_pages = folio_nr_pages(folio);
>> +
>>  			cond_resched();
>>
>> -			if (folio_test_hugetlb(folio))
>> -				rc = unmap_and_move_huge_page(get_new_page,
>> -						put_new_page, private,
>> -						&folio->page, pass > 2, mode,
>> -						reason,
>> -						&ret_folios);
>> -			else
>> -				rc = unmap_and_move(get_new_page, put_new_page,
>> -						private, folio, pass > 2, mode,
>> -						reason, &ret_folios);
>> +			rc = unmap_and_move(get_new_page, put_new_page,
>> +					    private, folio, pass > 2, mode,
>> +					    reason, &ret_folios);
>>  			/*
>>  			 * The rules are:
>> -			 *	Success: non hugetlb folio will be freed, hugetlb
>> -			 *		 folio will be put back
>> +			 *	Success: folio will be freed
>>  			 *	-EAGAIN: stay on the from list
>>  			 *	-ENOMEM: stay on the from list
>>  			 *	-ENOSYS: stay on the from list
>> @@ -1512,7 +1597,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  					stats.nr_thp_split += is_thp;
>>  					break;
>>  				}
>> -				/* Hugetlb migration is unsupported */
>>  			} else if (!no_split_folio_counting) {
>>  				nr_failed++;
>>  			}

Best Regards,
Huang, Ying