From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Huang, Ying" <ying.huang@intel.com>
To: Chen Wandun
Cc: Andrew Morton, linux-mm@kvack.org, Baolin Wang, Zi Yan, Yang Shi,
	Oscar Salvador, Matthew Wilcox, Bharata B Rao, Alistair Popple,
	haoxin, Minchan Kim
Subject: Re: [PATCH -v3 2/9] migrate_pages: separate hugetlb folios migration
Date: Sun, 29 Jan 2023 08:36:54 +0800
Message-ID: <87edrech21.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20230116063057.653862-1-ying.huang@intel.com>
	<20230116063057.653862-3-ying.huang@intel.com>
In-Reply-To: (Chen Wandun's message of "Sat, 28 Jan 2023 17:29:30 +0800")

Chen Wandun writes:

> On 2023/1/16 14:30, Huang Ying wrote:
>> This is a preparation patch to batch the folio unmapping and moving
>> for the non-hugetlb folios.  Based on that we can batch the TLB
>> shootdown during the folio migration and make it possible to use some
>> hardware accelerator for the folio copying.
>>
>> In this patch the hugetlb folios and non-hugetlb folios migration is
>> separated in migrate_pages() to make it easy to change the non-hugetlb
>> folios migration implementation.
>>
>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>> Reviewed-by: Baolin Wang
>> Cc: Zi Yan
>> Cc: Yang Shi
>> Cc: Oscar Salvador
>> Cc: Matthew Wilcox
>> Cc: Bharata B Rao
>> Cc: Alistair Popple
>> Cc: haoxin
>> Cc: Minchan Kim
>> ---
>>   mm/migrate.c | 141 +++++++++++++++++++++++++++++++++++++++++++--------
>>   1 file changed, 119 insertions(+), 22 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index ef388a9e4747..be7f37523463 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1396,6 +1396,8 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>>   	return rc;
>>   }
>> +#define NR_MAX_MIGRATE_PAGES_RETRY 10
>> +
>>   struct migrate_pages_stats {
>>   	int nr_succeeded;	/* Normal and large folios migrated successfully, in
>>   				   units of base pages */
>> @@ -1406,6 +1408,95 @@ struct migrate_pages_stats {
>>   	int nr_thp_split;	/* THP split before migrating */
>>   };
>> +/*
>> + * Returns the number of hugetlb folios that were not migrated, or an error code
>> + * after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no hugetlb folios are movable
>> + * any more because the list has become empty or no retryable hugetlb folios
>> + * exist any more. It is caller's responsibility to call putback_movable_pages()
>> + * only if ret != 0.
>> + */
>> +static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
>> +			    free_page_t put_new_page, unsigned long private,
>> +			    enum migrate_mode mode, int reason,
>> +			    struct migrate_pages_stats *stats,
>> +			    struct list_head *ret_folios)
>> +{
>> +	int retry = 1;
>> +	int nr_failed = 0;
>> +	int nr_retry_pages = 0;
>> +	int pass = 0;
>> +	struct folio *folio, *folio2;
>> +	int rc, nr_pages;
>> +
>> +	for (pass = 0; pass < NR_MAX_MIGRATE_PAGES_RETRY && retry; pass++) {
>> +		retry = 0;
>> +		nr_retry_pages = 0;
>> +
>> +		list_for_each_entry_safe(folio, folio2, from, lru) {
>> +			if (!folio_test_hugetlb(folio))
>> +				continue;
>> +
>> +			nr_pages = folio_nr_pages(folio);
>> +
>> +			cond_resched();
>> +
>> +			rc = unmap_and_move_huge_page(get_new_page,
>> +						      put_new_page, private,
>> +						      &folio->page, pass > 2, mode,
>> +						      reason, ret_folios);
>> +			/*
>> +			 * The rules are:
>> +			 *	Success: hugetlb folio will be put back
>> +			 *	-EAGAIN: stay on the from list
>> +			 *	-ENOMEM: stay on the from list
>> +			 *	-ENOSYS: stay on the from list
>> +			 *	Other errno: put on ret_folios list
>> +			 */
>> +			switch(rc) {
>> +			case -ENOSYS:
>> +				/* Hugetlb migration is unsupported */
>> +				nr_failed++;
>> +				stats->nr_failed_pages += nr_pages;
>> +				list_move_tail(&folio->lru, ret_folios);
>> +				break;
>> +			case -ENOMEM:
>> +				/*
>> +				 * When memory is low, don't bother to try to migrate
>> +				 * other folios, just exit.
>> +				 */
>> +				stats->nr_failed_pages += nr_pages + nr_retry_pages;
>> +				return -ENOMEM;
>> +			case -EAGAIN:
>> +				retry++;
>> +				nr_retry_pages += nr_pages;
>> +				break;
>> +			case MIGRATEPAGE_SUCCESS:
>> +				stats->nr_succeeded += nr_pages;
>> +				break;
>> +			default:
>> +				/*
>> +				 * Permanent failure (-EBUSY, etc.):
>> +				 * unlike -EAGAIN case, the failed folio is
>> +				 * removed from migration folio list and not
>> +				 * retried in the next outer loop.
>> +				 */
>> +				nr_failed++;
>> +				stats->nr_failed_pages += nr_pages;
>> +				break;
>> +			}
>> +		}
>> +	}
>> +	/*
>> +	 * nr_failed is number of hugetlb folios failed to be migrated. After
>> +	 * NR_MAX_MIGRATE_PAGES_RETRY attempts, give up and count retried hugetlb
>> +	 * folios as failed.
>> +	 */
>> +	nr_failed += retry;
>> +	stats->nr_failed_pages += nr_retry_pages;
>> +
>> +	return nr_failed;
>> +}
>> +
>>   /*
>>    * migrate_pages - migrate the folios specified in a list, to the free folios
>>    * supplied as the target for the page migration
>> @@ -1422,10 +1513,10 @@ struct migrate_pages_stats {
>>    * @ret_succeeded:	Set to the number of folios migrated successfully if
>>    *			the caller passes a non-NULL pointer.
>>    *
>> - * The function returns after 10 attempts or if no folios are movable any more
>> - * because the list has become empty or no retryable folios exist any more.
>> - * It is caller's responsibility to call putback_movable_pages() to return folios
>> - * to the LRU or free list only if ret != 0.
>> + * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios
>> + * are movable any more because the list has become empty or no retryable folios
>> + * exist any more. It is caller's responsibility to call putback_movable_pages()
>> + * only if ret != 0.
>>    *
>>    * Returns the number of {normal folio, large folio, hugetlb} that were not
>>    * migrated, or an error code. The number of large folio splits will be
>> @@ -1439,7 +1530,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   	int retry = 1;
>>   	int large_retry = 1;
>>   	int thp_retry = 1;
>> -	int nr_failed = 0;
>> +	int nr_failed;
>>   	int nr_retry_pages = 0;
>>   	int nr_large_failed = 0;
>>   	int pass = 0;
>> @@ -1456,38 +1547,45 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   	trace_mm_migrate_pages_start(mode, reason);
>>   	memset(&stats, 0, sizeof(stats));
>> +	rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason,
>> +			      &stats, &ret_folios);
>> +	if (rc < 0)
>> +		goto out;
>
> How about continuing to migrate small pages in the -ENOMEM case?
> Maybe there are still free small pages.

Sounds reasonable to me.  How about doing that on top of this series?
Would you be interested in doing that?

Best Regards,
Huang, Ying
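
P.S. For readers following along, the error classification being discussed
can be modeled with a small user-space sketch.  All sim_* names below are
made up for illustration; this is not kernel code, just the shape of the
switch in migrate_hugetlbs(): success and permanent errors are counted,
-EAGAIN defers the folio to a later pass, and -ENOMEM abandons the whole
batch (the behavior the reply above suggests relaxing for small pages).

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel's result codes. */
enum sim_rc {
	SIM_SUCCESS =   0,
	SIM_EAGAIN  = -11,
	SIM_ENOMEM  = -12,
	SIM_ENOSYS  = -38,
};

struct sim_stats {
	int nr_succeeded;	/* migrated this batch */
	int nr_failed;		/* permanently failed */
	int nr_retry;		/* deferred to a later pass */
};

/* Classify one pass over an array of per-folio result codes.
 * Returns SIM_ENOMEM if the batch must be abandoned, else 0. */
int sim_one_pass(const int *rc, size_t n, struct sim_stats *st)
{
	for (size_t i = 0; i < n; i++) {
		switch (rc[i]) {
		case SIM_SUCCESS:
			st->nr_succeeded++;
			break;
		case SIM_EAGAIN:
			st->nr_retry++;	/* stays on the list for the next pass */
			break;
		case SIM_ENOMEM:
			/* Low memory: give up on the rest of the batch,
			 * counting the deferred folios as failed too. */
			st->nr_failed += st->nr_retry + 1;
			return SIM_ENOMEM;
		default:	/* -ENOSYS, -EBUSY, ...: permanent failure */
			st->nr_failed++;
			break;
		}
	}
	return 0;
}
```

The point of the thread is the SIM_ENOMEM arm: in the patch it aborts the
whole migrate_pages() batch, whereas the suggestion is to fall through and
still attempt the remaining small folios.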