From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Huang, Ying" <ying.huang@intel.com>
To: Baolin Wang
Cc: Andrew Morton, linux-mm@kvack.org, Hugh Dickins, "Xu, Pengfei",
 Christoph Hellwig, Stefan Roesch, Tejun Heo, Xin Hao, Zi Yan,
 Yang Shi, Matthew Wilcox, Mike Kravetz
Subject: Re: [PATCH 3/3] migrate_pages: try migrate in batch asynchronously firstly
References: <20230224141145.96814-1-ying.huang@intel.com>
 <20230224141145.96814-4-ying.huang@intel.com>
Date: Wed, 01 Mar 2023 14:18:51 +0800
In-Reply-To: (Baolin Wang's message of "Wed, 1 Mar 2023 11:08:26 +0800")
Message-ID: <87zg8x9epg.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Baolin Wang writes:

> On 2/24/2023 10:11 PM, Huang Ying wrote:
>> When we have locked more than one folio, we cannot wait for a lock or
>> bit (e.g., page lock, buffer head lock, writeback bit) synchronously.
>> Otherwise a deadlock may be triggered.  This makes it hard to batch
>> synchronous migration directly.
>>
>> This patch re-enables batching for synchronous migration by first
>> trying to migrate the folios in batch asynchronously.  Any folios
>> that fail to be migrated asynchronously are then migrated
>> synchronously, one by one.
>>
>> Tests show that this effectively restores the TLB-flush batching
>> performance for synchronous migration.
>> Signed-off-by: "Huang, Ying"
>> Cc: Hugh Dickins
>> Cc: "Xu, Pengfei"
>> Cc: Christoph Hellwig
>> Cc: Stefan Roesch
>> Cc: Tejun Heo
>> Cc: Xin Hao
>> Cc: Zi Yan
>> Cc: Yang Shi
>> Cc: Baolin Wang
>> Cc: Matthew Wilcox
>> Cc: Mike Kravetz
>> ---
>>  mm/migrate.c | 65 ++++++++++++++++++++++++++++++++++++++++++++--------
>>  1 file changed, 55 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 91198b487e49..c17ce5ee8d92 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1843,6 +1843,51 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>  	return rc;
>>  }
>>
>> +static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
>> +		free_page_t put_new_page, unsigned long private,
>> +		enum migrate_mode mode, int reason, struct list_head *ret_folios,
>> +		struct list_head *split_folios, struct migrate_pages_stats *stats)
>> +{
>> +	int rc, nr_failed = 0;
>> +	LIST_HEAD(folios);
>> +	struct migrate_pages_stats astats;
>> +
>> +	memset(&astats, 0, sizeof(astats));
>> +	/* Try to migrate in batch with MIGRATE_ASYNC mode firstly */
>> +	rc = migrate_pages_batch(from, get_new_page, put_new_page, private, MIGRATE_ASYNC,
>> +				 reason, &folios, split_folios, &astats,
>> +				 NR_MAX_MIGRATE_PAGES_RETRY);
>> +	stats->nr_succeeded += astats.nr_succeeded;
>> +	stats->nr_thp_succeeded += astats.nr_thp_succeeded;
>> +	stats->nr_thp_split += astats.nr_thp_split;
>> +	if (rc < 0) {
>> +		stats->nr_failed_pages += astats.nr_failed_pages;
>> +		stats->nr_thp_failed += astats.nr_thp_failed;
>> +		list_splice_tail(&folios, ret_folios);
>> +		return rc;
>> +	}
>> +	stats->nr_thp_failed += astats.nr_thp_split;
>> +	nr_failed += astats.nr_thp_split;
>> +	/*
>> +	 * Fall back to migrate all failed folios one by one synchronously. All
>> +	 * failed folios except split THPs will be retried, so their failure
>> +	 * isn't counted
>> +	 */
>> +	list_splice_tail_init(&folios, from);
>> +	while (!list_empty(from)) {
>> +		list_move(from->next, &folios);
>> +		rc = migrate_pages_batch(&folios, get_new_page, put_new_page,
>> +					 private, mode, reason, ret_folios,
>> +					 split_folios, stats, NR_MAX_MIGRATE_PAGES_RETRY);
>> +		list_splice_tail_init(&folios, ret_folios);
>> +		if (rc < 0)
>> +			return rc;
>> +		nr_failed += rc;
>> +	}
>> +
>> +	return nr_failed;
>> +}
>> +
>>  /*
>>   * migrate_pages - migrate the folios specified in a list, to the free folios
>>   * supplied as the target for the page migration
>> @@ -1874,7 +1919,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
>>  {
>>  	int rc, rc_gather;
>> -	int nr_pages, batch;
>> +	int nr_pages;
>>  	struct folio *folio, *folio2;
>>  	LIST_HEAD(folios);
>>  	LIST_HEAD(ret_folios);
>> @@ -1890,10 +1935,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  	if (rc_gather < 0)
>>  		goto out;
>>
>> -	if (mode == MIGRATE_ASYNC)
>> -		batch = NR_MAX_BATCHED_MIGRATION;
>> -	else
>> -		batch = 1;
>>  again:
>>  	nr_pages = 0;
>>  	list_for_each_entry_safe(folio, folio2, from, lru) {
>> @@ -1904,16 +1945,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>  		}
>>
>>  		nr_pages += folio_nr_pages(folio);
>> -		if (nr_pages >= batch)
>> +		if (nr_pages >= NR_MAX_BATCHED_MIGRATION)
>>  			break;
>>  	}
>> -	if (nr_pages >= batch)
>> +	if (nr_pages >= NR_MAX_BATCHED_MIGRATION)
>>  		list_cut_before(&folios, from, &folio2->lru);
>>  	else
>>  		list_splice_init(from, &folios);
>> -	rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
>> -				 mode, reason, &ret_folios, &split_folios, &stats,
>> -				 NR_MAX_MIGRATE_PAGES_RETRY);
>> +	if (mode == MIGRATE_ASYNC)
>> +		rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
>> +					 mode, reason, &ret_folios, &split_folios, &stats,
>> +					 NR_MAX_MIGRATE_PAGES_RETRY);
>> +	else
>> +		rc = migrate_pages_sync(&folios, get_new_page, put_new_page, private,
>> +					 mode, reason, &ret_folios, &split_folios, &stats);

> For split folios, it seems also reasonable to use migrate_pages_sync()
> instead of always using fixed MIGRATE_ASYNC mode?

For split folios, we only try to migrate them with minimal effort.
Previously, we decreased the retry number from 10 to 1.  Now, I think
that it's reasonable to change the migration mode to MIGRATE_ASYNC to
reduce latency.  They have been counted as failures anyway.

>>  	list_splice_tail_init(&folios, &ret_folios);
>>  	if (rc < 0) {
>>  		rc_gather = rc;

Best Regards,
Huang, Ying