Date: Fri, 14 Feb 2025 06:34:35 +0900
From: Byungchul Park <byungchul@sk.com>
To: Zi Yan
Cc: linux-mm@kvack.org, David Rientjes, Shivank Garg, Aneesh Kumar,
	David Hildenbrand, John Hubbard, Kirill Shutemov, Matthew Wilcox,
	Mel Gorman, "Rao, Bharata Bhasker", Rik van Riel, RaghavendraKT,
	Wei Xu, Suyeon Lee, Lei Chen, "Shukla, Santosh", "Grimm, Jon",
	sj@kernel.org, shy828301@gmail.com, Liam Howlett, Gregory Price,
	"Huang, Ying", kernel_team@skhynix.com
Subject: Re: [RFC PATCH 4/5] mm/migrate: introduce multi-threaded page copy routine
Message-ID: <20250213213435.GA26969@system.software.com>
References: <20250103172419.4148674-1-ziy@nvidia.com>
	<20250103172419.4148674-5-ziy@nvidia.com>
	<20250213124401.GA29526@system.software.com>
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
User-Agent: Mutt/1.9.4 (2018-02-28)

On Thu, Feb 13, 2025 at 10:34:05AM -0500, Zi Yan wrote:
> On 13 Feb 2025, at 7:44, Byungchul Park wrote:
>
> > On Fri, Jan 03, 2025 at 12:24:18PM -0500, Zi Yan wrote:
> >> Now page copies are batched, multi-threaded page copy can be used to
> >> increase page copy throughput. Add copy_page_lists_mt() to copy pages in
> >> multi-threaded manners. Empirical data show more than 32 base pages are
> >> needed to show the benefit of using multi-threaded page copy, so use 32 as
> >> the threshold.
> >>
> >> Signed-off-by: Zi Yan
> >> ---
> >>  include/linux/migrate.h |   3 +
> >>  mm/Makefile             |   2 +-
> >>  mm/copy_pages.c         | 186 ++++++++++++++++++++++++++++++++++++++++
> >>  mm/migrate.c            |  19 ++--
> >>  4 files changed, 199 insertions(+), 11 deletions(-)
> >>  create mode 100644 mm/copy_pages.c
> >>
> >> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> >> index 29919faea2f1..a0124f4893b0 100644
> >> --- a/include/linux/migrate.h
> >> +++ b/include/linux/migrate.h
> >> @@ -80,6 +80,9 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
> >>  int folio_migrate_mapping(struct address_space *mapping,
> >>  		struct folio *newfolio, struct folio *folio, int extra_count);
> >>
> >> +int copy_page_lists_mt(struct list_head *dst_folios,
> >> +		struct list_head *src_folios, int nr_items);
> >> +
> >>  #else
> >>
> >>  static inline void putback_movable_pages(struct list_head *l) {}
> >> diff --git a/mm/Makefile b/mm/Makefile
> >> index 850386a67b3e..f8c7f6b4cebb 100644
> >> --- a/mm/Makefile
> >> +++ b/mm/Makefile
> >> @@ -92,7 +92,7 @@ obj-$(CONFIG_KMSAN) += kmsan/
> >>  obj-$(CONFIG_FAILSLAB) += failslab.o
> >>  obj-$(CONFIG_FAIL_PAGE_ALLOC) += fail_page_alloc.o
> >>  obj-$(CONFIG_MEMTEST) += memtest.o
> >> -obj-$(CONFIG_MIGRATION) += migrate.o
> >> +obj-$(CONFIG_MIGRATION) += migrate.o copy_pages.o
> >>  obj-$(CONFIG_NUMA) += memory-tiers.o
> >>  obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
> >>  obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
> >> diff --git a/mm/copy_pages.c b/mm/copy_pages.c
> >> new file mode 100644
> >> index 000000000000..0e2231199f66
> >> --- /dev/null
> >> +++ b/mm/copy_pages.c
> >> @@ -0,0 +1,186 @@
> >> +// SPDX-License-Identifier: GPL-2.0
> >> +/*
> >> + * Parallel page copy routine.
> >> + */
> >> +
> >> +#include
> >> +#include
> >> +#include
> >> +#include
> >> +#include
> >> +
> >> +
> >> +unsigned int limit_mt_num = 4;
> >> +
> >> +struct copy_item {
> >> +	char *to;
> >> +	char *from;
> >> +	unsigned long chunk_size;
> >> +};
> >> +
> >> +struct copy_page_info {
> >> +	struct work_struct copy_page_work;
> >> +	unsigned long num_items;
> >> +	struct copy_item item_list[];
> >> +};
> >> +
> >> +static void copy_page_routine(char *vto, char *vfrom,
> >> +		unsigned long chunk_size)
> >> +{
> >> +	memcpy(vto, vfrom, chunk_size);
> >> +}
> >> +
> >> +static void copy_page_work_queue_thread(struct work_struct *work)
> >> +{
> >> +	struct copy_page_info *my_work = (struct copy_page_info *)work;
> >> +	int i;
> >> +
> >> +	for (i = 0; i < my_work->num_items; ++i)
> >> +		copy_page_routine(my_work->item_list[i].to,
> >> +				  my_work->item_list[i].from,
> >> +				  my_work->item_list[i].chunk_size);
> >> +}
> >> +
> >> +int copy_page_lists_mt(struct list_head *dst_folios,
> >> +		struct list_head *src_folios, int nr_items)
> >> +{
> >> +	int err = 0;
> >> +	unsigned int total_mt_num = limit_mt_num;
> >> +	int to_node = folio_nid(list_first_entry(dst_folios, struct folio, lru));
> >> +	int i;
> >> +	struct copy_page_info *work_items[32] = {0};
> >> +	const struct cpumask *per_node_cpumask = cpumask_of_node(to_node);
> >
> > Hi,
> >
> > Why do you use the cpumask of dst's node than src for where queueing
> > the works on? Is it for utilizing CPU cache? Isn't it better to use
> > src's node than dst where nothing has been loaded to CPU cache? Or why
>
> Because some vendor’s CPU achieves higher copy throughput by pushing
> data from src to dst, whereas other vendor’s CPU get higher by pulling
> data from dst to src. More in [1].

Ah, okay. You have already added the additional option for it in 5/5.
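
Just to make sure I understand how that option would plug in here, I
guess the node selection ends up looking roughly like the sketch below.
(Only my guess with a made-up 'push' flag - I haven't read 5/5 closely
yet.)

	/* Hypothetical: use CPUs near src (push) or near dst (pull). */
	int copy_node = push ?
		folio_nid(list_first_entry(src_folios, struct folio, lru)) :
		folio_nid(list_first_entry(dst_folios, struct folio, lru));
	const struct cpumask *per_node_cpumask = cpumask_of_node(copy_node);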

> > don't you avoid specifying cpus to queue on but let system_unbound_wq
> > select the appropriate CPUs e.g. idlest CPUs, when the system is not
> > that idle?
>
> Based on wq_select_unbound_cpu()[2], a round robin method is used to
> select target CPUs, not the idlest CPUs. queue_work_node(), which queue jobs

It was just an example based on what I think it should be.. but indeed!

> on a NUMA node, uses select_numa_node_cpu() and it just chooses the
> random (or known as first) CPU from the NUMA node[3]. There is no idleness
> detection in workqueue implementation yet.
>
> In addition, based on CPU topology, not all idle CPUs are equal. For example,
> AMD CPUs have CCDs and two cores from one CCD would saturate the CCD bandwidth.
> This means if you want to achieve high copy throughput, even if all cores
> in a CCD are idle, other idle CPUs from another CCD should be chosen first[4].

Yeah.. I'd like to think more about what the best design for it would be.
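
For instance - and this is only a rough, untested sketch with made-up
names, not a counter-proposal - one direction could be to drop the
hand-built cpu_id_list[] and let the workqueue pick a CPU on the
copy-side node via queue_work_node(), though that still would not
address the per-CCD bandwidth point you raise:

/*
 * Untested sketch: queue each prepared copy_page_info on some CPU of
 * 'nid' and let the workqueue choose which one, instead of pinning
 * works to cpu_id_list[] built from cpumask_of_node().
 */
static void queue_copy_works_on_node(int nid,
				     struct copy_page_info **works,
				     unsigned int nr_works)
{
	unsigned int i;

	for (i = 0; i < nr_works; i++)
		queue_work_node(nid, system_unbound_wq,
				&works[i]->copy_page_work);
}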

> I am planning to reach out to scheduling folks to learn more about CPU scheduling
> and come up with a better workqueue or an alternative for multithreading page
> migration.

Good luck.

	Byungchul

> [1] https://lore.kernel.org/linux-mm/8B66C7BA-96D6-4E04-89F7-13829BF480D7@nvidia.com/
> [2] https://elixir.bootlin.com/linux/v6.13.2/source/kernel/workqueue.c#L2212
> [3] https://elixir.bootlin.com/linux/v6.13.2/source/kernel/workqueue.c#L2408
> [4] https://lore.kernel.org/linux-mm/D969919C-A241-432E-A0E3-353CCD8AC7E8@nvidia.com/
>
> >
> > Byungchul
> >
> >> +	int cpu_id_list[32] = {0};
> >> +	int cpu;
> >> +	int max_items_per_thread;
> >> +	int item_idx;
> >> +	struct folio *src, *src2, *dst, *dst2;
> >> +
> >> +	total_mt_num = min_t(unsigned int, total_mt_num,
> >> +			     cpumask_weight(per_node_cpumask));
> >> +
> >> +	if (total_mt_num > 32)
> >> +		total_mt_num = 32;
> >> +
> >> +	/* Each threads get part of each page, if nr_items < totla_mt_num */
> >> +	if (nr_items < total_mt_num)
> >> +		max_items_per_thread = nr_items;
> >> +	else
> >> +		max_items_per_thread = (nr_items / total_mt_num) +
> >> +				((nr_items % total_mt_num) ? 1 : 0);
> >> +
> >> +
> >> +	for (cpu = 0; cpu < total_mt_num; ++cpu) {
> >> +		work_items[cpu] = kzalloc(sizeof(struct copy_page_info) +
> >> +					  sizeof(struct copy_item) * max_items_per_thread,
> >> +					  GFP_NOWAIT);
> >> +		if (!work_items[cpu]) {
> >> +			err = -ENOMEM;
> >> +			goto free_work_items;
> >> +		}
> >> +	}
> >> +
> >> +	i = 0;
> >> +	/* TODO: need a better cpu selection method */
> >> +	for_each_cpu(cpu, per_node_cpumask) {
> >> +		if (i >= total_mt_num)
> >> +			break;
> >> +		cpu_id_list[i] = cpu;
> >> +		++i;
> >> +	}
> >> +
> >> +	if (nr_items < total_mt_num) {
> >> +		for (cpu = 0; cpu < total_mt_num; ++cpu) {
> >> +			INIT_WORK((struct work_struct *)work_items[cpu],
> >> +				  copy_page_work_queue_thread);
> >> +			work_items[cpu]->num_items = max_items_per_thread;
> >> +		}
> >> +
> >> +		item_idx = 0;
> >> +		dst = list_first_entry(dst_folios, struct folio, lru);
> >> +		dst2 = list_next_entry(dst, lru);
> >> +		list_for_each_entry_safe(src, src2, src_folios, lru) {
> >> +			unsigned long chunk_size = PAGE_SIZE * folio_nr_pages(src) / total_mt_num;
> >> +			/* XXX: not working in HIGHMEM */
> >> +			char *vfrom = page_address(&src->page);
> >> +			char *vto = page_address(&dst->page);
> >> +
> >> +			VM_WARN_ON(PAGE_SIZE * folio_nr_pages(src) % total_mt_num);
> >> +			VM_WARN_ON(folio_nr_pages(dst) != folio_nr_pages(src));
> >> +
> >> +			for (cpu = 0; cpu < total_mt_num; ++cpu) {
> >> +				work_items[cpu]->item_list[item_idx].to =
> >> +					vto + chunk_size * cpu;
> >> +				work_items[cpu]->item_list[item_idx].from =
> >> +					vfrom + chunk_size * cpu;
> >> +				work_items[cpu]->item_list[item_idx].chunk_size =
> >> +					chunk_size;
> >> +			}
> >> +
> >> +			item_idx++;
> >> +			dst = dst2;
> >> +			dst2 = list_next_entry(dst, lru);
> >> +		}
> >> +
> >> +		for (cpu = 0; cpu < total_mt_num; ++cpu)
> >> +			queue_work_on(cpu_id_list[cpu],
> >> +				      system_unbound_wq,
> >> +				      (struct work_struct *)work_items[cpu]);
> >> +	} else {
> >> +		int num_xfer_per_thread = nr_items / total_mt_num;
> >> +		int per_cpu_item_idx;
> >> +
> >> +
> >> +		for (cpu = 0; cpu < total_mt_num; ++cpu) {
> >> +			INIT_WORK((struct work_struct *)work_items[cpu],
> >> +				  copy_page_work_queue_thread);
> >> +
> >> +			work_items[cpu]->num_items = num_xfer_per_thread +
> >> +					(cpu < (nr_items % total_mt_num));
> >> +		}
> >> +
> >> +		cpu = 0;
> >> +		per_cpu_item_idx = 0;
> >> +		item_idx = 0;
> >> +		dst = list_first_entry(dst_folios, struct folio, lru);
> >> +		dst2 = list_next_entry(dst, lru);
> >> +		list_for_each_entry_safe(src, src2, src_folios, lru) {
> >> +			/* XXX: not working in HIGHMEM */
> >> +			work_items[cpu]->item_list[per_cpu_item_idx].to =
> >> +				page_address(&dst->page);
> >> +			work_items[cpu]->item_list[per_cpu_item_idx].from =
> >> +				page_address(&src->page);
> >> +			work_items[cpu]->item_list[per_cpu_item_idx].chunk_size =
> >> +				PAGE_SIZE * folio_nr_pages(src);
> >> +
> >> +			VM_WARN_ON(folio_nr_pages(dst) !=
> >> +				   folio_nr_pages(src));
> >> +
> >> +			per_cpu_item_idx++;
> >> +			item_idx++;
> >> +			dst = dst2;
> >> +			dst2 = list_next_entry(dst, lru);
> >> +
> >> +			if (per_cpu_item_idx == work_items[cpu]->num_items) {
> >> +				queue_work_on(cpu_id_list[cpu],
> >> +					      system_unbound_wq,
> >> +					      (struct work_struct *)work_items[cpu]);
> >> +				per_cpu_item_idx = 0;
> >> +				cpu++;
> >> +			}
> >> +		}
> >> +		if (item_idx != nr_items)
> >> +			pr_warn("%s: only %d out of %d pages are transferred\n",
> >> +				__func__, item_idx - 1, nr_items);
> >> +	}
> >> +
> >> +	/* Wait until it finishes */
> >> +	for (i = 0; i < total_mt_num; ++i)
> >> +		flush_work((struct work_struct *)work_items[i]);
> >> +
> >> +free_work_items:
> >> +	for (cpu = 0; cpu < total_mt_num; ++cpu)
> >> +		kfree(work_items[cpu]);
> >> +
> >> +	return err;
> >> +}
> >> diff --git a/mm/migrate.c b/mm/migrate.c
> >> index 95c4cc4a7823..18440180d747 100644
> >> --- a/mm/migrate.c
> >> +++ b/mm/migrate.c
> >> @@ -1799,7 +1799,7 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
> >>  		int *nr_retry_pages)
> >>  {
> >>  	struct folio *folio, *folio2, *dst, *dst2;
> >> -	int rc, nr_pages = 0, nr_mig_folios = 0;
> >> +	int rc, nr_pages = 0, total_nr_pages = 0, total_nr_folios = 0;
> >>  	int old_page_state = 0;
> >>  	struct anon_vma *anon_vma = NULL;
> >>  	bool is_lru;
> >> @@ -1807,11 +1807,6 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
> >>  	LIST_HEAD(err_src);
> >>  	LIST_HEAD(err_dst);
> >>
> >> -	if (mode != MIGRATE_ASYNC) {
> >> -		*retry += 1;
> >> -		return;
> >> -	}
> >> -
> >>  	/*
> >>  	 * Iterate over the list of locked src/dst folios to copy the metadata
> >>  	 */
> >> @@ -1859,19 +1854,23 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
> >>  			migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
> >>  					       anon_vma, true, ret_folios);
> >>  			migrate_folio_undo_dst(dst, true, put_new_folio, private);
> >> -		} else /* MIGRATEPAGE_SUCCESS */
> >> -			nr_mig_folios++;
> >> +		} else { /* MIGRATEPAGE_SUCCESS */
> >> +			total_nr_pages += nr_pages;
> >> +			total_nr_folios++;
> >> +		}
> >>
> >>  		dst = dst2;
> >>  		dst2 = list_next_entry(dst, lru);
> >>  	}
> >>
> >>  	/* Exit if folio list for batch migration is empty */
> >> -	if (!nr_mig_folios)
> >> +	if (!total_nr_pages)
> >>  		goto out;
> >>
> >>  	/* Batch copy the folios */
> >> -	{
> >> +	if (total_nr_pages > 32) {
> >> +		copy_page_lists_mt(dst_folios, src_folios, total_nr_folios);
> >> +	} else {
> >>  		dst = list_first_entry(dst_folios, struct folio, lru);
> >>  		dst2 = list_next_entry(dst, lru);
> >>  		list_for_each_entry_safe(folio, folio2, src_folios, lru) {
> >> --
> >> 2.45.2
> >>
> >
> Best Regards,
> Yan, Zi