Date: Thu, 13 Feb 2025 21:44:01 +0900
From: Byungchul Park <byungchul@sk.com>
To: Zi Yan <ziy@nvidia.com>
Cc: linux-mm@kvack.org, David Rientjes, Shivank Garg, Aneesh Kumar,
	David Hildenbrand, John Hubbard, Kirill Shutemov, Matthew Wilcox,
	Mel Gorman, "Rao, Bharata Bhasker", Rik van Riel, RaghavendraKT,
	Wei Xu, Suyeon Lee, Lei Chen, "Shukla, Santosh", "Grimm, Jon",
	sj@kernel.org, shy828301@gmail.com, Liam Howlett, Gregory Price,
	"Huang, Ying", kernel_team@skhynix.com
Subject: Re: [RFC PATCH 4/5] mm/migrate: introduce multi-threaded page copy routine
Message-ID: <20250213124401.GA29526@system.software.com>
References: <20250103172419.4148674-1-ziy@nvidia.com>
	<20250103172419.4148674-5-ziy@nvidia.com>
In-Reply-To: <20250103172419.4148674-5-ziy@nvidia.com>

On Fri, Jan 03, 2025 at 12:24:18PM -0500, Zi Yan wrote:
> Now that page copies are batched, multi-threaded page copy can be used
> to increase page copy throughput. Add copy_page_lists_mt() to copy
> pages in a multi-threaded manner.
> Empirical data show that more than 32 base pages are needed before
> multi-threaded page copy pays off, so use 32 as the threshold.
>
> Signed-off-by: Zi Yan
> ---
>  include/linux/migrate.h |   3 +
>  mm/Makefile             |   2 +-
>  mm/copy_pages.c         | 186 ++++++++++++++++++++++++++++++++++++++++
>  mm/migrate.c            |  19 ++--
>  4 files changed, 199 insertions(+), 11 deletions(-)
>  create mode 100644 mm/copy_pages.c
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 29919faea2f1..a0124f4893b0 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -80,6 +80,9 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
>  int folio_migrate_mapping(struct address_space *mapping,
>  		struct folio *newfolio, struct folio *folio, int extra_count);
>
> +int copy_page_lists_mt(struct list_head *dst_folios,
> +		struct list_head *src_folios, int nr_items);
> +
>  #else
>
>  static inline void putback_movable_pages(struct list_head *l) {}
> diff --git a/mm/Makefile b/mm/Makefile
> index 850386a67b3e..f8c7f6b4cebb 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -92,7 +92,7 @@ obj-$(CONFIG_KMSAN) += kmsan/
>  obj-$(CONFIG_FAILSLAB) += failslab.o
>  obj-$(CONFIG_FAIL_PAGE_ALLOC) += fail_page_alloc.o
>  obj-$(CONFIG_MEMTEST) += memtest.o
> -obj-$(CONFIG_MIGRATION) += migrate.o
> +obj-$(CONFIG_MIGRATION) += migrate.o copy_pages.o
>  obj-$(CONFIG_NUMA) += memory-tiers.o
>  obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
>  obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
> diff --git a/mm/copy_pages.c b/mm/copy_pages.c
> new file mode 100644
> index 000000000000..0e2231199f66
> --- /dev/null
> +++ b/mm/copy_pages.c
> @@ -0,0 +1,186 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Parallel page copy routine.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/migrate.h>
> +#include <linux/slab.h>
> +#include <linux/topology.h>
> +#include <linux/workqueue.h>
> +
> +
> +unsigned int limit_mt_num = 4;
> +
> +struct copy_item {
> +	char *to;
> +	char *from;
> +	unsigned long chunk_size;
> +};
> +
> +struct copy_page_info {
> +	struct work_struct copy_page_work;
> +	unsigned long num_items;
> +	struct copy_item item_list[];
> +};
> +
> +static void copy_page_routine(char *vto, char *vfrom,
> +		unsigned long chunk_size)
> +{
> +	memcpy(vto, vfrom, chunk_size);
> +}
> +
> +static void copy_page_work_queue_thread(struct work_struct *work)
> +{
> +	struct copy_page_info *my_work = (struct copy_page_info *)work;
> +	int i;
> +
> +	for (i = 0; i < my_work->num_items; ++i)
> +		copy_page_routine(my_work->item_list[i].to,
> +				  my_work->item_list[i].from,
> +				  my_work->item_list[i].chunk_size);
> +}
> +
> +int copy_page_lists_mt(struct list_head *dst_folios,
> +		struct list_head *src_folios, int nr_items)
> +{
> +	int err = 0;
> +	unsigned int total_mt_num = limit_mt_num;
> +	int to_node = folio_nid(list_first_entry(dst_folios, struct folio, lru));
> +	int i;
> +	struct copy_page_info *work_items[32] = {0};
> +	const struct cpumask *per_node_cpumask = cpumask_of_node(to_node);

Hi,

Why do you use the cpumask of dst's node rather than src's to decide
where to queue the works?  Is it for utilizing the CPU cache?  Isn't
src's node the better choice than dst's, given that nothing has been
loaded into the CPU cache yet?  Or why not avoid specifying the CPUs to
queue on at all, and let system_unbound_wq select appropriate CPUs,
e.g. the idlest ones, when the system is not that idle?  Am I missing
something?
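
For illustration, something like the following is what I have in mind.
This is just a sketch, untested, reusing the work_items[] array and
total_mt_num from your patch, and only meant to show the unpinned
queueing:

	/*
	 * Let the unbound workqueue place each worker instead of
	 * pinning it to a specific CPU with queue_work_on(), so the
	 * scheduler can run the copies on whatever idle CPUs it finds.
	 */
	for (cpu = 0; cpu < total_mt_num; ++cpu)
		queue_work(system_unbound_wq,
			   (struct work_struct *)work_items[cpu]);

	/* Wait until they finish, as the current code already does. */
	for (i = 0; i < total_mt_num; ++i)
		flush_work((struct work_struct *)work_items[i]);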

	Byungchul

> +	int cpu_id_list[32] = {0};
> +	int cpu;
> +	int max_items_per_thread;
> +	int item_idx;
> +	struct folio *src, *src2, *dst, *dst2;
> +
> +	total_mt_num = min_t(unsigned int, total_mt_num,
> +			     cpumask_weight(per_node_cpumask));
> +
> +	if (total_mt_num > 32)
> +		total_mt_num = 32;
> +
> +	/* Each thread gets part of each page, if nr_items < total_mt_num */
> +	if (nr_items < total_mt_num)
> +		max_items_per_thread = nr_items;
> +	else
> +		max_items_per_thread = (nr_items / total_mt_num) +
> +				((nr_items % total_mt_num) ? 1 : 0);
> +
> +	for (cpu = 0; cpu < total_mt_num; ++cpu) {
> +		work_items[cpu] = kzalloc(sizeof(struct copy_page_info) +
> +				sizeof(struct copy_item) * max_items_per_thread,
> +				GFP_NOWAIT);
> +		if (!work_items[cpu]) {
> +			err = -ENOMEM;
> +			goto free_work_items;
> +		}
> +	}
> +
> +	i = 0;
> +	/* TODO: need a better cpu selection method */
> +	for_each_cpu(cpu, per_node_cpumask) {
> +		if (i >= total_mt_num)
> +			break;
> +		cpu_id_list[i] = cpu;
> +		++i;
> +	}
> +
> +	if (nr_items < total_mt_num) {
> +		for (cpu = 0; cpu < total_mt_num; ++cpu) {
> +			INIT_WORK((struct work_struct *)work_items[cpu],
> +				  copy_page_work_queue_thread);
> +			work_items[cpu]->num_items = max_items_per_thread;
> +		}
> +
> +		item_idx = 0;
> +		dst = list_first_entry(dst_folios, struct folio, lru);
> +		dst2 = list_next_entry(dst, lru);
> +		list_for_each_entry_safe(src, src2, src_folios, lru) {
> +			unsigned long chunk_size = PAGE_SIZE * folio_nr_pages(src) / total_mt_num;
> +			/* XXX: not working in HIGHMEM */
> +			char *vfrom = page_address(&src->page);
> +			char *vto = page_address(&dst->page);
> +
> +			VM_WARN_ON(PAGE_SIZE * folio_nr_pages(src) % total_mt_num);
> +			VM_WARN_ON(folio_nr_pages(dst) != folio_nr_pages(src));
> +
> +			for (cpu = 0; cpu < total_mt_num; ++cpu) {
> +				work_items[cpu]->item_list[item_idx].to =
> +					vto + chunk_size * cpu;
> +				work_items[cpu]->item_list[item_idx].from =
> +					vfrom + chunk_size * cpu;
> +				work_items[cpu]->item_list[item_idx].chunk_size =
> +					chunk_size;
> +			}
> +
> +			item_idx++;
> +			dst = dst2;
> +			dst2 = list_next_entry(dst, lru);
> +		}
> +
> +		for (cpu = 0; cpu < total_mt_num; ++cpu)
> +			queue_work_on(cpu_id_list[cpu],
> +				      system_unbound_wq,
> +				      (struct work_struct *)work_items[cpu]);
> +	} else {
> +		int num_xfer_per_thread = nr_items / total_mt_num;
> +		int per_cpu_item_idx;
> +
> +		for (cpu = 0; cpu < total_mt_num; ++cpu) {
> +			INIT_WORK((struct work_struct *)work_items[cpu],
> +				  copy_page_work_queue_thread);
> +
> +			work_items[cpu]->num_items = num_xfer_per_thread +
> +					(cpu < (nr_items % total_mt_num));
> +		}
> +
> +		cpu = 0;
> +		per_cpu_item_idx = 0;
> +		item_idx = 0;
> +		dst = list_first_entry(dst_folios, struct folio, lru);
> +		dst2 = list_next_entry(dst, lru);
> +		list_for_each_entry_safe(src, src2, src_folios, lru) {
> +			/* XXX: not working in HIGHMEM */
> +			work_items[cpu]->item_list[per_cpu_item_idx].to =
> +				page_address(&dst->page);
> +			work_items[cpu]->item_list[per_cpu_item_idx].from =
> +				page_address(&src->page);
> +			work_items[cpu]->item_list[per_cpu_item_idx].chunk_size =
> +				PAGE_SIZE * folio_nr_pages(src);
> +
> +			VM_WARN_ON(folio_nr_pages(dst) !=
> +				   folio_nr_pages(src));
> +
> +			per_cpu_item_idx++;
> +			item_idx++;
> +			dst = dst2;
> +			dst2 = list_next_entry(dst, lru);
> +
> +			if (per_cpu_item_idx == work_items[cpu]->num_items) {
> +				queue_work_on(cpu_id_list[cpu],
> +					      system_unbound_wq,
> +					      (struct work_struct *)work_items[cpu]);
> +				per_cpu_item_idx = 0;
> +				cpu++;
> +			}
> +		}
> +		if (item_idx != nr_items)
> +			pr_warn("%s: only %d out of %d pages are transferred\n",
> +				__func__, item_idx, nr_items);
> +	}
> +
> +	/* Wait until it finishes */
> +	for (i = 0; i < total_mt_num; ++i)
> +		flush_work((struct work_struct *)work_items[i]);
> +
> +free_work_items:
> +	for (cpu = 0; cpu < total_mt_num; ++cpu)
> +		kfree(work_items[cpu]);
> +
> +	return err;
> +}
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 95c4cc4a7823..18440180d747 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1799,7 +1799,7 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
>  		int *nr_retry_pages)
>  {
>  	struct folio *folio, *folio2, *dst, *dst2;
> -	int rc, nr_pages = 0, nr_mig_folios = 0;
> +	int rc, nr_pages = 0, total_nr_pages = 0, total_nr_folios = 0;
>  	int old_page_state = 0;
>  	struct anon_vma *anon_vma = NULL;
>  	bool is_lru;
> @@ -1807,11 +1807,6 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
>  	LIST_HEAD(err_src);
>  	LIST_HEAD(err_dst);
>
> -	if (mode != MIGRATE_ASYNC) {
> -		*retry += 1;
> -		return;
> -	}
> -
>  	/*
>  	 * Iterate over the list of locked src/dst folios to copy the metadata
>  	 */
> @@ -1859,19 +1854,23 @@ static void migrate_folios_batch_move(struct list_head *src_folios,
>  			migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
>  					       anon_vma, true, ret_folios);
>  			migrate_folio_undo_dst(dst, true, put_new_folio, private);
> -		} else /* MIGRATEPAGE_SUCCESS */
> -			nr_mig_folios++;
> +		} else { /* MIGRATEPAGE_SUCCESS */
> +			total_nr_pages += nr_pages;
> +			total_nr_folios++;
> +		}
>
>  		dst = dst2;
>  		dst2 = list_next_entry(dst, lru);
>  	}
>
>  	/* Exit if folio list for batch migration is empty */
> -	if (!nr_mig_folios)
> +	if (!total_nr_pages)
>  		goto out;
>
>  	/* Batch copy the folios */
> -	{
> +	if (total_nr_pages > 32) {
> +		copy_page_lists_mt(dst_folios, src_folios, total_nr_folios);
> +	} else {
>  		dst = list_first_entry(dst_folios, struct folio, lru);
>  		dst2 = list_next_entry(dst, lru);
>  		list_for_each_entry_safe(folio, folio2, src_folios, lru) {
> --
> 2.45.2
>
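
By the way, to double-check my reading of the nr_items < total_mt_num
path: each folio's copy is split evenly across all the workers, so e.g.
a 2MB THP copied by 4 workers gives each worker a 512KB chunk at its
own offset. A standalone userspace sketch of that address arithmetic,
with names that are mine rather than from the patch:

	#include <string.h>

	/*
	 * Split one folio copy into per-worker chunks the way
	 * copy_page_lists_mt() fills item_list[]: worker t copies
	 * bytes [t * chunk, (t + 1) * chunk) of the folio.  Done
	 * serially here; the patch queues one chunk per worker.
	 */
	static void split_folio_copy(char *vto, char *vfrom,
				     size_t folio_size,
				     unsigned int nr_workers)
	{
		size_t chunk = folio_size / nr_workers; /* assumed to divide evenly */
		unsigned int t;

		for (t = 0; t < nr_workers; ++t)
			memcpy(vto + t * chunk, vfrom + t * chunk, chunk);
	}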