From: Zi Yan <ziy@nvidia.com>
To: Gregory Price <gourry@gourry.net>
Cc: linux-mm@kvack.org, David Rientjes <rientjes@google.com>,
Shivank Garg <shivankg@amd.com>,
Aneesh Kumar <AneeshKumar.KizhakeVeetil@arm.com>,
David Hildenbrand <david@redhat.com>,
John Hubbard <jhubbard@nvidia.com>,
Kirill Shutemov <k.shutemov@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
Mel Gorman <mel.gorman@gmail.com>,
"Rao, Bharata Bhasker" <bharata@amd.com>,
Rik van Riel <riel@surriel.com>,
RaghavendraKT <Raghavendra.KodsaraThimmappa@amd.com>,
Wei Xu <weixugc@google.com>, Suyeon Lee <leesuyeon0506@gmail.com>,
Lei Chen <leillc@google.com>,
"Shukla, Santosh" <santosh.shukla@amd.com>,
"Grimm, Jon" <jon.grimm@amd.com>,
sj@kernel.org, shy828301@gmail.com,
Liam Howlett <liam.howlett@oracle.com>,
Gregory Price <gregory.price@memverge.com>,
"Huang, Ying" <ying.huang@linux.alibaba.com>
Subject: Re: [RFC PATCH 0/5] Accelerate page migration with batching and multi threads
Date: Fri, 03 Jan 2025 14:32:19 -0500
Message-ID: <75226395-5D07-4FBC-B2A4-368809336C48@nvidia.com>
In-Reply-To: <Z3g3yMxh2OMOBERT@gourry-fedora-PF4VCD3F>
On 3 Jan 2025, at 14:17, Gregory Price wrote:
> On Fri, Jan 03, 2025 at 12:24:14PM -0500, Zi Yan wrote:
>> Hi all,
>>
>> This patchset accelerates page migration by batching folio copy operations and
>> using multiple CPU threads. It is based on Shivank's "Enhancements to Page
>> Migration with Batch Offloading via DMA" patchset[1] and my original accelerated
>> page migration patchset[2], and applies on top of mm-everything-2025-01-03-05-59.
>> The last patch is for testing purposes only and should not be considered for merging.
>>
>
> This is well timed as I've been testing a batch-migration variant of
> migrate_misplaced_folio for my pagecache promotion work (attached).
>
> I will add this to my pagecache branch and give it a test at some point.
Great. Thanks.
>
> Quick question: is the multi-threaded movement supported in the context
> of task_work? I.e., in which contexts is the multi-threaded path
> safe or unsafe (inline in a syscall, async only, etc.)?
It should work in any context, such as a syscall, memory compaction, and so on,
since it just distributes the memcpy work across CPUs using a workqueue.
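
To make that concrete, here is a minimal sketch of the idea, purely for
illustration: all names (copy_chunk_work, folio_copy_threaded, NR_COPY_THREADS)
are hypothetical, and this is not the code in the patchset, which splits the
work differently and handles more cases.

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/workqueue.h>

#define NR_COPY_THREADS 4

struct copy_chunk_work {
	struct work_struct work;
	struct folio *dst, *src;
	unsigned int first_page, nr_pages;
};

static void copy_chunk_workfn(struct work_struct *work)
{
	struct copy_chunk_work *cw =
		container_of(work, struct copy_chunk_work, work);
	unsigned int i;

	/* Each worker copies only its own sub-range of the folio. */
	for (i = 0; i < cw->nr_pages; i++)
		copy_highpage(folio_page(cw->dst, cw->first_page + i),
			      folio_page(cw->src, cw->first_page + i));
}

static void folio_copy_threaded(struct folio *dst, struct folio *src)
{
	struct copy_chunk_work cw[NR_COPY_THREADS];
	unsigned int nr = folio_nr_pages(src);
	unsigned int chunk = DIV_ROUND_UP(nr, NR_COPY_THREADS);
	unsigned int t, start = 0;

	/* Fan the copy out to the unbound workqueue, one chunk per worker. */
	for (t = 0; t < NR_COPY_THREADS; t++) {
		cw[t].dst = dst;
		cw[t].src = src;
		cw[t].first_page = start;
		cw[t].nr_pages = min(chunk, nr - start);
		INIT_WORK(&cw[t].work, copy_chunk_workfn);
		queue_work(system_unbound_wq, &cw[t].work);
		start += cw[t].nr_pages;
	}

	/* Wait for every chunk to finish before the migration proceeds. */
	for (t = 0; t < NR_COPY_THREADS; t++)
		flush_work(&cw[t].work);
}

Since the waiting side only needs to sleep (flush_work()), the same path can be
used from a syscall, from NUMA hint fault handling via task_work, or from
compaction, as long as the caller is in a sleepable context.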
>
> ~Gregory
>
> ---
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 9438cc7c2aeb..17baf63964c0 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -146,6 +146,9 @@ int migrate_misplaced_folio_prepare(struct folio *folio,
>  		struct vm_area_struct *vma, int node);
>  int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
>  		int node);
> +int migrate_misplaced_folio_batch(struct list_head *foliolist,
> +				  struct vm_area_struct *vma,
> +				  int node);
>  #else
>  static inline int migrate_misplaced_folio_prepare(struct folio *folio,
>  		struct vm_area_struct *vma, int node)
> @@ -157,6 +160,12 @@ static inline int migrate_misplaced_folio(struct folio *folio,
>  {
>  	return -EAGAIN; /* can't migrate now */
>  }
> +static inline int migrate_misplaced_folio_batch(struct list_head *foliolist,
> +					struct vm_area_struct *vma,
> +					int node)
> +{
> +	return -EAGAIN; /* can't migrate now */
> +}
>  #endif /* CONFIG_NUMA_BALANCING */
>
> #ifdef CONFIG_MIGRATION
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 459f396f7bc1..454fd93c4cc7 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2608,5 +2608,27 @@ int migrate_misplaced_folio(struct folio *folio, struct vm_area_struct *vma,
>  	BUG_ON(!list_empty(&migratepages));
>  	return nr_remaining ? -EAGAIN : 0;
>  }
> +
> +int migrate_misplaced_folio_batch(struct list_head *folio_list,
> +				  struct vm_area_struct *vma,
> +				  int node)
> +{
> +	pg_data_t *pgdat = NODE_DATA(node);
> +	unsigned int nr_succeeded;
> +	int nr_remaining;
> +
> +	nr_remaining = migrate_pages(folio_list, alloc_misplaced_dst_folio,
> +				     NULL, node, MIGRATE_ASYNC,
> +				     MR_NUMA_MISPLACED, &nr_succeeded);
> +	if (nr_remaining)
> +		putback_movable_pages(folio_list);
> +
> +	if (nr_succeeded) {
> +		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
> +		mod_node_page_state(pgdat, PGPROMOTE_SUCCESS, nr_succeeded);
> +	}
> +	BUG_ON(!list_empty(folio_list));
> +	return nr_remaining ? -EAGAIN : 0;
> +}
>  #endif /* CONFIG_NUMA_BALANCING */
>  #endif /* CONFIG_NUMA */
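
For what it's worth, a caller of the batch helper could look roughly like the
sketch below. This is hypothetical usage, not code from the attached diff; it
just reuses the existing migrate_misplaced_folio_prepare() isolation step and
collects the isolated folios onto a list first (promote_folios_to_node is a
made-up name).

#include <linux/migrate.h>
#include <linux/mm.h>

/* Hypothetical caller sketch: batch-promote a set of misplaced folios. */
static int promote_folios_to_node(struct folio **folios, unsigned int nr,
				  struct vm_area_struct *vma, int node)
{
	LIST_HEAD(promote_list);
	unsigned int i;

	for (i = 0; i < nr; i++) {
		/* Prepare isolates the folio; skip folios it rejects. */
		if (migrate_misplaced_folio_prepare(folios[i], vma, node))
			continue;
		list_add(&folios[i]->lru, &promote_list);
	}

	if (list_empty(&promote_list))
		return -EAGAIN;

	return migrate_misplaced_folio_batch(&promote_list, vma, node);
}
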
Best Regards,
Yan, Zi