From: "Garg, Shivank" <shivankg@amd.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>, akpm@linux-foundation.org
Cc: lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
vbabka@kernel.org, willy@infradead.org, rppt@kernel.org,
surenb@google.com, mhocko@suse.com, ziy@nvidia.com,
matthew.brost@intel.com, joshua.hahnjy@gmail.com,
rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net,
ying.huang@linux.alibaba.com, apopple@nvidia.com,
dave@stgolabs.net, Jonathan.Cameron@huawei.com, rkodsara@amd.com,
vkoul@kernel.org, bharata@amd.com, sj@kernel.org,
weixugc@google.com, dan.j.williams@intel.com,
rientjes@google.com, xuezhengchu@huawei.com, yiannis@zptcorp.com,
dave.hansen@intel.com, hannes@cmpxchg.org, jhubbard@nvidia.com,
peterx@redhat.com, riel@surriel.com, shakeel.butt@linux.dev,
stalexan@redhat.com, tj@kernel.org, nifan.cxl@gmail.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH v4 2/6] mm/migrate: skip data copy for already-copied folios
Date: Sun, 15 Mar 2026 23:55:20 +0530
Message-ID: <99d3a0be-d33e-47c8-88b4-3f0e21f4a1cf@amd.com>
In-Reply-To: <4bfec2c1-9747-49d2-a490-87ab204cec1c@kernel.org>
On 3/12/2026 3:14 PM, David Hildenbrand (Arm) wrote:
> On 3/9/26 13:07, Shivank Garg wrote:
>> Add a PAGE_ALREADY_COPIED flag to the dst->private migration state.
>> When set, __migrate_folio() skips folio_mc_copy() and performs
>> metadata-only migration. All callers currently pass
>> already_copied=false. The batch-copy path enables it in a later patch.
>>
>> Move the dst->private state enum earlier in the file so
>> __migrate_folio() and move_to_new_folio() can see PAGE_ALREADY_COPIED.
>>
>> Signed-off-by: Shivank Garg <shivankg@amd.com>
>> ---
>> mm/migrate.c | 52 +++++++++++++++++++++++++++++++---------------------
>> 1 file changed, 31 insertions(+), 21 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 1bf2cf8c44dd..1d8c1fb627c9 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -848,6 +848,18 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
>> }
>> EXPORT_SYMBOL(folio_migrate_flags);
>>
>> +/*
>> + * To record some information during migration, we use unused private
>> + * field of struct folio of the newly allocated destination folio.
>> + * This is safe because nobody is using it except us.
>> + */
>> +enum {
>> + PAGE_WAS_MAPPED = BIT(0),
>> + PAGE_WAS_MLOCKED = BIT(1),
>> + PAGE_ALREADY_COPIED = BIT(2),
>> + PAGE_OLD_STATES = PAGE_WAS_MAPPED | PAGE_WAS_MLOCKED | PAGE_ALREADY_COPIED,
>
> All these states really only apply to proper folios (not movable_ops).
> So once we complete decoupling movable_ops migration from folio
> migration, these flags would only appear in the folio migration part.
>
> Can we convert them first, to state clearly that these are
> folio migration flags?
>
> FOLIO_MF_WAS_MAPPED
>
> ...
>
Sure, done.
Should I fold it into the series, or send it as an independent patch? This
series will likely take a few more rounds of review and discussion.
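As an aside for readers skimming the archive: these flags travel in the low
bits of dst->private, alongside the anon_vma pointer, which only works because
struct anon_vma objects are aligned well enough that those bits are otherwise
zero. A minimal userspace sketch of that packing trick (hypothetical names,
assumed 8-byte alignment; illustration only, not the kernel code):

/* pack_demo.c: illustration only, not kernel code. Assumes the pointer
 * is at least 8-byte aligned so the three flag bits are free. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

enum {
	WAS_MAPPED     = 1u << 0,
	WAS_MLOCKED    = 1u << 1,
	ALREADY_COPIED = 1u << 2,
	OLD_STATES     = WAS_MAPPED | WAS_MLOCKED | ALREADY_COPIED,
};

/* Analogue of __migrate_folio_record(): stash flags in the low bits. */
static void *pack(void *anon_vma, unsigned int state)
{
	assert(((uintptr_t)anon_vma & OLD_STATES) == 0);
	return (void *)((uintptr_t)anon_vma | state);
}

/* Analogue of __migrate_folio_extract(): split the flags back out. */
static void *unpack(void *private, unsigned int *state)
{
	*state = (uintptr_t)private & OLD_STATES;
	return (void *)((uintptr_t)private & ~(uintptr_t)OLD_STATES);
}

int main(void)
{
	static _Alignas(8) long fake_anon_vma; /* stand-in object */
	unsigned int state;
	void *private = pack(&fake_anon_vma, WAS_MAPPED | ALREADY_COPIED);
	void *ptr = unpack(private, &state);

	printf("pointer intact: %d, was mapped: %d, already copied: %d\n",
	       ptr == (void *)&fake_anon_vma,
	       !!(state & WAS_MAPPED), !!(state & ALREADY_COPIED));
	return 0;
}

The kernel's __migrate_folio_record()/__migrate_folio_extract() in the patch
below do the same thing with dst->private.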
Subject: [PATCH] mm/migrate: rename PAGE_ migration flags to FOLIO_MF_
These flags only track folio-specific state during migration and are
not used for movable_ops pages. Rename the enum values and the
old_page_state variable to match.
No functional change.
Suggested-by: David Hildenbrand <david@kernel.org>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
mm/migrate.c | 46 +++++++++++++++++++++++-----------------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 1bf2cf8c44dd..8c9115cc4586 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1133,26 +1133,26 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
  * This is safe because nobody is using it except us.
  */
 enum {
-	PAGE_WAS_MAPPED = BIT(0),
-	PAGE_WAS_MLOCKED = BIT(1),
-	PAGE_OLD_STATES = PAGE_WAS_MAPPED | PAGE_WAS_MLOCKED,
+	FOLIO_MF_WAS_MAPPED = BIT(0),
+	FOLIO_MF_WAS_MLOCKED = BIT(1),
+	FOLIO_MF_OLD_STATES = FOLIO_MF_WAS_MAPPED | FOLIO_MF_WAS_MLOCKED,
 };
 
 static void __migrate_folio_record(struct folio *dst,
-				   int old_page_state,
+				   int old_folio_state,
 				   struct anon_vma *anon_vma)
 {
-	dst->private = (void *)anon_vma + old_page_state;
+	dst->private = (void *)anon_vma + old_folio_state;
 }
 
 static void __migrate_folio_extract(struct folio *dst,
-				   int *old_page_state,
+				   int *old_folio_state,
 				   struct anon_vma **anon_vmap)
 {
 	unsigned long private = (unsigned long)dst->private;
 
-	*anon_vmap = (struct anon_vma *)(private & ~PAGE_OLD_STATES);
-	*old_page_state = private & PAGE_OLD_STATES;
+	*anon_vmap = (struct anon_vma *)(private & ~FOLIO_MF_OLD_STATES);
+	*old_folio_state = private & FOLIO_MF_OLD_STATES;
 	dst->private = NULL;
 }
 
@@ -1207,7 +1207,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 {
 	struct folio *dst;
 	int rc = -EAGAIN;
-	int old_page_state = 0;
+	int old_folio_state = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool locked = false;
 	bool dst_locked = false;
@@ -1251,7 +1251,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	}
 	locked = true;
 	if (folio_test_mlocked(src))
-		old_page_state |= PAGE_WAS_MLOCKED;
+		old_folio_state |= FOLIO_MF_WAS_MLOCKED;
 
 	if (folio_test_writeback(src)) {
 		/*
@@ -1300,7 +1300,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	dst_locked = true;
 
 	if (unlikely(page_has_movable_ops(&src->page))) {
-		__migrate_folio_record(dst, old_page_state, anon_vma);
+		__migrate_folio_record(dst, old_folio_state, anon_vma);
 		return 0;
 	}
 
@@ -1326,11 +1326,11 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		VM_BUG_ON_FOLIO(folio_test_anon(src) &&
 			       !folio_test_ksm(src) && !anon_vma, src);
 		try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
-		old_page_state |= PAGE_WAS_MAPPED;
+		old_folio_state |= FOLIO_MF_WAS_MAPPED;
 	}
 
 	if (!folio_mapped(src)) {
-		__migrate_folio_record(dst, old_page_state, anon_vma);
+		__migrate_folio_record(dst, old_folio_state, anon_vma);
 		return 0;
 	}
 
@@ -1342,7 +1342,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	if (rc == -EAGAIN)
 		ret = NULL;
 
-	migrate_folio_undo_src(src, old_page_state & PAGE_WAS_MAPPED,
+	migrate_folio_undo_src(src, old_folio_state & FOLIO_MF_WAS_MAPPED,
 			       anon_vma, locked, ret);
 	migrate_folio_undo_dst(dst, dst_locked, put_new_folio, private);
 
@@ -1356,11 +1356,11 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 		struct list_head *ret)
 {
 	int rc;
-	int old_page_state = 0;
+	int old_folio_state = 0;
 	struct anon_vma *anon_vma = NULL;
 	struct list_head *prev;
 
-	__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+	__migrate_folio_extract(dst, &old_folio_state, &anon_vma);
 	prev = dst->lru.prev;
 	list_del(&dst->lru);
 
@@ -1385,10 +1385,10 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 * isolated from the unevictable LRU: but this case is the easiest.
 	 */
 	folio_add_lru(dst);
-	if (old_page_state & PAGE_WAS_MLOCKED)
+	if (old_folio_state & FOLIO_MF_WAS_MLOCKED)
 		lru_add_drain();
 
-	if (old_page_state & PAGE_WAS_MAPPED)
+	if (old_folio_state & FOLIO_MF_WAS_MAPPED)
 		remove_migration_ptes(src, dst, 0);
 
 out_unlock_both:
@@ -1420,11 +1420,11 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 */
 	if (rc == -EAGAIN) {
 		list_add(&dst->lru, prev);
-		__migrate_folio_record(dst, old_page_state, anon_vma);
+		__migrate_folio_record(dst, old_folio_state, anon_vma);
 		return rc;
 	}
 
-	migrate_folio_undo_src(src, old_page_state & PAGE_WAS_MAPPED,
+	migrate_folio_undo_src(src, old_folio_state & FOLIO_MF_WAS_MAPPED,
 			       anon_vma, true, ret);
 	migrate_folio_undo_dst(dst, true, put_new_folio, private);
 
@@ -1758,11 +1758,11 @@ static void migrate_folios_undo(struct list_head *src_folios,
 	dst = list_first_entry(dst_folios, struct folio, lru);
 	dst2 = list_next_entry(dst, lru);
 	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
-		int old_page_state = 0;
+		int old_folio_state = 0;
 		struct anon_vma *anon_vma = NULL;
 
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+		__migrate_folio_extract(dst, &old_folio_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_folio_state & FOLIO_MF_WAS_MAPPED,
 				anon_vma, true, ret_folios);
 		list_del(&dst->lru);
 		migrate_folio_undo_dst(dst, true, put_new_folio, private);
--
Thanks,
Shivank