From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Shivank Garg <shivankg@amd.com>
Cc: <akpm@linux-foundation.org>, <david@redhat.com>, <ziy@nvidia.com>,
<willy@infradead.org>, <matthew.brost@intel.com>,
<joshua.hahnjy@gmail.com>, <rakie.kim@sk.com>, <byungchul@sk.com>,
<gourry@gourry.net>, <ying.huang@linux.alibaba.com>,
<apopple@nvidia.com>, <lorenzo.stoakes@oracle.com>,
<Liam.Howlett@oracle.com>, <vbabka@suse.cz>, <rppt@kernel.org>,
<surenb@google.com>, <mhocko@suse.com>, <vkoul@kernel.org>,
<lucas.demarchi@intel.com>, <rdunlap@infradead.org>,
<jgg@ziepe.ca>, <kuba@kernel.org>, <justonli@chromium.org>,
<ivecera@redhat.com>, <dave.jiang@intel.com>,
<dan.j.williams@intel.com>, <rientjes@google.com>,
<Raghavendra.KodsaraThimmappa@amd.com>, <bharata@amd.com>,
<alirad.malek@zptcorp.com>, <yiannis@zptcorp.com>,
<weixugc@google.com>, <linux-kernel@vger.kernel.org>,
<linux-mm@kvack.org>
Subject: Re: [RFC V3 1/9] mm/migrate: factor out code in move_to_new_folio() and migrate_folio_move()
Date: Thu, 2 Oct 2025 11:30:19 +0100
Message-ID: <20251002113019.000074df@huawei.com>
In-Reply-To: <20250923174752.35701-2-shivankg@amd.com>
On Tue, 23 Sep 2025 17:47:36 +0000
Shivank Garg <shivankg@amd.com> wrote:
> From: Zi Yan <ziy@nvidia.com>
>
> No function change is intended. The factored out code will be reused in
> an upcoming batched folio move function.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
Hi. A few code-structure comments inline.

The naming of the various helpers needs some more thought, I think; as it
stands, the loss of readability in the existing code is painful.

Jonathan
> ---
> mm/migrate.c | 106 ++++++++++++++++++++++++++++++++-------------------
> 1 file changed, 67 insertions(+), 39 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 9e5ef39ce73a..ad03e7257847 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1061,19 +1061,7 @@ static int fallback_migrate_folio(struct address_space *mapping,
> return migrate_folio(mapping, dst, src, mode);
> }
>
> -/*
> - * Move a src folio to a newly allocated dst folio.
> - *
> - * The src and dst folios are locked and the src folios was unmapped from
> - * the page tables.
> - *
> - * On success, the src folio was replaced by the dst folio.
> - *
> - * Return value:
> - * < 0 - error code
> - * MIGRATEPAGE_SUCCESS - success
> - */
> -static int move_to_new_folio(struct folio *dst, struct folio *src,
> +static int _move_to_new_folio_prep(struct folio *dst, struct folio *src,
I'm not sure the single _ prefix is needed. Or maybe it should be __ to
match existing helpers such as __buffer_migrate_folio().
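i.e. (rename purely illustrative):

	static int __move_to_new_folio_prep(struct folio *dst, struct folio *src,
					    enum migrate_mode mode);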
> enum migrate_mode mode)
> {
> struct address_space *mapping = folio_mapping(src);
> @@ -1098,7 +1086,12 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
> mode);
> else
> rc = fallback_migrate_folio(mapping, dst, src, mode);
> + return rc;
May be worth switching this whole function to early returns given we no
longer have a shared block of stuff to do at the end:

	if (!mapping)
		return migrate_folio(mapping, dst, src, mode);

	if (mapping_inaccessible(mapping))
		return -EOPNOTSUPP;

	if (mapping->a_ops->migrate_folio)
		return mapping->a_ops->migrate_folio(mapping, dst, src, mode);

	return fallback_migrate_folio(mapping, dst, src, mode);
> +}
>
> +static void _move_to_new_folio_finalize(struct folio *dst, struct folio *src,
> + int rc)
> +{
> if (rc == MIGRATEPAGE_SUCCESS) {
Perhaps

	if (rc != MIGRATEPAGE_SUCCESS)
		return;

	/*
	 * For pagecache folios, ...
	 */
	...

(note the function is void now, so there is no rc to return at the end).
Unless other stuff is likely to get added in here. Or drag the condition
out to the caller.
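The latter would look something like this at the call site (a sketch only;
_move_to_new_folio_finalize() then loses its rc parameter):

	rc = _move_to_new_folio_prep(dst, src, mode);
	if (rc == MIGRATEPAGE_SUCCESS)
		_move_to_new_folio_finalize(dst, src);

	return rc;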
> /*
> * For pagecache folios, src->mapping must be cleared before src
> @@ -1110,6 +1103,29 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
> if (likely(!folio_is_zone_device(dst)))
> flush_dcache_folio(dst);
> }
> +}
> +
> +/*
> + * Move a src folio to a newly allocated dst folio.
> + *
> + * The src and dst folios are locked and the src folios was unmapped from
> + * the page tables.
> + *
> + * On success, the src folio was replaced by the dst folio.
> + *
> + * Return value:
> + * < 0 - error code
> + * MIGRATEPAGE_SUCCESS - success
> + */
> +static int move_to_new_folio(struct folio *dst, struct folio *src,
> + enum migrate_mode mode)
> +{
> + int rc;
> +
> + rc = _move_to_new_folio_prep(dst, src, mode);
> +
> + _move_to_new_folio_finalize(dst, src, rc);
> +
> return rc;
> }
>
> @@ -1345,32 +1361,9 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
> return rc;
> }
>
> -/* Migrate the folio to the newly allocated folio in dst. */
> -static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
> - struct folio *src, struct folio *dst,
> - enum migrate_mode mode, enum migrate_reason reason,
> - struct list_head *ret)
> +static void _migrate_folio_move_finalize1(struct folio *src, struct folio *dst,
> + int old_page_state)
> {
> - int rc;
> - int old_page_state = 0;
> - struct anon_vma *anon_vma = NULL;
> - struct list_head *prev;
> -
> - __migrate_folio_extract(dst, &old_page_state, &anon_vma);
> - prev = dst->lru.prev;
> - list_del(&dst->lru);
> -
> - if (unlikely(page_has_movable_ops(&src->page))) {
> - rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
> - if (rc)
> - goto out;
> - goto out_unlock_both;
> - }
> -
> - rc = move_to_new_folio(dst, src, mode);
> - if (rc)
> - goto out;
> -
> /*
> * When successful, push dst to LRU immediately: so that if it
> * turns out to be an mlocked page, remove_migration_ptes() will
> @@ -1386,8 +1379,12 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
>
> if (old_page_state & PAGE_WAS_MAPPED)
> remove_migration_ptes(src, dst, 0);
> +}
>
> -out_unlock_both:
> +static void _migrate_folio_move_finalize2(struct folio *src, struct folio *dst,
> + enum migrate_reason reason,
> + struct anon_vma *anon_vma)
> +{
> folio_unlock(dst);
> folio_set_owner_migrate_reason(dst, reason);
> /*
> @@ -1407,6 +1404,37 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
> put_anon_vma(anon_vma);
> folio_unlock(src);
> migrate_folio_done(src, reason);
> +}
> +
> +/* Migrate the folio to the newly allocated folio in dst. */
> +static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
> + struct folio *src, struct folio *dst,
> + enum migrate_mode mode, enum migrate_reason reason,
> + struct list_head *ret)
> +{
> + int rc;
> + int old_page_state = 0;
> + struct anon_vma *anon_vma = NULL;
> + struct list_head *prev;
> +
> + __migrate_folio_extract(dst, &old_page_state, &anon_vma);
> + prev = dst->lru.prev;
> + list_del(&dst->lru);
> +
> + if (unlikely(page_has_movable_ops(&src->page))) {
> + rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
> + if (rc)
> + goto out;
> + goto out_unlock_both;
I would drop this goto...
> + }
and do

	} else {
		rc = move_to_new_folio(dst, src, mode);
		if (rc)
			goto out;

		_migrate_folio_move_finalize1(src, dst, old_page_state);
	}
	_migrate_folio_move_finalize2(src, dst, reason, anon_vma);

	return rc;

This makes sense now because the amount of extra-indented code in this
approach is much smaller than it would have been before you factored
things out.
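For reference, with the movable_ops branch included, the whole hunk then
reads roughly as follows (a sketch only; the out: error path is
unchanged):

	if (unlikely(page_has_movable_ops(&src->page))) {
		rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
		if (rc)
			goto out;
	} else {
		rc = move_to_new_folio(dst, src, mode);
		if (rc)
			goto out;

		_migrate_folio_move_finalize1(src, dst, old_page_state);
	}
	_migrate_folio_move_finalize2(src, dst, reason, anon_vma);

	return rc;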
> +
> + rc = move_to_new_folio(dst, src, mode);
> + if (rc)
> + goto out;
> +
Hmm. These two functions might be useful, but as named they hurt
readability here. Can we come up with some more meaningful names?
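Perhaps something that says what each stage actually does, e.g. (names
purely illustrative: the first restores LRU state and migration PTEs, the
second unlocks both folios and completes the migration):

	_migrate_folio_move_finalize1() -> migrate_folio_move_lru_and_ptes_restore()
	_migrate_folio_move_finalize2() -> migrate_folio_move_unlock_done()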
> + _migrate_folio_move_finalize1(src, dst, old_page_state);
> +out_unlock_both:
> + _migrate_folio_move_finalize2(src, dst, reason, anon_vma);
>
> return rc;
> out: