From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: Wei Yang <richard.weiyang@gmail.com>
Cc: akpm@linux-foundation.org, david@redhat.com, ziy@nvidia.com,
baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
baohua@kernel.org, lance.yang@linux.dev, linux-mm@kvack.org
Subject: Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
Date: Fri, 17 Oct 2025 10:44:21 +0100 [thread overview]
Message-ID: <7ed84d61-0a7b-4961-82eb-fc8d38b77162@lucifer.local> (raw)
In-Reply-To: <20251016004613.514-3-richard.weiyang@gmail.com>
On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
> Existing __split_unmapped_folio() code splits the given folio and updates
> stats, but it is complicated to understand.
>
> After simplification, __split_unmapped_folio() directly calculates and
> updates the folio statistics upon a successful split:
>
> * All resulting folios are @split_order.
>
> * The number of new folios is calculated directly from @old_order
> and @split_order.
>
> * The folio for the next split is identified as the one containing
> @split_at.
>
> * An xas_try_split() error is returned directly without worrying
> about stats updates.
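One thing that is at least easy to verify is the accounting arithmetic (assuming
I'm reading the new code correctly): each iteration now produces

	nr_new_folios = 1UL << (old_order - split_order)

folios of @split_order, so e.g. a non-uniform split of an order-9 folio does an
order-8 step first, accounting 1UL << (9 - 8) == 2 new order-8 folios, one of
which is then split again on the next iteration.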
The bigger issue, though, is that you seem to be doing two things at once here:
a big refactoring where you move stuff about AND a functional change.
Can we split these out please? It makes review so much harder.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
>
> ---
> v2:
> * merge patch 2-5
> * retain start_order
> * new_folios -> nr_new_folios
> * add a comment at the end of the loop
> ---
> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
> 1 file changed, 19 insertions(+), 47 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4b2d5a7e5c8e..68e851f5fcb2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct address_space *mapping, bool uniform_split)
> {
> bool is_anon = folio_test_anon(folio);
> - int order = folio_order(folio);
> - int start_order = uniform_split ? new_order : order - 1;
> - bool stop_split = false;
> - struct folio *next;
> + int old_order = folio_order(folio);
> + int start_order = uniform_split ? new_order : old_order - 1;
> int split_order;
> - int ret = 0;
> -
> - if (is_anon)
> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>
> folio_clear_has_hwpoisoned(folio);
>
> @@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> * folio is split to new_order directly.
> */
> for (split_order = start_order;
> - split_order >= new_order && !stop_split;
> + split_order >= new_order;
> split_order--) {
> - struct folio *end_folio = folio_next(folio);
> - int old_order = folio_order(folio);
> - struct folio *new_folio;
> + int nr_new_folios = 1UL << (old_order - split_order);
>
> /* order-1 anonymous folio is not supported */
> if (is_anon && split_order == 1)
> continue;
> - if (uniform_split && split_order != new_order)
> - continue;
>
> if (mapping) {
> /*
> @@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> else {
> xas_set_order(xas, folio->index, split_order);
> xas_try_split(xas, folio, old_order);
> - if (xas_error(xas)) {
> - ret = xas_error(xas);
> - stop_split = true;
> - }
> + if (xas_error(xas))
> + return xas_error(xas);
> }
> }
>
> - if (!stop_split) {
> - folio_split_memcg_refs(folio, old_order, split_order);
> - split_page_owner(&folio->page, old_order, split_order);
> - pgalloc_tag_split(folio, old_order, split_order);
> + folio_split_memcg_refs(folio, old_order, split_order);
> + split_page_owner(&folio->page, old_order, split_order);
> + pgalloc_tag_split(folio, old_order, split_order);
> + __split_folio_to_order(folio, old_order, split_order);
>
> - __split_folio_to_order(folio, old_order, split_order);
> + if (is_anon) {
> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
> + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
> }
>
> /*
> - * Iterate through after-split folios and update folio stats.
> - * But in buddy allocator like split, the folio
> - * containing the specified page is skipped until its order
> - * is new_order, since the folio will be worked on in next
> - * iteration.
> + * For uniform split, we have finished the job.
Finished what job? This is unclear.
> + * For non-uniform split, we assign folio to the one the one
'the one the one' - you're duplicating that, and I have no idea what 'the one'
is supposed to refer to here.
> + * containing @split_at and assign @old_order to @split_order.
Now you're just describing code, and why are you making it kdoc-like in a
non-kdoc comment?
I mean, you're now unconditionally setting folio to page_folio(split_at) and
old_order to split_order, so you really need to be clearer about what you mean
here, given there is no e.g.:
if (is uniform split)
break;
Something simpler would probably work better here.
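Completely untested and purely to illustrate what I mean, but something along
these lines at the end of the loop body would read more clearly to me (the
uniform_split check is strictly redundant, since start_order == new_order in
that case, but it makes the intent obvious):

	/*
	 * A uniform split is done in a single pass, so we're finished.
	 * For a buddy-style split, carry on splitting the folio that
	 * contains @split_at at the next lower order.
	 */
	if (uniform_split)
		break;

	folio = page_folio(split_at);
	old_order = split_order;

That way the comment doesn't need to describe the assignments at all.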
> */
> - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> - next = folio_next(new_folio);
> - /*
> - * for buddy allocator like split, new_folio containing
> - * @split_at page could be split again, thus do not
> - * change stats yet. Wait until new_folio's order is
> - * @new_order or stop_split is set to true by the above
> - * xas_split() failure.
> - */
> - if (new_folio == page_folio(split_at)) {
> - folio = new_folio;
> - if (split_order != new_order && !stop_split)
> - continue;
> - }
> - if (is_anon)
> - mod_mthp_stat(folio_order(new_folio),
> - MTHP_STAT_NR_ANON, 1);
> - }
> + folio = page_folio(split_at);
> + old_order = split_order;
> }
>
> - return ret;
> + return 0;
> }
>
> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> --
> 2.34.1
>
>