From: Zi Yan <ziy@nvidia.com>
To: Wei Yang <richard.weiyang@gmail.com>
Cc: akpm@linux-foundation.org, david@redhat.com,
	lorenzo.stoakes@oracle.com, baolin.wang@linux.alibaba.com,
	Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
	dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev,
	linux-mm@kvack.org,
	"David Hildenbrand (Red Hat)" <david@kernel.org>
Subject: Re: [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
Date: Wed, 05 Nov 2025 11:41:35 -0500
Message-ID: <A4842E31-E15B-4E96-A313-F8C9F6A6B424@nvidia.com>
In-Reply-To: <20251105072521.1505-1-richard.weiyang@gmail.com>

On 5 Nov 2025, at 2:25, Wei Yang wrote:

> The functions uniform_split_supported() and
> non_uniform_split_supported() share largely identical logic.
>
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order.
>
> The reason for this check comes from two aspects:
>
>   * some file systems and the swap cache support only order-0 folios
>   * the behavioral difference between uniform and non-uniform split
>
> The behavioral difference between uniform and non-uniform split:
>
>   * uniform split splits the folio directly to @new_order
>   * non-uniform split creates after-split folios with orders from
>     folio_order(folio) - 1 down to @new_order
>
> This means that for a non-uniform split, or a uniform split to a
> non-zero @new_order, we must check whether the file system and the
> swap cache support the resulting non-zero-order folios.
>
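
To make that concrete: a non-uniform split of an order-9 folio down to
order 0 leaves one folio each of orders 8 through 1 plus two order-0
folios, so non-zero-order folios appear even when @new_order == 0. The
gating this implies would look roughly like the sketch below (my
illustration, not the exact hunk from the patch):

	/*
	 * Non-zero-order after-split folios are produced by any
	 * non-uniform split, or by a uniform split to a non-zero
	 * @new_order; only then do the mapping and swap cache checks
	 * matter.
	 */
	if (!uniform_split || new_order) {
		/* some file systems support only order-0 folios */
		if (!folio_test_anon(folio) &&
		    !mapping_large_folio_support(folio->mapping))
			return false;
		/* the swap cache supports only order-0 folios */
		if (folio_test_swapcache(folio))
			return false;
	}
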
> This commit unifies the logic and merges the two functions into a
> single combined helper, removing redundant code and simplifying the
> split support checking mechanism.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>
> ---
> v2:
>   * remove need_check
>   * update comment
>   * add more explanation in change log
>   * selftests/split_huge_page_test pass
> ---
>  include/linux/huge_mm.h |  8 ++---
>  mm/huge_memory.c        | 70 ++++++++++++++++++-----------------------
>  2 files changed, 33 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cbb2243f8e56..79343809a7be 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>  		unsigned int new_order, bool unmapped);
>  int min_order_for_split(struct folio *folio);
>  int split_folio_to_list(struct folio *folio, struct list_head *list);
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> -		bool warns);
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> -		bool warns);
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> +		bool uniform_split, bool warns);
>  int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>  		struct list_head *list);
>
> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>  static inline int try_folio_split_to_order(struct folio *folio,
>  		struct page *page, unsigned int new_order)
>  {
> -	if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> +	if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>  		return split_huge_page_to_order(&folio->page, new_order);
>  	return folio_split(folio, new_order, page, NULL);
>  }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 381a49c5ac3f..db442e0e3a46 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3666,55 +3666,49 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>  	return 0;
>  }
>
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> -		bool warns)
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> +		bool uniform_split, bool warns)

For this one, David suggested using an enum

enum split_type {
	SPLIT_TYPE_UNIFORM,
	SPLIT_TYPE_NON_UNIFORM,
};

in a separate cleanup patch. It would be better to send that cleanup
along with this one, so that:

1. it is easy for reviewers to keep track of both changes, and
2. it resolves the dependency issue, since the cleanup patch has to go
   after this one.
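
For illustration, the merged helper would then presumably take the enum
instead of a bool flag (a sketch of the intended end state, not a hunk
from either patch):

bool folio_split_supported(struct folio *folio, unsigned int new_order,
		enum split_type split_type, bool warns);

	/* e.g. the call in try_folio_split_to_order() would become: */
	if (!folio_split_supported(folio, new_order, SPLIT_TYPE_NON_UNIFORM,
				   /* warns = */ false))
		return split_huge_page_to_order(&folio->page, new_order);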

Thanks.


Best Regards,
Yan, Zi

