* [Patch v2] mm/huge_memory: consolidate order-related checks into folio_check_splittable()
From: Wei Yang @ 2025-12-23 12:25 UTC
  To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang
  Cc: linux-mm, Wei Yang

The purpose of folio_check_splittable() is to validate whether a folio
is suitable for splitting and to bail out early if it is not.

Currently, some order-related checks are scattered throughout the
calling code rather than being centralized in folio_check_splittable().

Move all remaining order-related validation into
folio_check_splittable(). This consolidation makes the function the
single place where split eligibility is decided and improves the
clarity and maintainability of the surrounding code.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>

---
v2: just move the current logic as-is
---
 mm/huge_memory.c | 63 +++++++++++++++++++++++-------------------------
 1 file changed, 30 insertions(+), 33 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b8ee33318a60..59d72522399f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3705,6 +3705,10 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
 			   enum split_type split_type)
 {
 	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+
+	if (new_order >= folio_order(folio))
+		return -EINVAL;
+
 	/*
 	 * Folios that just got truncated cannot get split. Signal to the
 	 * caller that there was a race.
@@ -3719,28 +3723,33 @@ int folio_check_splittable(struct folio *folio, unsigned int new_order,
 		/* order-1 is not supported for anonymous THP. */
 		if (new_order == 1)
 			return -EINVAL;
-	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
-			/*
-			 * We can always split a folio down to a single page
-			 * (new_order == 0) uniformly.
-			 *
-			 * For any other scenario
-			 *   a) uniform split targeting a large folio
-			 *      (new_order > 0)
-			 *   b) any non-uniform split
-			 * we must confirm that the file system supports large
-			 * folios.
-			 *
-			 * Note that we might still have THPs in such
-			 * mappings, which is created from khugepaged when
-			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
-			 * case, the mapping does not actually support large
-			 * folios properly.
-			 */
-			return -EINVAL;
+	} else {
+		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
+			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+			    !mapping_large_folio_support(folio->mapping)) {
+				/*
+				 * We can always split a folio down to a
+				 * single page (new_order == 0) uniformly.
+				 *
+				 * For any other scenario
+				 *   a) uniform split targeting a large folio
+				 *      (new_order > 0)
+				 *   b) any non-uniform split
+				 * we must confirm that the file system
+				 * supports large folios.
+				 *
+				 * Note that we might still have THPs in such
+				 * mappings, which is created from khugepaged
+				 * when CONFIG_READ_ONLY_THP_FOR_FS is
+				 * enabled. But in that case, the mapping does
+				 * not actually support large folios properly.
+				 */
+				return -EINVAL;
+			}
 		}
+
+		if (new_order < mapping_min_folio_order(folio->mapping))
+			return -EINVAL;
 	}
 
 	/*
@@ -4008,11 +4017,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		goto out;
 	}
 
-	if (new_order >= old_order) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	ret = folio_check_splittable(folio, new_order, split_type);
 	if (ret) {
 		VM_WARN_ONCE(ret == -EINVAL, "Tried to split an unsplittable folio");
@@ -4036,16 +4040,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		anon_vma_lock_write(anon_vma);
 		mapping = NULL;
 	} else {
-		unsigned int min_order;
 		gfp_t gfp;
 
 		mapping = folio->mapping;
-		min_order = mapping_min_folio_order(folio->mapping);
-		if (new_order < min_order) {
-			ret = -EINVAL;
-			goto out;
-		}
-
 		gfp = current_gfp_context(mapping_gfp_mask(mapping) &
 							GFP_RECLAIM_MASK);
 
-- 
2.34.1


