* [Patch v3 0/2] mm/huge_memory: Define split_type and consolidate split support checks
@ 2025-11-06 3:41 Wei Yang
2025-11-06 3:41 ` [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity Wei Yang
2025-11-06 3:41 ` [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
0 siblings, 2 replies; 32+ messages in thread
From: Wei Yang @ 2025-11-06 3:41 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang
This two-patch series focuses on improving code clarity and removing
redundancy in the huge memory handling logic related to folio splitting.
The series is based on an original proposal to merge two largely
identical functions that check folio split support [1]. During this
process, we found an opportunity to improve readability by explicitly
defining the split types.
Patch 1: define split_type and use it
Patch 2: merge uniform_split_supported() and non_uniform_split_supported()
V3:
* rebase on latest mm-new with Zi Yan fix [3]
* introduce split_type to identify uniform/non-uniform split
V2: [2]
* adjust comment
* remove need_check
V1: [1]
[1]: lkml.kernel.org/r/20251101021145.3676-1-richard.weiyang@gmail.com
[2]: lkml.kernel.org/r/20251105072521.1505-1-richard.weiyang@gmail.com
[3]: lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com
Wei Yang (2):
mm/huge_memory: introduce enum split_type for clarity
mm/huge_memory: merge uniform_split_supported() and
non_uniform_split_supported()
include/linux/huge_mm.h | 13 +++---
mm/huge_memory.c | 97 ++++++++++++++++++-----------------------
2 files changed, 51 insertions(+), 59 deletions(-)
--
2.34.1
^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity
  2025-11-06  3:41 [Patch v3 0/2] mm/huge_memory: Define split_type and consolidate split support checks Wei Yang
@ 2025-11-06  3:41 ` Wei Yang
  2025-11-06 10:17   ` David Hildenbrand (Red Hat)
  2025-11-07  0:44   ` Zi Yan
  2025-11-06  3:41 ` [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
  1 sibling, 2 replies; 32+ messages in thread
From: Wei Yang @ 2025-11-06  3:41 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang

We currently handle two distinct types of large folio splitting:
* uniform split
* non-uniform split

Differentiating between these types using a simple boolean variable is
not obvious and can harm code readability.

This commit introduces enum split_type to explicitly define these two
types. Replacing the existing boolean variable with this enumeration
significantly improves code clarity and expressiveness when dealing with
folio splitting logic.

No functional change is expected.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
---
 include/linux/huge_mm.h |  5 +++++
 mm/huge_memory.c        | 30 +++++++++++++++---------------
 2 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f381339842fa..9e96dbe2f246 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -364,6 +364,11 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
 		unsigned long len, unsigned long pgoff, unsigned long flags,
 		vm_flags_t vm_flags);
 
+enum split_type {
+	SPLIT_TYPE_UNIFORM,
+	SPLIT_TYPE_NON_UNIFORM,
+};
+
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
 int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order, bool unmapped);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5795c0b4c39c..659532199233 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3598,16 +3598,16 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
  * will be split until its order becomes @new_order.
  * @xas: xa_state pointing to folio->mapping->i_pages and locked by caller
  * @mapping: @folio->mapping
- * @uniform_split: if the split is uniform or not (buddy allocator like split)
+ * @split_type: if the split is uniform or not (buddy allocator like split)
  *
  *
  * 1. uniform split: the given @folio into multiple @new_order small folios,
  *    where all small folios have the same order. This is done when
- *    uniform_split is true.
+ *    split_type is SPLIT_TYPE_UNIFORM.
  * 2. buddy allocator like (non-uniform) split: the given @folio is split into
  *    half and one of the half (containing the given page) is split into half
  *    until the given @folio's order becomes @new_order. This is done when
- *    uniform_split is false.
+ *    split_type is SPLIT_TYPE_NON_UNIFORM.
  *
  * The high level flow for these two methods are:
  * 1. uniform split: @xas is split with no expectation of failure and a single
@@ -3629,11 +3629,11 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
  */
 static int __split_unmapped_folio(struct folio *folio, int new_order,
 		struct page *split_at, struct xa_state *xas,
-		struct address_space *mapping, bool uniform_split)
+		struct address_space *mapping, enum split_type split_type)
 {
 	const bool is_anon = folio_test_anon(folio);
 	int old_order = folio_order(folio);
-	int start_order = uniform_split ? new_order : old_order - 1;
+	int start_order = split_type == SPLIT_TYPE_UNIFORM ? new_order : old_order - 1;
 	int split_order;
 
 	/*
@@ -3655,7 +3655,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			 * irq is disabled to allocate enough memory, whereas
 			 * non-uniform split can handle ENOMEM.
 			 */
-			if (uniform_split)
+			if (split_type == SPLIT_TYPE_UNIFORM)
 				xas_split(xas, folio, old_order);
 			else {
 				xas_set_order(xas, folio->index, split_order);
@@ -3752,7 +3752,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
 * @split_at: a page within the new folio
 * @lock_at: a page within @folio to be left locked to caller
 * @list: after-split folios will be put on it if non NULL
- * @uniform_split: perform uniform split or not (non-uniform split)
+ * @split_type: perform uniform split or not (non-uniform split)
 * @unmapped: The pages are already unmapped, they are migration entries.
 *
 * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
@@ -3769,7 +3769,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
 */
 static int __folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct page *lock_at,
-		struct list_head *list, bool uniform_split, bool unmapped)
+		struct list_head *list, enum split_type split_type, bool unmapped)
 {
 	struct deferred_split *ds_queue;
 	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
@@ -3794,10 +3794,10 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (new_order >= old_order)
 		return -EINVAL;
 
-	if (uniform_split && !uniform_split_supported(folio, new_order, true))
+	if (split_type == SPLIT_TYPE_UNIFORM && !uniform_split_supported(folio, new_order, true))
 		return -EINVAL;
 
-	if (!uniform_split &&
+	if (split_type == SPLIT_TYPE_NON_UNIFORM &&
 	    !non_uniform_split_supported(folio, new_order, true))
 		return -EINVAL;
 
@@ -3859,7 +3859,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		goto out;
 	}
 
-	if (uniform_split) {
+	if (split_type == SPLIT_TYPE_UNIFORM) {
 		xas_set_order(&xas, folio->index, new_order);
 		xas_split_alloc(&xas, folio, old_order, gfp);
 		if (xas_error(&xas)) {
@@ -3973,7 +3973,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		lruvec = folio_lruvec_lock(folio);
 
 	ret = __split_unmapped_folio(folio, new_order, split_at, &xas,
-				     mapping, uniform_split);
+				     mapping, split_type);
 
 	/*
 	 * Unfreeze after-split folios and put them back to the right
@@ -4149,8 +4149,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
 {
 	struct folio *folio = page_folio(page);
 
-	return __folio_split(folio, new_order, &folio->page, page, list, true,
-			unmapped);
+	return __folio_split(folio, new_order, &folio->page, page, list,
+			SPLIT_TYPE_UNIFORM, unmapped);
 }
 
 /**
@@ -4181,7 +4181,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct list_head *list)
 {
 	return __folio_split(folio, new_order, split_at, &folio->page, list,
-			false, false);
+			SPLIT_TYPE_NON_UNIFORM, false);
 }
 
 int min_order_for_split(struct folio *folio)
-- 
2.34.1


^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity
  2025-11-06  3:41 ` [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity Wei Yang
@ 2025-11-06 10:17   ` David Hildenbrand (Red Hat)
  2025-11-06 14:57     ` Wei Yang
  2025-11-07  0:44   ` Zi Yan
  1 sibling, 1 reply; 32+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-11-06 10:17 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm

On 06.11.25 04:41, Wei Yang wrote:
> We currently handle two distinct types of large folio splitting:
> * uniform split
> * non-uniform split
> 
> Differentiating between these types using a simple boolean variable is
> not obvious and can harm code readability.
> 
> This commit introduces enum split_type to explicitly define these two
> types. Replacing the existing boolean variable with this enumeration
> significantly improves code clarity and expressiveness when dealing with
> folio splitting logic.
> 
> No functional change is expected.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> ---

...

>   * Unfreeze after-split folios and put them back to the right
> @@ -4149,8 +4149,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>   {
>   	struct folio *folio = page_folio(page);
>   
> -	return __folio_split(folio, new_order, &folio->page, page, list, true,
> -			unmapped);
> +	return __folio_split(folio, new_order, &folio->page, page, list,
> +			SPLIT_TYPE_UNIFORM, unmapped);
>   }
>   
>   /**
> @@ -4181,7 +4181,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
>   		struct page *split_at, struct list_head *list)
>   {
>   	return __folio_split(folio, new_order, split_at, &folio->page, list,
> -			false, false);
> +			SPLIT_TYPE_NON_UNIFORM, false);

Looks like both these are not properly aligned.

Should be

return __folio_split(folio, new_order, split_at, &folio->page, list,
		     SPLIT_TYPE_NON_UNIFORM, false);

Thanks!

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David


^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity
  2025-11-06 10:17   ` David Hildenbrand (Red Hat)
@ 2025-11-06 14:57     ` Wei Yang
  0 siblings, 0 replies; 32+ messages in thread
From: Wei Yang @ 2025-11-06 14:57 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On Thu, Nov 06, 2025 at 11:17:00AM +0100, David Hildenbrand (Red Hat) wrote:
>On 06.11.25 04:41, Wei Yang wrote:
>> We currently handle two distinct types of large folio splitting:
>> * uniform split
>> * non-uniform split
>>
>> Differentiating between these types using a simple boolean variable is
>> not obvious and can harm code readability.
>>
>> This commit introduces enum split_type to explicitly define these two
>> types. Replacing the existing boolean variable with this enumeration
>> significantly improves code clarity and expressiveness when dealing with
>> folio splitting logic.
>>
>> No functional change is expected.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>> ---
>
>...
>
>>   * Unfreeze after-split folios and put them back to the right
>> @@ -4149,8 +4149,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>   {
>>   	struct folio *folio = page_folio(page);
>>   
>> -	return __folio_split(folio, new_order, &folio->page, page, list, true,
>> -			unmapped);
>> +	return __folio_split(folio, new_order, &folio->page, page, list,
>> +			SPLIT_TYPE_UNIFORM, unmapped);
>>   }
>>   
>>   /**
>> @@ -4181,7 +4181,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
>>   		struct page *split_at, struct list_head *list)
>>   {
>>   	return __folio_split(folio, new_order, split_at, &folio->page, list,
>> -			false, false);
>> +			SPLIT_TYPE_NON_UNIFORM, false);
>
>Looks like both these are not properly aligned.
>
>Should be
>
>return __folio_split(folio, new_order, split_at, &folio->page, list,
>		     SPLIT_TYPE_NON_UNIFORM, false);
>

Thanks.

@Andrew

Would you mind helping adjust this :-)

>
>Thanks!
>
>Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>
>-- 
>Cheers
>
>David

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity
  2025-11-06  3:41 ` [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity Wei Yang
  2025-11-06 10:17   ` David Hildenbrand (Red Hat)
@ 2025-11-07  0:44   ` Zi Yan
  1 sibling, 0 replies; 32+ messages in thread
From: Zi Yan @ 2025-11-07  0:44 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On 5 Nov 2025, at 22:41, Wei Yang wrote:

> We currently handle two distinct types of large folio splitting:
> * uniform split
> * non-uniform split
>
> Differentiating between these types using a simple boolean variable is
> not obvious and can harm code readability.
>
> This commit introduces enum split_type to explicitly define these two
> types. Replacing the existing boolean variable with this enumeration
> significantly improves code clarity and expressiveness when dealing with
> folio splitting logic.
>
> No functional change is expected.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> ---
> include/linux/huge_mm.h |  5 +++++
> mm/huge_memory.c        | 30 +++++++++++++++---------------
> 2 files changed, 20 insertions(+), 15 deletions(-)
>
LGTM. Thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi

^ permalink raw reply	[flat|nested] 32+ messages in thread
* [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
  2025-11-06  3:41 [Patch v3 0/2] mm/huge_memory: Define split_type and consolidate split support checks Wei Yang
  2025-11-06  3:41 ` [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity Wei Yang
@ 2025-11-06  3:41 ` Wei Yang
  2025-11-06 10:20   ` David Hildenbrand (Red Hat)
                      ` (2 more replies)
  1 sibling, 3 replies; 32+ messages in thread
From: Wei Yang @ 2025-11-06  3:41 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang

The functions uniform_split_supported() and
non_uniform_split_supported() share significantly similar logic.

The only functional difference is that uniform_split_supported()
includes an additional check on the requested @new_order.

The reason for this check comes from the following two aspects:

* some file systems or the swap cache support only order-0 folios
* the behavioral difference between uniform/non-uniform split

The behavioral difference between uniform split and non-uniform split:

* uniform split splits folio directly to @new_order
* non-uniform split creates after-split folios with orders from
  folio_order(folio) - 1 to new_order.

This means for non-uniform split or !new_order split we should check the
file system and swap cache respectively.

This commit unifies the logic and merges the two functions into a single
combined helper, removing redundant code and simplifying the split
support checking mechanism.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>

---
v3:
* adjust to use split_type
* rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com
v2:
* remove need_check
* update comment
* add more explanation in change log
---
 include/linux/huge_mm.h |  8 ++---
 mm/huge_memory.c        | 71 +++++++++++++++++------------------------
 2 files changed, 33 insertions(+), 46 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9e96dbe2f246..6f9e711b0954 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -374,10 +374,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
 		unsigned int new_order, bool unmapped);
 int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
-bool uniform_split_supported(struct folio *folio, unsigned int new_order,
-		bool warns);
-bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
-		bool warns);
+bool folio_split_supported(struct folio *folio, unsigned int new_order,
+		enum split_type split_type, bool warns);
 int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
 		struct list_head *list);
 
@@ -408,7 +406,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
 static inline int try_folio_split_to_order(struct folio *folio,
 		struct page *page, unsigned int new_order)
 {
-	if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
+	if (!folio_split_supported(folio, new_order, SPLIT_TYPE_NON_UNIFORM, /* warns= */ false))
 		return split_huge_page_to_order(&folio->page, new_order);
 	return folio_split(folio, new_order, page, NULL);
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 659532199233..c676f2ab0611 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3686,8 +3686,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return 0;
 }
 
-bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
-		bool warns)
+bool folio_split_supported(struct folio *folio, unsigned int new_order,
+		enum split_type split_type, bool warns)
 {
 	if (folio_test_anon(folio)) {
 		/* order-1 is not supported for anonymous THP. */
@@ -3695,48 +3695,41 @@ bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
 			"Cannot split to order-1 folio");
 		if (new_order == 1)
 			return false;
-	} else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		   !mapping_large_folio_support(folio->mapping)) {
-		/*
-		 * No split if the file system does not support large folio.
-		 * Note that we might still have THPs in such mappings due to
-		 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
-		 * does not actually support large folios properly.
-		 */
-		VM_WARN_ONCE(warns,
-			"Cannot split file folio to non-0 order");
-		return false;
-	}
-
-	/* Only swapping a whole PMD-mapped folio is supported */
-	if (folio_test_swapcache(folio)) {
-		VM_WARN_ONCE(warns,
-			"Cannot split swapcache folio to non-0 order");
-		return false;
-	}
-
-	return true;
-}
-
-/* See comments in non_uniform_split_supported() */
-bool uniform_split_supported(struct folio *folio, unsigned int new_order,
-		bool warns)
-{
-	if (folio_test_anon(folio)) {
-		VM_WARN_ONCE(warns && new_order == 1,
-			"Cannot split to order-1 folio");
-		if (new_order == 1)
-			return false;
-	} else if (new_order) {
+	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
 		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
 		    !mapping_large_folio_support(folio->mapping)) {
+			/*
+			 * We can always split a folio down to a single page
+			 * (new_order == 0) uniformly.
+			 *
+			 * For any other scenario
+			 *   a) uniform split targeting a large folio
+			 *      (new_order > 0)
+			 *   b) any non-uniform split
+			 * we must confirm that the file system supports large
+			 * folios.
+			 *
+			 * Note that we might still have THPs in such
+			 * mappings, which is created from khugepaged when
+			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
+			 * case, the mapping does not actually support large
+			 * folios properly.
+			 */
 			VM_WARN_ONCE(warns,
 				"Cannot split file folio to non-0 order");
 			return false;
 		}
 	}
 
-	if (new_order && folio_test_swapcache(folio)) {
+	/*
+	 * swapcache folio could only be split to order 0
+	 *
+	 * non-uniform split creates after-split folios with orders from
+	 * folio_order(folio) - 1 to new_order, making it not suitable for any
+	 * swapcache folio split. Only uniform split to order-0 can be used
+	 * here.
+	 */
+	if ((split_type == SPLIT_TYPE_NON_UNIFORM || new_order) && folio_test_swapcache(folio)) {
 		VM_WARN_ONCE(warns,
 			"Cannot split swapcache folio to non-0 order");
 		return false;
@@ -3794,11 +3787,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (new_order >= old_order)
 		return -EINVAL;
 
-	if (split_type == SPLIT_TYPE_UNIFORM && !uniform_split_supported(folio, new_order, true))
-		return -EINVAL;
-
-	if (split_type == SPLIT_TYPE_NON_UNIFORM &&
-	    !non_uniform_split_supported(folio, new_order, true))
+	if (!folio_split_supported(folio, new_order, split_type, /* warn = */ true))
 		return -EINVAL;
 
 	is_hzp = is_huge_zero_folio(folio);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
  2025-11-06  3:41 ` [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
@ 2025-11-06 10:20   ` David Hildenbrand (Red Hat)
  2025-11-07  0:46   ` Zi Yan
  2025-11-17  1:22   ` Wei Yang
  2 siblings, 0 replies; 32+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-11-06 10:20 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm

On 06.11.25 04:41, Wei Yang wrote:
> The functions uniform_split_supported() and
> non_uniform_split_supported() share significantly similar logic.
> 
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order.
> 
> The reason for this check comes from the following two aspects:
> 
> * some file systems or the swap cache support only order-0 folios
> * the behavioral difference between uniform/non-uniform split
> 
> The behavioral difference between uniform split and non-uniform split:
> 
> * uniform split splits folio directly to @new_order
> * non-uniform split creates after-split folios with orders from
>   folio_order(folio) - 1 to new_order.
> 
> This means for non-uniform split or !new_order split we should check the
> file system and swap cache respectively.
> 
> This commit unifies the logic and merges the two functions into a single
> combined helper, removing redundant code and simplifying the split
> support checking mechanism.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
> 

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David


^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
  2025-11-06  3:41 ` [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
  2025-11-06 10:20   ` David Hildenbrand (Red Hat)
@ 2025-11-07  0:46   ` Zi Yan
  2025-11-07  1:17     ` Wei Yang
  2025-11-17  1:22   ` Wei Yang
  2 siblings, 1 reply; 32+ messages in thread
From: Zi Yan @ 2025-11-07  0:46 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On 5 Nov 2025, at 22:41, Wei Yang wrote:

> The functions uniform_split_supported() and
> non_uniform_split_supported() share significantly similar logic.
>
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order.
>
> The reason for this check comes from the following two aspects:
>
> * some file systems or the swap cache support only order-0 folios
> * the behavioral difference between uniform/non-uniform split
>
> The behavioral difference between uniform split and non-uniform split:
>
> * uniform split splits folio directly to @new_order
> * non-uniform split creates after-split folios with orders from
>   folio_order(folio) - 1 to new_order.
>
> This means for non-uniform split or !new_order split we should check the
> file system and swap cache respectively.
>
> This commit unifies the logic and merges the two functions into a single
> combined helper, removing redundant code and simplifying the split
> support checking mechanism.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>
> ---
> v3:
> * adjust to use split_type
> * rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com
> v2:
> * remove need_check
> * update comment
> * add more explanation in change log
> ---
> include/linux/huge_mm.h |  8 ++---
> mm/huge_memory.c        | 71 +++++++++++++++++------------------------
> 2 files changed, 33 insertions(+), 46 deletions(-)
>
LGTM. Thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi

^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
  2025-11-07  0:46   ` Zi Yan
@ 2025-11-07  1:17     ` Wei Yang
  2025-11-07  2:07       ` Zi Yan
  0 siblings, 1 reply; 32+ messages in thread
From: Wei Yang @ 2025-11-07  1:17 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
	linux-mm

On Thu, Nov 06, 2025 at 07:46:14PM -0500, Zi Yan wrote:
>On 5 Nov 2025, at 22:41, Wei Yang wrote:
>
>> The functions uniform_split_supported() and
>> non_uniform_split_supported() share significantly similar logic.
>>
>> The only functional difference is that uniform_split_supported()
>> includes an additional check on the requested @new_order.
>>
>> The reason for this check comes from the following two aspects:
>>
>> * some file systems or the swap cache support only order-0 folios
>> * the behavioral difference between uniform/non-uniform split
>>
>> The behavioral difference between uniform split and non-uniform split:
>>
>> * uniform split splits folio directly to @new_order
>> * non-uniform split creates after-split folios with orders from
>>   folio_order(folio) - 1 to new_order.
>>
>> This means for non-uniform split or !new_order split we should check the
>> file system and swap cache respectively.
>>
>> This commit unifies the logic and merges the two functions into a single
>> combined helper, removing redundant code and simplifying the split
>> support checking mechanism.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>>
>> ---
>> v3:
>> * adjust to use split_type
>> * rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com
>> v2:
>> * remove need_check
>> * update comment
>> * add more explanation in change log
>> ---
>> include/linux/huge_mm.h |  8 ++---
>> mm/huge_memory.c        | 71 +++++++++++++++++------------------
>> 2 files changed, 33 insertions(+), 46 deletions(-)
>>
>LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com>

Hi, Zi

I am thinking whether it is proper to move the check (new_order < min_order)
from __folio_split() to folio_split_supported(). So that we could bail out
early if the file system couldn't split to new_order.

Not sure you like it or not.

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
  2025-11-07  1:17     ` Wei Yang
@ 2025-11-07  2:07       ` Zi Yan
  2025-11-07  2:49         ` Wei Yang
  0 siblings, 1 reply; 32+ messages in thread
From: Zi Yan @ 2025-11-07  2:07 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
	ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On 6 Nov 2025, at 20:17, Wei Yang wrote:

> On Thu, Nov 06, 2025 at 07:46:14PM -0500, Zi Yan wrote:
>> On 5 Nov 2025, at 22:41, Wei Yang wrote:
>>
>>> The functions uniform_split_supported() and
>>> non_uniform_split_supported() share significantly similar logic.
>>>
>>> The only functional difference is that uniform_split_supported()
>>> includes an additional check on the requested @new_order.
>>>
>>> The reason for this check comes from the following two aspects:
>>>
>>> * some file systems or the swap cache support only order-0 folios
>>> * the behavioral difference between uniform/non-uniform split
>>>
>>> The behavioral difference between uniform split and non-uniform split:
>>>
>>> * uniform split splits folio directly to @new_order
>>> * non-uniform split creates after-split folios with orders from
>>>   folio_order(folio) - 1 to new_order.
>>>
>>> This means for non-uniform split or !new_order split we should check the
>>> file system and swap cache respectively.
>>>
>>> This commit unifies the logic and merges the two functions into a single
>>> combined helper, removing redundant code and simplifying the split
>>> support checking mechanism.
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>>>
>>> ---
>>> v3:
>>> * adjust to use split_type
>>> * rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com
>>> v2:
>>> * remove need_check
>>> * update comment
>>> * add more explanation in change log
>>> ---
>>> include/linux/huge_mm.h |  8 ++---
>>> mm/huge_memory.c        | 71 +++++++++++++++++------------------
>>> 2 files changed, 33 insertions(+), 46 deletions(-)
>>>
>> LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com>
>
> Hi, Zi
>
> I am thinking whether it is proper to move the check (new_order < min_order)
> from __folio_split() to folio_split_supported(). So that we could bail out
> early if the file system couldn't split to new_order.
>
> Not sure you like it or not.

It sounds reasonable. My only concern is that that might add another
indentation to the else branch in folio_split_supported().

You can send a patch, so we can see how it looks.

Best Regards,
Yan, Zi

^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-07 2:07 ` Zi Yan @ 2025-11-07 2:49 ` Wei Yang 2025-11-07 3:21 ` Zi Yan 0 siblings, 1 reply; 32+ messages in thread From: Wei Yang @ 2025-11-07 2:49 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Thu, Nov 06, 2025 at 09:07:22PM -0500, Zi Yan wrote: >On 6 Nov 2025, at 20:17, Wei Yang wrote: > >> On Thu, Nov 06, 2025 at 07:46:14PM -0500, Zi Yan wrote: >>> On 5 Nov 2025, at 22:41, Wei Yang wrote: >>> >>>> The functions uniform_split_supported() and >>>> non_uniform_split_supported() share significantly similar logic. >>>> >>>> The only functional difference is that uniform_split_supported() >>>> includes an additional check on the requested @new_order. >>>> >>>> The reason for this check comes from the following two aspects: >>>> >>>> * some file system or swap cache just supports order-0 folio >>>> * the behavioral difference between uniform/non-uniform split >>>> >>>> The behavioral difference between uniform split and non-uniform: >>>> >>>> * uniform split splits folio directly to @new_order >>>> * non-uniform split creates after-split folios with orders from >>>> folio_order(folio) - 1 to new_order. >>>> >>>> This means for non-uniform split or !new_order split we should check the >>>> file system and swap cache respectively. >>>> >>>> This commit unifies the logic and merge the two functions into a single >>>> combined helper, removing redundant code and simplifying the split >>>> support checking mechanism. 
>>>>
>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>>>>
>>>> ---
>>>> v3:
>>>>   * adjust to use split_type
>>>>   * rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com
>>>> v2:
>>>>   * remove need_check
>>>>   * update comment
>>>>   * add more explanation in change log
>>>> ---
>>>>  include/linux/huge_mm.h |  8 ++---
>>>>  mm/huge_memory.c        | 71 +++++++++++++++++------------------
>>>>  2 files changed, 33 insertions(+), 46 deletions(-)
>>>>
>>> LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com>
>>
>> Hi, Zi
>>
>> I am thinking whether it is proper to move the check (new_order < min_order)
>> from __folio_split() to folio_split_supported(). So that we could bail out
>> early if file system couldn't split to new_order.
>>
>> Not sure you like it or not.
>
>It sounds reasonable. My only concern is that that might add another
>indentation to the else branch in folio_split_supported().
>
>You can send a patch, so we can see how it looks.
>

Here is what came to my mind.

If !CONFIG_READ_ONLY_THP_FOR_FS, we directly compare new_order and min_order.

If CONFIG_READ_ONLY_THP_FOR_FS, one thing I am not sure about is the THP
collapsed by khugepaged. If its min_order is 0, it looks like we can cover it
with the following check.

Looking forward to your insight.

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index dee416b3f6ed..ef05f246df73 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3704,8 +3704,8 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
 		if (new_order == 1)
 			return false;
 	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
+		unsigned int min_order = mapping_min_folio_order(folio->mapping);
+		if (new_order < min_order) {
 			/*
 			 * We can always split a folio down to a single page
 			 * (new_order == 0) uniformly.
@@ -3827,7 +3827,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		}
 		mapping = NULL;
 	} else {
-		unsigned int min_order;
 		gfp_t gfp;

 		mapping = folio->mapping;
@@ -3843,12 +3842,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 			goto out;
 		}

-		min_order = mapping_min_folio_order(folio->mapping);
-		if (new_order < min_order) {
-			ret = -EINVAL;
-			goto out;
-		}
-
 		gfp = current_gfp_context(mapping_gfp_mask(mapping) &
 					  GFP_RECLAIM_MASK);

--
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-07 2:49 ` Wei Yang @ 2025-11-07 3:21 ` Zi Yan 2025-11-07 7:29 ` Wei Yang 0 siblings, 1 reply; 32+ messages in thread From: Zi Yan @ 2025-11-07 3:21 UTC (permalink / raw) To: Wei Yang Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On 6 Nov 2025, at 21:49, Wei Yang wrote: > On Thu, Nov 06, 2025 at 09:07:22PM -0500, Zi Yan wrote: >> On 6 Nov 2025, at 20:17, Wei Yang wrote: >> >>> On Thu, Nov 06, 2025 at 07:46:14PM -0500, Zi Yan wrote: >>>> On 5 Nov 2025, at 22:41, Wei Yang wrote: >>>> >>>>> The functions uniform_split_supported() and >>>>> non_uniform_split_supported() share significantly similar logic. >>>>> >>>>> The only functional difference is that uniform_split_supported() >>>>> includes an additional check on the requested @new_order. >>>>> >>>>> The reason for this check comes from the following two aspects: >>>>> >>>>> * some file system or swap cache just supports order-0 folio >>>>> * the behavioral difference between uniform/non-uniform split >>>>> >>>>> The behavioral difference between uniform split and non-uniform: >>>>> >>>>> * uniform split splits folio directly to @new_order >>>>> * non-uniform split creates after-split folios with orders from >>>>> folio_order(folio) - 1 to new_order. >>>>> >>>>> This means for non-uniform split or !new_order split we should check the >>>>> file system and swap cache respectively. >>>>> >>>>> This commit unifies the logic and merge the two functions into a single >>>>> combined helper, removing redundant code and simplifying the split >>>>> support checking mechanism. 
>>>>> >>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >>>>> Cc: Zi Yan <ziy@nvidia.com> >>>>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> >>>>> >>>>> --- >>>>> v3: >>>>> * adjust to use split_type >>>>> * rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com >>>>> v2: >>>>> * remove need_check >>>>> * update comment >>>>> * add more explanation in change log >>>>> --- >>>>> include/linux/huge_mm.h | 8 ++--- >>>>> mm/huge_memory.c | 71 +++++++++++++++++------------------------ >>>>> 2 files changed, 33 insertions(+), 46 deletions(-) >>>>> >>>> LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com> >>> >>> Hi, Zi >>> >>> I am thinking whether it is proper to move the check (new_order < min_order) >>> from __folio_split() to folio_split_supported(). So that we could bail out >>> early if file system couldn't split to new_order. >>> >>> Not sure you like it or not. >> >> It sounds reasonable. My only concern is that that might add another >> indentation to the else branch in folio_split_supported(). >> >> You can send a patch, so we can see how it looks. >> > > Here is what come up my mind. > > If !CONFIG_READ_ONLY_THP_FOR_FS, we directly compare new_order and min_order. > > If CONFIG_READ_ONLY_THP_FOR_FS, one thing I am not sure is for the khugepaged > collapsed THP. If its min_order is 0, it looks we can cover it with following > check. 1. mapping_large_folio_support() checks if mapping_max_folio_order() > 0, meaning !mapping_large_folio_support() is mapping_max_folio_order() == 0, 2. mapping_max_folio_order() >= mapping_min_folio_order(), 3. combining 1) and 2) means mapping_min_folio_order() <= mapping_max_folio_order() == 0, meaning mapping_min_folio_order() == 0. so a FS without large folio support always has min_order == 0. > > Look forward your insight. 
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c > index dee416b3f6ed..ef05f246df73 100644 > --- a/mm/huge_memory.c > +++ b/mm/huge_memory.c > @@ -3704,8 +3704,8 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order, > if (new_order == 1) > return false; > } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { > - if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && > - !mapping_large_folio_support(folio->mapping)) { > + unsigned int min_order = mapping_min_folio_order(folio->mapping); > + if (new_order < min_order) { This check is good for !CONFIG_READ_ONLY_THP_FOR_FS, but for CONFIG_READ_ONLY_THP_FOR_FS and !mapping_large_folio_support(), min_order is always 0, how can new_order be smaller than min_order to trigger the warning below? You will need to check new_order against mapping_max_folio_order(). OK, basically the check should be: if (new_order < mapping_min_folio_order() || new_order > mapping_max_folio_order()). Then, you might want to add a helper function mapping_folio_order_supported() instead and change the warning message below to "Cannot split file folio to unsupported order [%d, %d]", min_order, max_order (showing min/max order is optional since it kinda defeat the purpose of having the helper function). Of course, the comment needs to be changed. Hmm, but still how could the above check to trigger the warning when split_type == SPLIT_TYPE_NON_UNIFORM and new_order is 0? It will not trigger, since new_order (as 0) is supported by the mapping. I guess the min_order check code has to be in the else branch along with the existing "if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order)". > /* > * We can always split a folio down to a single page > * (new_order == 0) uniformly. 
> @@ -3827,7 +3827,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order, > } > mapping = NULL; > } else { > - unsigned int min_order; > gfp_t gfp; > > mapping = folio->mapping; > @@ -3843,12 +3842,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order, > goto out; > } > > - min_order = mapping_min_folio_order(folio->mapping); > - if (new_order < min_order) { > - ret = -EINVAL; > - goto out; > - } > - > gfp = current_gfp_context(mapping_gfp_mask(mapping) & > GFP_RECLAIM_MASK); > > -- > Wei Yang > Help you, Help me -- Best Regards, Yan, Zi ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-07 3:21 ` Zi Yan @ 2025-11-07 7:29 ` Wei Yang 2025-11-14 3:03 ` Wei Yang 0 siblings, 1 reply; 32+ messages in thread From: Wei Yang @ 2025-11-07 7:29 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Thu, Nov 06, 2025 at 10:21:21PM -0500, Zi Yan wrote: >On 6 Nov 2025, at 21:49, Wei Yang wrote: > >> On Thu, Nov 06, 2025 at 09:07:22PM -0500, Zi Yan wrote: >>> On 6 Nov 2025, at 20:17, Wei Yang wrote: >>> >>>> On Thu, Nov 06, 2025 at 07:46:14PM -0500, Zi Yan wrote: >>>>> On 5 Nov 2025, at 22:41, Wei Yang wrote: >>>>> >>>>>> The functions uniform_split_supported() and >>>>>> non_uniform_split_supported() share significantly similar logic. >>>>>> >>>>>> The only functional difference is that uniform_split_supported() >>>>>> includes an additional check on the requested @new_order. >>>>>> >>>>>> The reason for this check comes from the following two aspects: >>>>>> >>>>>> * some file system or swap cache just supports order-0 folio >>>>>> * the behavioral difference between uniform/non-uniform split >>>>>> >>>>>> The behavioral difference between uniform split and non-uniform: >>>>>> >>>>>> * uniform split splits folio directly to @new_order >>>>>> * non-uniform split creates after-split folios with orders from >>>>>> folio_order(folio) - 1 to new_order. >>>>>> >>>>>> This means for non-uniform split or !new_order split we should check the >>>>>> file system and swap cache respectively. >>>>>> >>>>>> This commit unifies the logic and merge the two functions into a single >>>>>> combined helper, removing redundant code and simplifying the split >>>>>> support checking mechanism. 
>>>>>> >>>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >>>>>> Cc: Zi Yan <ziy@nvidia.com> >>>>>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> >>>>>> >>>>>> --- >>>>>> v3: >>>>>> * adjust to use split_type >>>>>> * rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com >>>>>> v2: >>>>>> * remove need_check >>>>>> * update comment >>>>>> * add more explanation in change log >>>>>> --- >>>>>> include/linux/huge_mm.h | 8 ++--- >>>>>> mm/huge_memory.c | 71 +++++++++++++++++------------------------ >>>>>> 2 files changed, 33 insertions(+), 46 deletions(-) >>>>>> >>>>> LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com> >>>> >>>> Hi, Zi >>>> >>>> I am thinking whether it is proper to move the check (new_order < min_order) >>>> from __folio_split() to folio_split_supported(). So that we could bail out >>>> early if file system couldn't split to new_order. >>>> >>>> Not sure you like it or not. >>> >>> It sounds reasonable. My only concern is that that might add another >>> indentation to the else branch in folio_split_supported(). >>> >>> You can send a patch, so we can see how it looks. >>> >> >> Here is what come up my mind. >> >> If !CONFIG_READ_ONLY_THP_FOR_FS, we directly compare new_order and min_order. >> >> If CONFIG_READ_ONLY_THP_FOR_FS, one thing I am not sure is for the khugepaged >> collapsed THP. If its min_order is 0, it looks we can cover it with following >> check. > >1. mapping_large_folio_support() checks if mapping_max_folio_order() > 0, meaning > !mapping_large_folio_support() is mapping_max_folio_order() == 0, >2. mapping_max_folio_order() >= mapping_min_folio_order(), >3. combining 1) and 2) means > mapping_min_folio_order() <= mapping_max_folio_order() == 0, > meaning mapping_min_folio_order() == 0. > >so a FS without large folio support always has min_order == 0. > >> >> Look forward your insight. 
>> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c >> index dee416b3f6ed..ef05f246df73 100644 >> --- a/mm/huge_memory.c >> +++ b/mm/huge_memory.c >> @@ -3704,8 +3704,8 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order, >> if (new_order == 1) >> return false; >> } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { >> - if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && >> - !mapping_large_folio_support(folio->mapping)) { >> + unsigned int min_order = mapping_min_folio_order(folio->mapping); >> + if (new_order < min_order) { > >This check is good for !CONFIG_READ_ONLY_THP_FOR_FS, but >for CONFIG_READ_ONLY_THP_FOR_FS and !mapping_large_folio_support(), >min_order is always 0, how can new_order be smaller than min_order >to trigger the warning below? You will need to check new_order against >mapping_max_folio_order(). > >OK, basically the check should be: > Thanks for your analysis. >if (new_order < mapping_min_folio_order() || new_order > mapping_max_folio_order()). > This reminds me one thing, we don't check on max_order now. For example, the supported split order is [3, 5]. But new_order is set to 6. In current kernel, we don't do this. try_folio_split_or_unmap() pass min_order. But selftest will split from pmd_order - 1. >Then, you might want to add a helper function mapping_folio_order_supported() >instead and change the warning message below to "Cannot split file folio to >unsupported order [%d, %d]", min_order, max_order (showing min/max order >is optional since it kinda defeat the purpose of having the helper function). >Of course, the comment needs to be changed. > >Hmm, but still how could the above check to trigger the warning when >split_type == SPLIT_TYPE_NON_UNIFORM and new_order is 0? It will not >trigger, since new_order (as 0) is supported by the mapping. > >I guess the min_order check code has to be in the else branch along >with the existing "if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order)". 
>

I am trying to think another way.

For uniform split, the after-split folio order is new_order.
For non-uniform split, the after-split folio orders are [new_order, old_order - 1].

So I came up with the following draft change.

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index dee416b3f6ed..873680ab4cbb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3703,28 +3703,18 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
 			"Cannot split to order-1 folio");
 		if (new_order == 1)
 			return false;
-	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
-			/*
-			 * We can always split a folio down to a single page
-			 * (new_order == 0) uniformly.
-			 *
-			 * For any other scenario
-			 *   a) uniform split targeting a large folio
-			 *      (new_order > 0)
-			 *   b) any non-uniform split
-			 * we must confirm that the file system supports large
-			 * folios.
-			 *
-			 * Note that we might still have THPs in such
-			 * mappings, which is created from khugepaged when
-			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
-			 * case, the mapping does not actually support large
-			 * folios properly.
-			 */
+	} else {
+		/*
+		 * Some explanation here.
+		 */
+		if (new_order && !mapping_folio_order_supported(new_order)) {
+			VM_WARN_ONCE(warns,
+				"Cannot split file folio to unsupported order: %d", new_order);
+			return false;
+		}
+		if (split_type == SPLIT_TYPE_NON_UNIFORM && !mapping_folio_order_supported(old_order - 1)) {
 			VM_WARN_ONCE(warns,
-				"Cannot split file folio to non-0 order");
+				"Cannot split file folio to unsupported order: %d", old_order - 1);
 			return false;
 		}
 	}

--
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-07 7:29 ` Wei Yang @ 2025-11-14 3:03 ` Wei Yang 0 siblings, 0 replies; 32+ messages in thread From: Wei Yang @ 2025-11-14 3:03 UTC (permalink / raw) To: Wei Yang Cc: Zi Yan, akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Fri, Nov 07, 2025 at 07:29:44AM +0000, Wei Yang wrote: >On Thu, Nov 06, 2025 at 10:21:21PM -0500, Zi Yan wrote: >>On 6 Nov 2025, at 21:49, Wei Yang wrote: >> >>> On Thu, Nov 06, 2025 at 09:07:22PM -0500, Zi Yan wrote: >>>> On 6 Nov 2025, at 20:17, Wei Yang wrote: >>>> >>>>> On Thu, Nov 06, 2025 at 07:46:14PM -0500, Zi Yan wrote: >>>>>> On 5 Nov 2025, at 22:41, Wei Yang wrote: >>>>>> >>>>>>> The functions uniform_split_supported() and >>>>>>> non_uniform_split_supported() share significantly similar logic. >>>>>>> >>>>>>> The only functional difference is that uniform_split_supported() >>>>>>> includes an additional check on the requested @new_order. >>>>>>> >>>>>>> The reason for this check comes from the following two aspects: >>>>>>> >>>>>>> * some file system or swap cache just supports order-0 folio >>>>>>> * the behavioral difference between uniform/non-uniform split >>>>>>> >>>>>>> The behavioral difference between uniform split and non-uniform: >>>>>>> >>>>>>> * uniform split splits folio directly to @new_order >>>>>>> * non-uniform split creates after-split folios with orders from >>>>>>> folio_order(folio) - 1 to new_order. >>>>>>> >>>>>>> This means for non-uniform split or !new_order split we should check the >>>>>>> file system and swap cache respectively. >>>>>>> >>>>>>> This commit unifies the logic and merge the two functions into a single >>>>>>> combined helper, removing redundant code and simplifying the split >>>>>>> support checking mechanism. 
>>>>>>> >>>>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >>>>>>> Cc: Zi Yan <ziy@nvidia.com> >>>>>>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> >>>>>>> >>>>>>> --- >>>>>>> v3: >>>>>>> * adjust to use split_type >>>>>>> * rebase on Zi Yan fix lkml.kernel.org/r/20251105162910.752266-1-ziy@nvidia.com >>>>>>> v2: >>>>>>> * remove need_check >>>>>>> * update comment >>>>>>> * add more explanation in change log >>>>>>> --- >>>>>>> include/linux/huge_mm.h | 8 ++--- >>>>>>> mm/huge_memory.c | 71 +++++++++++++++++------------------------ >>>>>>> 2 files changed, 33 insertions(+), 46 deletions(-) >>>>>>> >>>>>> LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com> >>>>> >>>>> Hi, Zi >>>>> >>>>> I am thinking whether it is proper to move the check (new_order < min_order) >>>>> from __folio_split() to folio_split_supported(). So that we could bail out >>>>> early if file system couldn't split to new_order. >>>>> >>>>> Not sure you like it or not. >>>> >>>> It sounds reasonable. My only concern is that that might add another >>>> indentation to the else branch in folio_split_supported(). >>>> >>>> You can send a patch, so we can see how it looks. >>>> >>> >>> Here is what come up my mind. >>> >>> If !CONFIG_READ_ONLY_THP_FOR_FS, we directly compare new_order and min_order. >>> >>> If CONFIG_READ_ONLY_THP_FOR_FS, one thing I am not sure is for the khugepaged >>> collapsed THP. If its min_order is 0, it looks we can cover it with following >>> check. >> >>1. mapping_large_folio_support() checks if mapping_max_folio_order() > 0, meaning >> !mapping_large_folio_support() is mapping_max_folio_order() == 0, >>2. mapping_max_folio_order() >= mapping_min_folio_order(), >>3. combining 1) and 2) means >> mapping_min_folio_order() <= mapping_max_folio_order() == 0, >> meaning mapping_min_folio_order() == 0. >> >>so a FS without large folio support always has min_order == 0. >> >>> >>> Look forward your insight. 
>>> >>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c >>> index dee416b3f6ed..ef05f246df73 100644 >>> --- a/mm/huge_memory.c >>> +++ b/mm/huge_memory.c >>> @@ -3704,8 +3704,8 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order, >>> if (new_order == 1) >>> return false; >>> } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { >>> - if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && >>> - !mapping_large_folio_support(folio->mapping)) { >>> + unsigned int min_order = mapping_min_folio_order(folio->mapping); >>> + if (new_order < min_order) { >> >>This check is good for !CONFIG_READ_ONLY_THP_FOR_FS, but >>for CONFIG_READ_ONLY_THP_FOR_FS and !mapping_large_folio_support(), >>min_order is always 0, how can new_order be smaller than min_order >>to trigger the warning below? You will need to check new_order against >>mapping_max_folio_order(). >> >>OK, basically the check should be: >> > >Thanks for your analysis. > >>if (new_order < mapping_min_folio_order() || new_order > mapping_max_folio_order()). >> > >This reminds me one thing, we don't check on max_order now. > >For example, the supported split order is [3, 5]. But new_order is set to 6. > >In current kernel, we don't do this. try_folio_split_or_unmap() pass >min_order. But selftest will split from pmd_order - 1. > >>Then, you might want to add a helper function mapping_folio_order_supported() >>instead and change the warning message below to "Cannot split file folio to >>unsupported order [%d, %d]", min_order, max_order (showing min/max order >>is optional since it kinda defeat the purpose of having the helper function). >>Of course, the comment needs to be changed. >> >>Hmm, but still how could the above check to trigger the warning when >>split_type == SPLIT_TYPE_NON_UNIFORM and new_order is 0? It will not >>trigger, since new_order (as 0) is supported by the mapping. 
>> >>I guess the min_order check code has to be in the else branch along >>with the existing "if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order)". >> > >I am trying to think another way. > >For uniform split, after-split folio order is new_order. >For non-uniform split, after-split folio order is [new_order, old_order - 1]. > >So I come up following draft change. > >diff --git a/mm/huge_memory.c b/mm/huge_memory.c >index dee416b3f6ed..873680ab4cbb 100644 >--- a/mm/huge_memory.c >+++ b/mm/huge_memory.c >@@ -3703,28 +3703,18 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order, > "Cannot split to order-1 folio"); > if (new_order == 1) > return false; >- } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { >- if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && >- !mapping_large_folio_support(folio->mapping)) { >- /* >- * We can always split a folio down to a single page >- * (new_order == 0) uniformly. >- * >- * For any other scenario >- * a) uniform split targeting a large folio >- * (new_order > 0) >- * b) any non-uniform split >- * we must confirm that the file system supports large >- * folios. >- * >- * Note that we might still have THPs in such >- * mappings, which is created from khugepaged when >- * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that >- * case, the mapping does not actually support large >- * folios properly. >- */ >+ } else { >+ /* >+ * Some explanation here. >+ */ >+ if (new_order && !mapping_folio_order_supported(new_order)) { >+ VM_WARN_ONCE(warns, >+ "Cannot split file folio to unsupported order: %d", new_order); >+ return false; >+ } >+ if (split_type == SPLIT_TYPE_NON_UNIFORM && !mapping_folio_order_supported(old_order - 1)) { > VM_WARN_ONCE(warns, >- "Cannot split file folio to non-0 order"); >+ "Cannot split file folio to unsupported order: %d", old_order - 1); > return false; > } > } Hi, Zi Yan Does it look good to you? 
-- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-06 3:41 ` [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang 2025-11-06 10:20 ` David Hildenbrand (Red Hat) 2025-11-07 0:46 ` Zi Yan @ 2025-11-17 1:22 ` Wei Yang 2025-11-17 15:56 ` Zi Yan 2 siblings, 1 reply; 32+ messages in thread From: Wei Yang @ 2025-11-17 1:22 UTC (permalink / raw) To: Wei Yang Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Thu, Nov 06, 2025 at 03:41:55AM +0000, Wei Yang wrote: >The functions uniform_split_supported() and >non_uniform_split_supported() share significantly similar logic. > >The only functional difference is that uniform_split_supported() >includes an additional check on the requested @new_order. > >The reason for this check comes from the following two aspects: > > * some file system or swap cache just supports order-0 folio > * the behavioral difference between uniform/non-uniform split > >The behavioral difference between uniform split and non-uniform: > > * uniform split splits folio directly to @new_order > * non-uniform split creates after-split folios with orders from > folio_order(folio) - 1 to new_order. > >This means for non-uniform split or !new_order split we should check the >file system and swap cache respectively. > >This commit unifies the logic and merge the two functions into a single >combined helper, removing redundant code and simplifying the split >support checking mechanism. > >Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >Cc: Zi Yan <ziy@nvidia.com> >Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> > [...] 
>-/* See comments in non_uniform_split_supported() */ >-bool uniform_split_supported(struct folio *folio, unsigned int new_order, >- bool warns) >-{ >- if (folio_test_anon(folio)) { >- VM_WARN_ONCE(warns && new_order == 1, >- "Cannot split to order-1 folio"); >- if (new_order == 1) >- return false; >- } else if (new_order) { >+ } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { > if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && > !mapping_large_folio_support(folio->mapping)) { After re-scan the code, I found we may have a NULL pointer dereference here. We bail out if folio->mapping == NULL in __folio_split(), which means it is possible to be NULL. But we access mapping->flags here. Looks there is no bug report yet, so I am not sure it worth a separate fix to original code. >+ /* >+ * We can always split a folio down to a single page >+ * (new_order == 0) uniformly. >+ * >+ * For any other scenario >+ * a) uniform split targeting a large folio >+ * (new_order > 0) >+ * b) any non-uniform split >+ * we must confirm that the file system supports large >+ * folios. >+ * >+ * Note that we might still have THPs in such >+ * mappings, which is created from khugepaged when >+ * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that >+ * case, the mapping does not actually support large >+ * folios properly. >+ */ > VM_WARN_ONCE(warns, > "Cannot split file folio to non-0 order"); > return false; > } > } > >- if (new_order && folio_test_swapcache(folio)) { >+ /* >+ * swapcache folio could only be split to order 0 >+ * >+ * non-uniform split creates after-split folios with orders from >+ * folio_order(folio) - 1 to new_order, making it not suitable for any >+ * swapcache folio split. Only uniform split to order-0 can be used >+ * here. 
>+ */ >+ if ((split_type == SPLIT_TYPE_NON_UNIFORM || new_order) && folio_test_swapcache(folio)) { > VM_WARN_ONCE(warns, > "Cannot split swapcache folio to non-0 order"); > return false; >@@ -3794,11 +3787,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order, > if (new_order >= old_order) > return -EINVAL; > >- if (split_type == SPLIT_TYPE_UNIFORM && !uniform_split_supported(folio, new_order, true)) >- return -EINVAL; >- >- if (split_type == SPLIT_TYPE_NON_UNIFORM && >- !non_uniform_split_supported(folio, new_order, true)) >+ if (!folio_split_supported(folio, new_order, split_type, /* warn = */ true)) > return -EINVAL; > > is_hzp = is_huge_zero_folio(folio); >-- >2.34.1 -- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-17 1:22 ` Wei Yang @ 2025-11-17 15:56 ` Zi Yan 2025-11-18 2:10 ` Wei Yang 2025-11-18 3:33 ` Wei Yang 0 siblings, 2 replies; 32+ messages in thread From: Zi Yan @ 2025-11-17 15:56 UTC (permalink / raw) To: Wei Yang Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On 16 Nov 2025, at 20:22, Wei Yang wrote: > On Thu, Nov 06, 2025 at 03:41:55AM +0000, Wei Yang wrote: >> The functions uniform_split_supported() and >> non_uniform_split_supported() share significantly similar logic. >> >> The only functional difference is that uniform_split_supported() >> includes an additional check on the requested @new_order. >> >> The reason for this check comes from the following two aspects: >> >> * some file system or swap cache just supports order-0 folio >> * the behavioral difference between uniform/non-uniform split >> >> The behavioral difference between uniform split and non-uniform: >> >> * uniform split splits folio directly to @new_order >> * non-uniform split creates after-split folios with orders from >> folio_order(folio) - 1 to new_order. >> >> This means for non-uniform split or !new_order split we should check the >> file system and swap cache respectively. >> >> This commit unifies the logic and merge the two functions into a single >> combined helper, removing redundant code and simplifying the split >> support checking mechanism. >> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >> Cc: Zi Yan <ziy@nvidia.com> >> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> >> > [...] 
>> -/* See comments in non_uniform_split_supported() */ >> -bool uniform_split_supported(struct folio *folio, unsigned int new_order, >> - bool warns) >> -{ >> - if (folio_test_anon(folio)) { >> - VM_WARN_ONCE(warns && new_order == 1, >> - "Cannot split to order-1 folio"); >> - if (new_order == 1) >> - return false; >> - } else if (new_order) { >> + } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { >> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && >> !mapping_large_folio_support(folio->mapping)) { > > After re-scan the code, I found we may have a NULL pointer dereference here. > > We bail out if folio->mapping == NULL in __folio_split(), which means it is > possible to be NULL. But we access mapping->flags here. > > Looks there is no bug report yet, so I am not sure it worth a separate fix to > original code. Probably because the race is small, but a fix is still needed. Likely commit 6a50c9b512f7 ("mm: huge_memory: fix misused mapping_large_folio_support() for anon folios") introduced it, but please double check. > >> + /* >> + * We can always split a folio down to a single page >> + * (new_order == 0) uniformly. >> + * >> + * For any other scenario >> + * a) uniform split targeting a large folio >> + * (new_order > 0) >> + * b) any non-uniform split >> + * we must confirm that the file system supports large >> + * folios. >> + * >> + * Note that we might still have THPs in such >> + * mappings, which is created from khugepaged when >> + * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that >> + * case, the mapping does not actually support large >> + * folios properly. 
>> + */ >> VM_WARN_ONCE(warns, >> "Cannot split file folio to non-0 order"); >> return false; >> } >> } >> >> - if (new_order && folio_test_swapcache(folio)) { >> + /* >> + * swapcache folio could only be split to order 0 >> + * >> + * non-uniform split creates after-split folios with orders from >> + * folio_order(folio) - 1 to new_order, making it not suitable for any >> + * swapcache folio split. Only uniform split to order-0 can be used >> + * here. >> + */ >> + if ((split_type == SPLIT_TYPE_NON_UNIFORM || new_order) && folio_test_swapcache(folio)) { >> VM_WARN_ONCE(warns, >> "Cannot split swapcache folio to non-0 order"); >> return false; >> @@ -3794,11 +3787,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order, >> if (new_order >= old_order) >> return -EINVAL; >> >> - if (split_type == SPLIT_TYPE_UNIFORM && !uniform_split_supported(folio, new_order, true)) >> - return -EINVAL; >> - >> - if (split_type == SPLIT_TYPE_NON_UNIFORM && >> - !non_uniform_split_supported(folio, new_order, true)) >> + if (!folio_split_supported(folio, new_order, split_type, /* warn = */ true)) >> return -EINVAL; >> >> is_hzp = is_huge_zero_folio(folio); >> -- >> 2.34.1 > > -- > Wei Yang > Help you, Help me Best Regards, Yan, Zi ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-17 15:56 ` Zi Yan @ 2025-11-18 2:10 ` Wei Yang 2025-11-18 3:33 ` Wei Yang 1 sibling, 0 replies; 32+ messages in thread From: Wei Yang @ 2025-11-18 2:10 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Mon, Nov 17, 2025 at 10:56:39AM -0500, Zi Yan wrote: >On 16 Nov 2025, at 20:22, Wei Yang wrote: > >> On Thu, Nov 06, 2025 at 03:41:55AM +0000, Wei Yang wrote: >>> The functions uniform_split_supported() and >>> non_uniform_split_supported() share significantly similar logic. >>> >>> The only functional difference is that uniform_split_supported() >>> includes an additional check on the requested @new_order. >>> >>> The reason for this check comes from the following two aspects: >>> >>> * some file system or swap cache just supports order-0 folio >>> * the behavioral difference between uniform/non-uniform split >>> >>> The behavioral difference between uniform split and non-uniform: >>> >>> * uniform split splits folio directly to @new_order >>> * non-uniform split creates after-split folios with orders from >>> folio_order(folio) - 1 to new_order. >>> >>> This means for non-uniform split or !new_order split we should check the >>> file system and swap cache respectively. >>> >>> This commit unifies the logic and merge the two functions into a single >>> combined helper, removing redundant code and simplifying the split >>> support checking mechanism. >>> >>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >>> Cc: Zi Yan <ziy@nvidia.com> >>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> >>> >> [...] 
>>> -/* See comments in non_uniform_split_supported() */ >>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order, >>> - bool warns) >>> -{ >>> - if (folio_test_anon(folio)) { >>> - VM_WARN_ONCE(warns && new_order == 1, >>> - "Cannot split to order-1 folio"); >>> - if (new_order == 1) >>> - return false; >>> - } else if (new_order) { >>> + } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { >>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && >>> !mapping_large_folio_support(folio->mapping)) { >> >> After re-scan the code, I found we may have a NULL pointer dereference here. >> >> We bail out if folio->mapping == NULL in __folio_split(), which means it is >> possible to be NULL. But we access mapping->flags here. >> >> Looks there is no bug report yet, so I am not sure it worth a separate fix to >> original code. > >Probably because the race is small, but a fix is still needed. >Likely commit 6a50c9b512f7 ("mm: huge_memory: fix misused >mapping_large_folio_support() for anon folios") introduced it, but please >double check. > Thanks, I will check which one introduced this issue. @Andrew, I will prepare a stand-alone fix for this issue and rebase this patch set on top of it. -- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 32+ messages in thread
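[Editorial note] The NULL dereference being reported can be reduced to a minimal sketch under stated assumptions: the `struct address_space` layout, the flag bit, and the helper `split_order_supported()` below are invented for illustration, not the kernel definitions. The point is only the guard pattern the stand-alone fix needs — checking `folio->mapping` for NULL before handing it to `mapping_large_folio_support()`, since a folio truncated in parallel has a NULL mapping.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in: the real mapping_large_folio_support() reads
 * mapping->flags, so a NULL mapping is dereferenced unconditionally. */
struct address_space {
	unsigned long flags;
};

#define LARGE_FOLIO_FLAG (1UL << 0)	/* invented bit for illustration */

static bool mapping_large_folio_support(const struct address_space *mapping)
{
	return mapping->flags & LARGE_FOLIO_FLAG;	/* crashes if NULL */
}

/*
 * Guarded pattern: a file folio racing with truncation can have a NULL
 * mapping, so refuse the split before touching mapping->flags.
 */
static bool split_order_supported(const struct address_space *mapping)
{
	if (!mapping)
		return false;
	return mapping_large_folio_support(mapping);
}
```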
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-17 15:56 ` Zi Yan 2025-11-18 2:10 ` Wei Yang @ 2025-11-18 3:33 ` Wei Yang 2025-11-18 4:10 ` Zi Yan 1 sibling, 1 reply; 32+ messages in thread From: Wei Yang @ 2025-11-18 3:33 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Mon, Nov 17, 2025 at 10:56:39AM -0500, Zi Yan wrote: >On 16 Nov 2025, at 20:22, Wei Yang wrote: > >> On Thu, Nov 06, 2025 at 03:41:55AM +0000, Wei Yang wrote: >>> The functions uniform_split_supported() and >>> non_uniform_split_supported() share significantly similar logic. >>> >>> The only functional difference is that uniform_split_supported() >>> includes an additional check on the requested @new_order. >>> >>> The reason for this check comes from the following two aspects: >>> >>> * some file system or swap cache just supports order-0 folio >>> * the behavioral difference between uniform/non-uniform split >>> >>> The behavioral difference between uniform split and non-uniform: >>> >>> * uniform split splits folio directly to @new_order >>> * non-uniform split creates after-split folios with orders from >>> folio_order(folio) - 1 to new_order. >>> >>> This means for non-uniform split or !new_order split we should check the >>> file system and swap cache respectively. >>> >>> This commit unifies the logic and merge the two functions into a single >>> combined helper, removing redundant code and simplifying the split >>> support checking mechanism. >>> >>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >>> Cc: Zi Yan <ziy@nvidia.com> >>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> >>> >> [...] 
>>> -/* See comments in non_uniform_split_supported() */ >>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order, >>> - bool warns) >>> -{ >>> - if (folio_test_anon(folio)) { >>> - VM_WARN_ONCE(warns && new_order == 1, >>> - "Cannot split to order-1 folio"); >>> - if (new_order == 1) >>> - return false; >>> - } else if (new_order) { >>> + } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { >>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && >>> !mapping_large_folio_support(folio->mapping)) { >> >> After re-scan the code, I found we may have a NULL pointer dereference here. >> >> We bail out if folio->mapping == NULL in __folio_split(), which means it is >> possible to be NULL. But we access mapping->flags here. >> >> Looks there is no bug report yet, so I am not sure it worth a separate fix to >> original code. > >Probably because the race is small, but a fix is still needed. >Likely commit 6a50c9b512f7 ("mm: huge_memory: fix misused >mapping_large_folio_support() for anon folios") introduced it, but please >double check. > After searching the history, it has four related commits. I listed here in timeline. 
[1] commit c010d47f107f609b9f4d6a103b6dfc53889049e9 Author: Zi Yan <ziy@nvidia.com> Date: Mon Feb 26 15:55:33 2024 -0500 mm: thp: split huge page to any lower order pages [2] commit 6a50c9b512f7734bc356f4bd47885a6f7c98491a (HEAD -> tmp) Author: Ran Xiaokai <ran.xiaokai@zte.com.cn> Date: Fri Jun 7 17:40:48 2024 +0800 mm: huge_memory: fix misused mapping_large_folio_support() for anon folios [3] commit 9b2f764933eb5e3ac9ebba26e3341529219c4401 (refs/bisect/bad) Author: Zi Yan <ziy@nvidia.com> Date: Wed Jan 22 11:19:27 2025 -0500 mm/huge_memory: allow split shmem large folio to any lower order [4] commit 58729c04cf1092b87aeef0bf0998c9e2e4771133 (HEAD -> tmp) Author: Zi Yan <ziy@nvidia.com> Date: Fri Mar 7 12:39:57 2025 -0500 mm/huge_memory: add buddy allocator like (non-uniform) folio_split() So I think the fix tag should be [1], right? And do we need cc stable? -- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-18 3:33 ` Wei Yang @ 2025-11-18 4:10 ` Zi Yan 2025-11-18 18:32 ` Andrew Morton 0 siblings, 1 reply; 32+ messages in thread From: Zi Yan @ 2025-11-18 4:10 UTC (permalink / raw) To: Wei Yang Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On 17 Nov 2025, at 22:33, Wei Yang wrote: > On Mon, Nov 17, 2025 at 10:56:39AM -0500, Zi Yan wrote: >> On 16 Nov 2025, at 20:22, Wei Yang wrote: >> >>> On Thu, Nov 06, 2025 at 03:41:55AM +0000, Wei Yang wrote: >>>> The functions uniform_split_supported() and >>>> non_uniform_split_supported() share significantly similar logic. >>>> >>>> The only functional difference is that uniform_split_supported() >>>> includes an additional check on the requested @new_order. >>>> >>>> The reason for this check comes from the following two aspects: >>>> >>>> * some file system or swap cache just supports order-0 folio >>>> * the behavioral difference between uniform/non-uniform split >>>> >>>> The behavioral difference between uniform split and non-uniform: >>>> >>>> * uniform split splits folio directly to @new_order >>>> * non-uniform split creates after-split folios with orders from >>>> folio_order(folio) - 1 to new_order. >>>> >>>> This means for non-uniform split or !new_order split we should check the >>>> file system and swap cache respectively. >>>> >>>> This commit unifies the logic and merge the two functions into a single >>>> combined helper, removing redundant code and simplifying the split >>>> support checking mechanism. >>>> >>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com> >>>> Cc: Zi Yan <ziy@nvidia.com> >>>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org> >>>> >>> [...] 
>>>> -/* See comments in non_uniform_split_supported() */ >>>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order, >>>> - bool warns) >>>> -{ >>>> - if (folio_test_anon(folio)) { >>>> - VM_WARN_ONCE(warns && new_order == 1, >>>> - "Cannot split to order-1 folio"); >>>> - if (new_order == 1) >>>> - return false; >>>> - } else if (new_order) { >>>> + } else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) { >>>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && >>>> !mapping_large_folio_support(folio->mapping)) { >>> >>> After re-scan the code, I found we may have a NULL pointer dereference here. >>> >>> We bail out if folio->mapping == NULL in __folio_split(), which means it is >>> possible to be NULL. But we access mapping->flags here. >>> >>> Looks there is no bug report yet, so I am not sure it worth a separate fix to >>> original code. >> >> Probably because the race is small, but a fix is still needed. >> Likely commit 6a50c9b512f7 ("mm: huge_memory: fix misused >> mapping_large_folio_support() for anon folios") introduced it, but please >> double check. >> > > After searching the history, it has four related commits. > > I listed here in timeline. 
> > [1] commit c010d47f107f609b9f4d6a103b6dfc53889049e9 > Author: Zi Yan <ziy@nvidia.com> > Date: Mon Feb 26 15:55:33 2024 -0500 > > mm: thp: split huge page to any lower order pages > > [2] commit 6a50c9b512f7734bc356f4bd47885a6f7c98491a (HEAD -> tmp) > Author: Ran Xiaokai <ran.xiaokai@zte.com.cn> > Date: Fri Jun 7 17:40:48 2024 +0800 > > mm: huge_memory: fix misused mapping_large_folio_support() for anon folios > > [3] commit 9b2f764933eb5e3ac9ebba26e3341529219c4401 (refs/bisect/bad) > Author: Zi Yan <ziy@nvidia.com> > Date: Wed Jan 22 11:19:27 2025 -0500 > > mm/huge_memory: allow split shmem large folio to any lower order > > [4] commit 58729c04cf1092b87aeef0bf0998c9e2e4771133 (HEAD -> tmp) > Author: Zi Yan <ziy@nvidia.com> > Date: Fri Mar 7 12:39:57 2025 -0500 > > mm/huge_memory: add buddy allocator like (non-uniform) folio_split() > > So I think the fix tag should be [1], right? I think so. > > And do we need cc stable? Yes, please. Thanks. Best Regards, Yan, Zi ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-18 4:10 ` Zi Yan @ 2025-11-18 18:32 ` Andrew Morton 2025-11-18 18:55 ` Zi Yan 0 siblings, 1 reply; 32+ messages in thread From: Andrew Morton @ 2025-11-18 18:32 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Mon, 17 Nov 2025 23:10:26 -0500 Zi Yan <ziy@nvidia.com> wrote: > > [4] commit 58729c04cf1092b87aeef0bf0998c9e2e4771133 (HEAD -> tmp) > > Author: Zi Yan <ziy@nvidia.com> > > Date: Fri Mar 7 12:39:57 2025 -0500 > > > > mm/huge_memory: add buddy allocator like (non-uniform) folio_split() > > > > So I think the fix tag should be [1], right? > > I think so. > > > > > And do we need cc stable? > > Yes, please. I added: Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages") Cc: <stable@vger.kernel.org> ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-18 18:32 ` Andrew Morton @ 2025-11-18 18:55 ` Zi Yan 2025-11-18 22:06 ` Andrew Morton 0 siblings, 1 reply; 32+ messages in thread From: Zi Yan @ 2025-11-18 18:55 UTC (permalink / raw) To: Andrew Morton, Wei Yang Cc: david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On 18 Nov 2025, at 13:32, Andrew Morton wrote: > On Mon, 17 Nov 2025 23:10:26 -0500 Zi Yan <ziy@nvidia.com> wrote: > >>> [4] commit 58729c04cf1092b87aeef0bf0998c9e2e4771133 (HEAD -> tmp) >>> Author: Zi Yan <ziy@nvidia.com> >>> Date: Fri Mar 7 12:39:57 2025 -0500 >>> >>> mm/huge_memory: add buddy allocator like (non-uniform) folio_split() >>> >>> So I think the fix tag should be [1], right? >> >> I think so. >> >>> >>> And do we need cc stable? >> >> Yes, please. > > I added: > > Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages") > Cc: <stable@vger.kernel.org> Hi Andrew, This patch does not fix the NULL dereferencing issue yet. Wei is going to 1. send a patch to fix the bug on top of mm-stable, 2. resend this patchset on top of the fix in 1. This might be easier for back porting the fix. Maybe you can drop this series for now. Wei, do you agree? Best Regards, Yan, Zi ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-18 18:55 ` Zi Yan @ 2025-11-18 22:06 ` Andrew Morton 2025-11-19 0:52 ` Wei Yang 0 siblings, 1 reply; 32+ messages in thread From: Andrew Morton @ 2025-11-18 22:06 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Tue, 18 Nov 2025 13:55:15 -0500 Zi Yan <ziy@nvidia.com> wrote: > >>> And do we need cc stable? > >> > >> Yes, please. > > > > I added: > > > > Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages") > > Cc: <stable@vger.kernel.org> > > Hi Andrew, > > This patch does not fix the NULL dereferencing issue yet. Wei is going to > 1. send a patch to fix the bug on top of mm-stable, > 2. resend this patchset on top of the fix in 1. > > This might be easier for back porting the fix. Maybe you can drop this series > for now. > > Wei, do you agree? Dropping this series messes up later patches. I could of course redo things but would prefer a fixed up version of this patchset asap, please. ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-18 22:06 ` Andrew Morton @ 2025-11-19 0:52 ` Wei Yang 2025-11-20 21:16 ` Andrew Morton 0 siblings, 1 reply; 32+ messages in thread From: Wei Yang @ 2025-11-19 0:52 UTC (permalink / raw) To: Andrew Morton Cc: Zi Yan, Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Tue, Nov 18, 2025 at 02:06:58PM -0800, Andrew Morton wrote: >On Tue, 18 Nov 2025 13:55:15 -0500 Zi Yan <ziy@nvidia.com> wrote: > >> >>> And do we need cc stable? >> >> >> >> Yes, please. >> > >> > I added: >> > >> > Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages") >> > Cc: <stable@vger.kernel.org> >> >> Hi Andrew, >> >> This patch does not fix the NULL dereferencing issue yet. Wei is going to >> 1. send a patch to fix the bug on top of mm-stable, >> 2. resend this patchset on top of the fix in 1. >> >> This might be easier for back porting the fix. Maybe you can drop this series >> for now. >> >> Wei, do you agree? > >Dropping this series messes up later patches. I could of course redo >things but would prefer a fixed up version of this patchset asap, please. OK, I will prepare a fix on top of this. -- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-19 0:52 ` Wei Yang @ 2025-11-20 21:16 ` Andrew Morton 2025-11-21 0:55 ` Zi Yan 2025-11-21 9:00 ` Wei Yang 0 siblings, 2 replies; 32+ messages in thread From: Andrew Morton @ 2025-11-20 21:16 UTC (permalink / raw) To: Wei Yang Cc: Zi Yan, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Wed, 19 Nov 2025 00:52:48 +0000 Wei Yang <richard.weiyang@gmail.com> wrote: > >> Hi Andrew, > >> > >> This patch does not fix the NULL dereferencing issue yet. Wei is going to > >> 1. send a patch to fix the bug on top of mm-stable, > >> 2. resend this patchset on top of the fix in 1. > >> > >> This might be easier for back porting the fix. Maybe you can drop this series > >> for now. > >> > >> Wei, do you agree? > > > >Dropping this series messes up later patches. I could of course redo > >things but would prefer a fixed up version of this patchset asap, please. > > Ok, i will prepare a fixed on top of this. Did this happen? I remain unclear on the status of this small series. Issues which I have flagged are: https://lkml.kernel.org/r/20251118021047.uyldr7aay6fb5evt@master https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com https://lkml.kernel.org/r/20251117012239.lqm33uu4vl4y5zqc@master https://lkml.kernel.org/r/337CD281-F5B3-47FE-82C3-ECB236450F60@nvidia.com and a couple of comments from yourself indicating that updates are required. Have all these things now been addressed? ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-20 21:16 ` Andrew Morton @ 2025-11-21 0:55 ` Zi Yan 2025-11-21 9:00 ` Wei Yang 1 sibling, 0 replies; 32+ messages in thread From: Zi Yan @ 2025-11-21 0:55 UTC (permalink / raw) To: Andrew Morton Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On 20 Nov 2025, at 16:16, Andrew Morton wrote: > On Wed, 19 Nov 2025 00:52:48 +0000 Wei Yang <richard.weiyang@gmail.com> wrote: > >>>> Hi Andrew, >>>> >>>> This patch does not fix the NULL dereferencing issue yet. Wei is going to >>>> 1. send a patch to fix the bug on top of mm-stable, >>>> 2. resend this patchset on top of the fix in 1. >>>> >>>> This might be easier for back porting the fix. Maybe you can drop this series >>>> for now. >>>> >>>> Wei, do you agree? >>> >>> Dropping this series messes up later patches. I could of course redo >>> things but would prefer a fixed up version of this patchset asap, please. >> >> Ok, i will prepare a fixed on top of this. > > Did this happen? > > I remain unclear on the status of this small series. Issues which I > have flagged are: > > https://lkml.kernel.org/r/20251118021047.uyldr7aay6fb5evt@master > https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com > https://lkml.kernel.org/r/20251117012239.lqm33uu4vl4y5zqc@master > https://lkml.kernel.org/r/337CD281-F5B3-47FE-82C3-ECB236450F60@nvidia.com > > and a couple of comments from yourself indicating that updates are > required. > > Have all these things now been addressed? The fix is: https://lore.kernel.org/all/20251119235302.24773-1-richard.weiyang@gmail.com/. It seems that you already picked it up. Best Regards, Yan, Zi ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-20 21:16 ` Andrew Morton 2025-11-21 0:55 ` Zi Yan @ 2025-11-21 9:00 ` Wei Yang 2025-11-21 14:59 ` Zi Yan 1 sibling, 1 reply; 32+ messages in thread From: Wei Yang @ 2025-11-21 9:00 UTC (permalink / raw) To: Andrew Morton Cc: Wei Yang, Zi Yan, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Thu, Nov 20, 2025 at 01:16:21PM -0800, Andrew Morton wrote: >On Wed, 19 Nov 2025 00:52:48 +0000 Wei Yang <richard.weiyang@gmail.com> wrote: > >> >> Hi Andrew, >> >> >> >> This patch does not fix the NULL dereferencing issue yet. Wei is going to >> >> 1. send a patch to fix the bug on top of mm-stable, >> >> 2. resend this patchset on top of the fix in 1. >> >> >> >> This might be easier for back porting the fix. Maybe you can drop this series >> >> for now. >> >> >> >> Wei, do you agree? >> > >> >Dropping this series messes up later patches. I could of course redo >> >things but would prefer a fixed up version of this patchset asap, please. >> >> Ok, i will prepare a fixed on top of this. > >Did this happen? > >I remain unclear on the status of this small series. Issues which I >have flagged are: > >https://lkml.kernel.org/r/20251118021047.uyldr7aay6fb5evt@master >https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com >https://lkml.kernel.org/r/20251117012239.lqm33uu4vl4y5zqc@master >https://lkml.kernel.org/r/337CD281-F5B3-47FE-82C3-ECB236450F60@nvidia.com > There are two related topics: 1.
A null pointer dereference bug: https://lkml.kernel.org/r/20251118021047.uyldr7aay6fb5evt@master https://lkml.kernel.org/r/20251117012239.lqm33uu4vl4y5zqc@master https://lkml.kernel.org/r/337CD281-F5B3-47FE-82C3-ECB236450F60@nvidia.com These three mails are related to the bug, which is fixed in: http://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com It currently looks good, and I will do the backport when necessary. 2. A further cleanup attempt: https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com This one is the related mail. I proposed one version in http://lkml.kernel.org/r/20251114075703.10434-1-richard.weiyang@gmail.com But it is not proper; I will do follow-up work later. >and a couple of comments from yourself indicating that updates are >required. > >Have all these things now been addressed? Hope it is clear now :-) -- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-21 9:00 ` Wei Yang @ 2025-11-21 14:59 ` Zi Yan 2025-11-21 16:50 ` Andrew Morton 0 siblings, 1 reply; 32+ messages in thread From: Zi Yan @ 2025-11-21 14:59 UTC (permalink / raw) To: Andrew Morton, Wei Yang Cc: david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On 21 Nov 2025, at 4:00, Wei Yang wrote: > On Thu, Nov 20, 2025 at 01:16:21PM -0800, Andrew Morton wrote: >> On Wed, 19 Nov 2025 00:52:48 +0000 Wei Yang <richard.weiyang@gmail.com> wrote: >> >>>>> Hi Andrew, >>>>> >>>>> This patch does not fix the NULL dereferencing issue yet. Wei is going to >>>>> 1. send a patch to fix the bug on top of mm-stable, >>>>> 2. resend this patchset on top of the fix in 1. >>>>> >>>>> This might be easier for back porting the fix. Maybe you can drop this series >>>>> for now. >>>>> >>>>> Wei, do you agree? >>>> >>>> Dropping this series messes up later patches. I could of course redo >>>> things but would prefer a fixed up version of this patchset asap, please. >>> >>> Ok, i will prepare a fixed on top of this. >> >> Did this happen? >> >> I remain unclear on the status of this small series. Issues which I >> have flagged are: >> >> https://lkml.kernel.org/r/20251118021047.uyldr7aay6fb5evt@master >> https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com >> https://lkml.kernel.org/r/20251117012239.lqm33uu4vl4y5zqc@master >> https://lkml.kernel.org/r/337CD281-F5B3-47FE-82C3-ECB236450F60@nvidia.com >> > > There are two related topic: > > 1. 
A null pointer dereference bug: > > https://lkml.kernel.org/r/20251118021047.uyldr7aay6fb5evt@master > https://lkml.kernel.org/r/20251117012239.lqm33uu4vl4y5zqc@master > https://lkml.kernel.org/r/337CD281-F5B3-47FE-82C3-ECB236450F60@nvidia.com > > This three mail are related to the bug, which is fixed in : > > http://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com > > Currently looks good, and will do backport when necessary. This one is picked up by Andrew. > > 2. A further cleanup attempt: > > https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com > > This one is the related mail. > > I proposed one version in > > http://lkml.kernel.org/r/20251114075703.10434-1-richard.weiyang@gmail.com > > But it is not proper, will do follow up work later. Please refrain from sending more patches related to __folio_split() and its related functions until the above hotfix is merged. I also have an ongoing cleanup patchset[1] and want to get it in before any other changes. Thanks. [1] https://lore.kernel.org/all/20251120035953.1115736-1-ziy@nvidia.com/ > >> and a couple of comments from yourself indicating that updates are >> required. >> >> Have all these things now been addressed? > > Hope it is clear now :-) > > -- > Wei Yang > Help you, Help me -- Best Regards, Yan, Zi ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-21 14:59 ` Zi Yan @ 2025-11-21 16:50 ` Andrew Morton 2025-11-21 17:00 ` Zi Yan 0 siblings, 1 reply; 32+ messages in thread From: Andrew Morton @ 2025-11-21 16:50 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Fri, 21 Nov 2025 09:59:42 -0500 Zi Yan <ziy@nvidia.com> wrote: > > > > 2. A further cleanup attempt: > > > > https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com > > > > This one is the related mail. > > > > I proposed one version in > > > > http://lkml.kernel.org/r/20251114075703.10434-1-richard.weiyang@gmail.com > > > > But it is not proper, will do follow up work later. > > Please refrain from sending more patches related to __folio_split() and its > related functions until the above hotfix is merged. You're referring to https://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com? > I also have an ongoing > cleanup patchset[1] and want to get it in before any other changes. I remain unclear on the status of this patchset. Is it considered good to upstream or is additional work required? ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-21 16:50 ` Andrew Morton @ 2025-11-21 17:00 ` Zi Yan 2025-11-21 18:39 ` Andrew Morton 0 siblings, 1 reply; 32+ messages in thread From: Zi Yan @ 2025-11-21 17:00 UTC (permalink / raw) To: Andrew Morton Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On 21 Nov 2025, at 11:50, Andrew Morton wrote: > On Fri, 21 Nov 2025 09:59:42 -0500 Zi Yan <ziy@nvidia.com> wrote: > >>> >>> 2. A further cleanup attempt: >>> >>> https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com >>> >>> This one is the related mail. >>> >>> I proposed one version in >>> >>> http://lkml.kernel.org/r/20251114075703.10434-1-richard.weiyang@gmail.com >>> >>> But it is not proper, will do follow up work later. >> >> Please refrain from sending more patches related to __folio_split() and its >> related functions until the above hotfix is merged. > > You're referring to > https://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com? Yes. > >> I also have an ongoing >> cleanup patchset[1] and want to get it in before any other changes. > > I remain unclear on the status of this patchset. Is it considered good > to upstream or is additional work required? I am still having a discussion with David Hildenbrand about it and hope to get it sorted out soon. The reason is that the above hotfix is good for backport, but future users of folio_split_supported() can still dereference a NULL folio->mapping unless they check folio->mapping != NULL beforehand. I would like to avoid that by refactoring folio_split_supported(). Wei’s further cleanup patch can come after my refactoring by just moving a code hunk above. Best Regards, Yan, Zi ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() 2025-11-21 17:00 ` Zi Yan @ 2025-11-21 18:39 ` Andrew Morton 2025-11-21 19:09 ` Zi Yan 0 siblings, 1 reply; 32+ messages in thread From: Andrew Morton @ 2025-11-21 18:39 UTC (permalink / raw) To: Zi Yan Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm On Fri, 21 Nov 2025 12:00:51 -0500 Zi Yan <ziy@nvidia.com> wrote: > On 21 Nov 2025, at 11:50, Andrew Morton wrote: > > > On Fri, 21 Nov 2025 09:59:42 -0500 Zi Yan <ziy@nvidia.com> wrote: > > > >>> > >>> 2. A further cleanup attempt: > >>> > >>> https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com > >>> > >>> This one is the related mail. > >>> > >>> I proposed one version in > >>> > >>> http://lkml.kernel.org/r/20251114075703.10434-1-richard.weiyang@gmail.com > >>> > >>> But it is not proper, will do follow up work later. > >> > >> Please refrain from sending more patches related to __folio_split() and its > >> related functions until the above hotfix is merged. > > > > You're referring to > > https://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com? > > Yes. > > > > >> I also have an ongoing > >> cleanup patchset[1] and want to get it in before any other changes. > > > > I remain unclear on the status of this patchset. Is it considered good > > to upstream or is additional work required? > > I am still having a discussion with David Hildenbrand about it and hopefully > get it sorted out soon. The reason is that the above hotfix is good for > backport but future user of folio_split_supported() can still dereference > a NULL folio->mapping unless they check folio->mapping != NULL beforehand. > I would like to avoid that by refactoring folio_split_support(). > > Wei’s further cleanup patch can come after my refactoring by just moving > a code hunk above. > This is coming down to the wire. 
I'm considering dropping mm-huge_memory-introduce-enum-split_type-for-clarity.patch mm-huge_memory-introduce-enum-split_type-for-clarity-fix.patch mm-huge_memory-merge-uniform_split_supported-and-non_uniform_split_supported.patch and mm-huge_memoryc-introduce-folio_split_unmapped.patch mm-huge_memoryc-introduce-folio_split_unmapped-v2.patch mm-huge_memoryc-introduce-folio_split_unmapped-v2-fix.patch mm-huge_memoryc-introduce-folio_split_unmapped-v2-fix-fix.patch and we can revisit after the upcoming merge window. Thoughts? ^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
  2025-11-21 18:39 ` Andrew Morton
@ 2025-11-21 19:09 ` Zi Yan
  2025-11-21 19:15 ` Andrew Morton
  0 siblings, 1 reply; 32+ messages in thread
From: Zi Yan @ 2025-11-21 19:09 UTC (permalink / raw)
To: Andrew Morton
Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On 21 Nov 2025, at 13:39, Andrew Morton wrote:

> On Fri, 21 Nov 2025 12:00:51 -0500 Zi Yan <ziy@nvidia.com> wrote:
>
>> On 21 Nov 2025, at 11:50, Andrew Morton wrote:
>>
>>> On Fri, 21 Nov 2025 09:59:42 -0500 Zi Yan <ziy@nvidia.com> wrote:
>>>
>>>>>
>>>>> 2. A further cleanup attempt:
>>>>>
>>>>> https://lkml.kernel.org/r/136E8B1C-3352-412C-8038-627F5CC8A112@nvidia.com
>>>>>
>>>>> This one is the related mail.
>>>>>
>>>>> I proposed one version in
>>>>>
>>>>> http://lkml.kernel.org/r/20251114075703.10434-1-richard.weiyang@gmail.com
>>>>>
>>>>> But it is not proper, will do follow up work later.
>>>>
>>>> Please refrain from sending more patches related to __folio_split() and its
>>>> related functions until the above hotfix is merged.
>>>
>>> You're referring to
>>> https://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com?
>>
>> Yes.
>>
>>>
>>>> I also have an ongoing
>>>> cleanup patchset[1] and want to get it in before any other changes.
>>>
>>> I remain unclear on the status of this patchset.  Is it considered good
>>> to upstream or is additional work required?
>>
>> I am still having a discussion with David Hildenbrand about it and hopefully
>> get it sorted out soon.  The reason is that the above hotfix is good for
>> backport but future user of folio_split_supported() can still dereference
>> a NULL folio->mapping unless they check folio->mapping != NULL beforehand.
>> I would like to avoid that by refactoring folio_split_support().
>>
>> Wei’s further cleanup patch can come after my refactoring by just moving
>> a code hunk above.
>>
>
> This is coming down to the wire.  I'm considering dropping
>
> mm-huge_memory-introduce-enum-split_type-for-clarity.patch
> mm-huge_memory-introduce-enum-split_type-for-clarity-fix.patch
> mm-huge_memory-merge-uniform_split_supported-and-non_uniform_split_supported.patch
>
> and
>
> mm-huge_memoryc-introduce-folio_split_unmapped.patch
> mm-huge_memoryc-introduce-folio_split_unmapped-v2.patch
> mm-huge_memoryc-introduce-folio_split_unmapped-v2-fix.patch
> mm-huge_memoryc-introduce-folio_split_unmapped-v2-fix-fix.patch
>
> and we can revisit after the upcoming merge window.  Thoughts?

These patches are fine as is.  The patch I asked Wei to hold on sending
is “[PATCH] mm/huge_memory: consolidate order-related checks into
folio_split_supported()”[1].  The current mm-new, mm-unstable trees
look good to me.  Sorry if it was not clear.

[1] https://lore.kernel.org/linux-mm/20251114075703.10434-1-richard.weiyang@gmail.com/

Best Regards,
Yan, Zi

^ permalink raw reply	[flat|nested] 32+ messages in thread
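[Archive note: the NULL-mapping hazard discussed above can be sketched in userspace C. This is a hypothetical illustration only, not the kernel's actual folio_split_supported() — the minimal structs, the `anon` flag, and the capability bit below are stand-ins; the point is the guard ordering: test `folio->mapping` for NULL before dereferencing it.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in types: real kernel structs are far richer. */
struct address_space {
	int flags;			/* placeholder capability bits */
};

struct folio {
	struct address_space *mapping;	/* NULL once a file folio is truncated */
	bool anon;			/* anonymous memory: mapping not consulted */
};

/*
 * Sketch of a split-support check that cannot dereference a NULL
 * folio->mapping: the NULL test comes before any use of the pointer.
 */
static bool folio_split_supported_sketch(const struct folio *folio)
{
	if (folio->anon)
		return true;		/* anon folios need no mapping check */
	if (!folio->mapping)
		return false;		/* truncated: refuse rather than crash */
	return !(folio->mapping->flags & 1);	/* hypothetical "no split" bit */
}
```

Without the middle check, a truncated file folio (mapping == NULL) would fault on the final dereference — the scenario Zi Yan wants future callers protected from by construction rather than by each caller remembering to check.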
* Re: [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
  2025-11-21 19:09 ` Zi Yan
@ 2025-11-21 19:15 ` Andrew Morton
  0 siblings, 0 replies; 32+ messages in thread
From: Andrew Morton @ 2025-11-21 19:15 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, david, lorenzo.stoakes, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm

On Fri, 21 Nov 2025 14:09:39 -0500 Zi Yan <ziy@nvidia.com> wrote:

> >> Wei’s further cleanup patch can come after my refactoring by just moving
> >> a code hunk above.
> >>
> >
> > This is coming down to the wire.  I'm considering dropping
> >
> > mm-huge_memory-introduce-enum-split_type-for-clarity.patch
> > mm-huge_memory-introduce-enum-split_type-for-clarity-fix.patch
> > mm-huge_memory-merge-uniform_split_supported-and-non_uniform_split_supported.patch
> >
> > and
> >
> > mm-huge_memoryc-introduce-folio_split_unmapped.patch
> > mm-huge_memoryc-introduce-folio_split_unmapped-v2.patch
> > mm-huge_memoryc-introduce-folio_split_unmapped-v2-fix.patch
> > mm-huge_memoryc-introduce-folio_split_unmapped-v2-fix-fix.patch
> >
> > and we can revisit after the upcoming merge window.  Thoughts?
>
> These patches are fine as is.

Well that's good news.

> The patch I asked Wei to hold on sending
> is “[PATCH] mm/huge_memory: consolidate order-related checks into
> folio_split_supported()”[1].  The current mm-new, mm-unstable trees
> look good to me.  Sorry if it was not clear.
>
>
> [1] https://lore.kernel.org/linux-mm/20251114075703.10434-1-richard.weiyang@gmail.com/

OK.

^ permalink raw reply	[flat|nested] 32+ messages in thread
end of thread, other threads:[~2025-11-21 19:16 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-06  3:41 [Patch v3 0/2] mm/huge_memory: Define split_type and consolidate split support checks Wei Yang
2025-11-06  3:41 ` [Patch v3 1/2] mm/huge_memory: introduce enum split_type for clarity Wei Yang
2025-11-06 10:17   ` David Hildenbrand (Red Hat)
2025-11-06 14:57     ` Wei Yang
2025-11-07  0:44   ` Zi Yan
2025-11-06  3:41 ` [Patch v3 2/2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
2025-11-06 10:20   ` David Hildenbrand (Red Hat)
2025-11-07  0:46   ` Zi Yan
2025-11-07  1:17     ` Wei Yang
2025-11-07  2:07       ` Zi Yan
2025-11-07  2:49         ` Wei Yang
2025-11-07  3:21           ` Zi Yan
2025-11-07  7:29             ` Wei Yang
2025-11-14  3:03               ` Wei Yang
2025-11-17  1:22                 ` Wei Yang
2025-11-17 15:56                   ` Zi Yan
2025-11-18  2:10                     ` Wei Yang
2025-11-18  3:33                       ` Wei Yang
2025-11-18  4:10                         ` Zi Yan
2025-11-18 18:32                           ` Andrew Morton
2025-11-18 18:55                             ` Zi Yan
2025-11-18 22:06                               ` Andrew Morton
2025-11-19  0:52                                 ` Wei Yang
2025-11-20 21:16                                   ` Andrew Morton
2025-11-21  0:55                                     ` Zi Yan
2025-11-21  9:00                                       ` Wei Yang
2025-11-21 14:59                                         ` Zi Yan
2025-11-21 16:50                                           ` Andrew Morton
2025-11-21 17:00                                             ` Zi Yan
2025-11-21 18:39                                               ` Andrew Morton
2025-11-21 19:09                                                 ` Zi Yan
2025-11-21 19:15                                                   ` Andrew Morton