From: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
To: Qi Zheng <qi.zheng@linux.dev>
Cc: David Hildenbrand <david@kernel.org>, Zi Yan <ziy@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>,
Lance Yang <lance.yang@linux.dev>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 1/8] mm/huge_memory: simplify vma_is_special_huge()
Date: Thu, 19 Mar 2026 10:39:36 +0000 [thread overview]
Message-ID: <e90ae3af-4d7a-4e46-9bdd-ec42eda18251@lucifer.local> (raw)
In-Reply-To: <cdd47671-c5ac-4170-aa98-cf5e1f09368d@linux.dev>
On Thu, Mar 19, 2026 at 11:16:20AM +0800, Qi Zheng wrote:
>
>
> On 3/19/26 4:39 AM, Lorenzo Stoakes (Oracle) wrote:
> > This function is confused - it overloads the term 'special' yet again,
> > and checks for DAX even though in many cases the code explicitly excludes
> > DAX before invoking the predicate.
> >
> > It also unnecessarily checks for vma->vm_file - this has to be present for
> > a driver to have set VMA_MIXEDMAP_BIT or VMA_PFNMAP_BIT.
> >
> > In fact, a far simpler form of this is to reverse the DAX predicate and
> > return false if DAX is set.
> >
> > This makes sense from the point of view of 'special' as in
> > vm_normal_page(), as DAX actually does potentially have retrievable folios.
> >
> > Also there's no need to have this in mm.h so move it to huge_memory.c.
> >
> > Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> > ---
> > include/linux/huge_mm.h | 4 ++--
> > include/linux/mm.h | 16 ----------------
> > mm/huge_memory.c | 30 +++++++++++++++++++++++-------
> > 3 files changed, 25 insertions(+), 25 deletions(-)
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index bd7f0e1d8094..61fda1672b29 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -83,7 +83,7 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
> > * file is never split and the MAX_PAGECACHE_ORDER limit does not apply to
> > * it. Same to PFNMAPs where there's neither page* nor pagecache.
> > */
> > -#define THP_ORDERS_ALL_SPECIAL \
> > +#define THP_ORDERS_ALL_SPECIAL_DAX \
>
> As mentioned in the comment, the pfnmap case is also included in the
> 'special' case, right?
Yeah, special = pfnmap, mixedmap. So renaming to SPECIAL_DAX makes clear it's
either DAX or 'special' in the meaning of vm_normal_page().
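
To spell out what I mean by the vm_normal_page() sense of 'special', here is a
rough, hand-written sketch - made-up helper name, not the real code in
mm/memory.c - of the idea that predicate encodes:

/*
 * Sketch only: 'special' means the entry may not be backed by a
 * struct page at all.
 */
static struct page *normal_page_sketch(struct vm_area_struct *vma,
				       unsigned long addr, pte_t pte)
{
	/*
	 * VM_PFNMAP / VM_MIXEDMAP mappings may install entries that map
	 * raw PFNs with no struct page behind them - nothing to hand back.
	 */
	if (pte_special(pte))
		return NULL;

	/* An ordinary entry: the PFN is backed by a struct page. */
	return pfn_to_page(pte_pfn(pte));
}

DAX is the odd one out: it has historically been lumped in with the 'special'
mappings, but there are folios we can retrieve, which is exactly why the new
predicate bails out early for DAX.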
>
> > (BIT(PMD_ORDER) | BIT(PUD_ORDER))
> > #define THP_ORDERS_ALL_FILE_DEFAULT \
> > ((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))
> > @@ -92,7 +92,7 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
> > * Mask of all large folio orders supported for THP.
> > */
> > #define THP_ORDERS_ALL \
> > - (THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_SPECIAL | THP_ORDERS_ALL_FILE_DEFAULT)
> > + (THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_SPECIAL_DAX | THP_ORDERS_ALL_FILE_DEFAULT)
> > enum tva_type {
> > TVA_SMAPS, /* Exposing "THPeligible:" in smaps. */
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 6f0a3edb24e1..50d68b092204 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -5077,22 +5077,6 @@ long copy_folio_from_user(struct folio *dst_folio,
> > const void __user *usr_src,
> > bool allow_pagefault);
> > -/**
> > - * vma_is_special_huge - Are transhuge page-table entries considered special?
> > - * @vma: Pointer to the struct vm_area_struct to consider
> > - *
> > - * Whether transhuge page-table entries are considered "special" following
> > - * the definition in vm_normal_page().
> > - *
> > - * Return: true if transhuge page-table entries should be considered special,
> > - * false otherwise.
> > - */
> > -static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
> > -{
> > - return vma_is_dax(vma) || (vma->vm_file &&
> > - (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
> > -}
> > -
> > #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
> > #if MAX_NUMNODES > 1
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 3fc02913b63e..f76edfa91e96 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -100,6 +100,14 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
> > return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
> > }
> > +/* If this returns true, we are unable to access the VMA's folios. */
> > +static bool vma_is_special_huge(struct vm_area_struct *vma)
> > +{
> > + if (vma_is_dax(vma))
> > + return false;
> > + return vma_test_any(vma, VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT);
> > +}
> > +
> > unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
> > vm_flags_t vm_flags,
> > enum tva_type type,
> > @@ -113,8 +121,8 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
> > /* Check the intersection of requested and supported orders. */
> > if (vma_is_anonymous(vma))
> > supported_orders = THP_ORDERS_ALL_ANON;
> > - else if (vma_is_special_huge(vma))
> > - supported_orders = THP_ORDERS_ALL_SPECIAL;
> > + else if (vma_is_dax(vma) || vma_is_special_huge(vma))
> > + supported_orders = THP_ORDERS_ALL_SPECIAL_DAX;
> > else
> > supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
> > @@ -2431,7 +2439,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > tlb->fullmm);
> > arch_check_zapped_pmd(vma, orig_pmd);
> > tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> > - if (!vma_is_dax(vma) && vma_is_special_huge(vma)) {
> > + if (vma_is_special_huge(vma)) {
> > if (arch_needs_pgtable_deposit())
> > zap_deposited_table(tlb->mm, pmd);
> > spin_unlock(ptl);
> > @@ -2933,7 +2941,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > orig_pud = pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
> > arch_check_zapped_pud(vma, orig_pud);
> > tlb_remove_pud_tlb_entry(tlb, pud, addr);
> > - if (!vma_is_dax(vma) && vma_is_special_huge(vma)) {
> > + if (vma_is_special_huge(vma)) {
> > spin_unlock(ptl);
> > /* No zero page support yet */
> > } else {
> > @@ -3084,7 +3092,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> > */
> > if (arch_needs_pgtable_deposit())
> > zap_deposited_table(mm, pmd);
> > - if (!vma_is_dax(vma) && vma_is_special_huge(vma))
> > + if (vma_is_special_huge(vma))
> > return;
> > if (unlikely(pmd_is_migration_entry(old_pmd))) {
> > const softleaf_t old_entry = softleaf_from_pmd(old_pmd);
> > @@ -4645,8 +4653,16 @@ static void split_huge_pages_all(void)
> > static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
> > {
> > - return vma_is_special_huge(vma) || (vma->vm_flags & VM_IO) ||
> > - is_vm_hugetlb_page(vma);
> > + if (vma_is_dax(vma))
> > + return true;
> > + if (vma_is_special_huge(vma))
> > + return true;
> > + if (vma_test(vma, VMA_IO_BIT))
> > + return true;
> > + if (is_vm_hugetlb_page(vma))
> > + return true;
> > +
> > + return false;
> > }
> > static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>
Thread overview: 22+ messages
2026-03-18 20:39 [PATCH 0/8] mm/huge_memory: refactor zap_huge_pmd() Lorenzo Stoakes (Oracle)
2026-03-18 20:39 ` [PATCH 1/8] mm/huge_memory: simplify vma_is_special_huge() Lorenzo Stoakes (Oracle)
2026-03-18 20:45 ` David Hildenbrand (Arm)
2026-03-19 10:34 ` Lorenzo Stoakes (Oracle)
2026-03-19 13:03 ` David Hildenbrand (Arm)
2026-03-19 14:07 ` Lorenzo Stoakes (Oracle)
2026-03-19 3:16 ` Qi Zheng
2026-03-19 10:39 ` Lorenzo Stoakes (Oracle) [this message]
2026-03-18 20:39 ` [PATCH 2/8] mm/huge: avoid big else branch in zap_huge_pmd() Lorenzo Stoakes (Oracle)
2026-03-19 3:26 ` Qi Zheng
2026-03-19 6:38 ` Baolin Wang
2026-03-18 20:39 ` [PATCH 3/8] mm/huge_memory: have zap_huge_pmd return a boolean, add kdoc Lorenzo Stoakes (Oracle)
2026-03-19 3:29 ` Qi Zheng
2026-03-19 6:41 ` Baolin Wang
2026-03-18 20:39 ` [PATCH 4/8] mm/huge_memory: handle buggy PMD entry in zap_huge_pmd() Lorenzo Stoakes (Oracle)
2026-03-19 7:00 ` Baolin Wang
2026-03-19 10:58 ` Lorenzo Stoakes (Oracle)
2026-03-18 20:39 ` [PATCH 5/8] mm/huge_memory: add a common exit path to zap_huge_pmd() Lorenzo Stoakes (Oracle)
2026-03-18 20:39 ` [PATCH 6/8] mm/huge_memory: remove unnecessary VM_BUG_ON_PAGE() Lorenzo Stoakes (Oracle)
2026-03-19 7:12 ` Baolin Wang
2026-03-18 20:39 ` [PATCH 7/8] mm/huge_memory: deduplicate zap deposited table call Lorenzo Stoakes (Oracle)
2026-03-18 20:39 ` [PATCH 8/8] mm/huge_memory: deduplicate zap_huge_pmd() further by tracking state Lorenzo Stoakes (Oracle)