* [PATCH v2 0/2] Clean up split_huge_pmd_locked() and remove unnecessary folio pointers
@ 2025-04-25 10:38 Gavin Guo
2025-04-25 10:38 ` [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked() Gavin Guo
2025-04-25 10:38 ` [PATCH v2 2/2] mm/huge_memory: Remove useless folio pointers passing Gavin Guo
0 siblings, 2 replies; 9+ messages in thread
From: Gavin Guo @ 2025-04-25 10:38 UTC (permalink / raw)
To: linux-mm, akpm
Cc: gshan, david, willy, ziy, linmiaohe, hughd, revest, kernel-dev,
linux-kernel
This patch series enhances the folio verification by leveraging the
existing page_vma_mapped_walk() mechanism and removes the redundant
passing of folio pointers.
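For a quick picture, the end state of split_huge_pmd_locked() after
both patches looks as follows (condensed from the diffs below; the
folio check is left to page_vma_mapped_walk() in the callers):

	void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
				   pmd_t *pmd, bool freeze)
	{
		VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
		if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
		    is_pmd_migration_entry(*pmd))
			__split_huge_pmd_locked(vma, pmd, address, freeze);
	}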
Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
Gavin Guo (2):
mm/huge_memory: Adjust try_to_migrate_one() and
split_huge_pmd_locked()
mm/huge_memory: Remove useless folio pointers passing
include/linux/huge_mm.h | 15 +++++++--------
mm/huge_memory.c | 37 ++++++++++---------------------------
mm/memory.c | 4 ++--
mm/mprotect.c | 2 +-
mm/rmap.c | 20 ++++++++++----------
5 files changed, 30 insertions(+), 48 deletions(-)
V1 -> V2:
1). Separate the logic into two patches:
- Adjust try_to_migrate_one() and split_huge_pmd_locked()
- Remove useless folio pointers passing
2). Remove the unnecessary comments and braces around the if condition.
base-commit: 02ddfb981de88a2c15621115dd7be2431252c568
prerequisite-patch-id: 9c9c975b11ad0f73acd863049b4f1732caa04e53
--
2.43.0
* [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked()
2025-04-25 10:38 [PATCH v2 0/2] Clean up split_huge_pmd_locked() and remove unnecessary folio pointers Gavin Guo
@ 2025-04-25 10:38 ` Gavin Guo
2025-04-25 11:10 ` Zi Yan
2025-04-27 6:05 ` Baolin Wang
2025-04-25 10:38 ` [PATCH v2 2/2] mm/huge_memory: Remove useless folio pointers passing Gavin Guo
1 sibling, 2 replies; 9+ messages in thread
From: Gavin Guo @ 2025-04-25 10:38 UTC (permalink / raw)
To: linux-mm, akpm
Cc: gshan, david, willy, ziy, linmiaohe, hughd, revest, kernel-dev,
linux-kernel
The split_huge_pmd_locked() function currently performs redundant
checks for migration entries and folio validation that are already
handled by the page_vma_mapped_walk() mechanism in try_to_migrate_one().
Specifically, page_vma_mapped_walk() already ensures that:
- The folio is properly mapped in the given VMA
- pmd_trans_huge, pmd_devmap, and migration entry validation are
performed
To leverage that work, move the TTU_SPLIT_HUGE_PMD handling into the
page_vma_mapped_walk() loop and remove the now-duplicate checks from
split_huge_pmd_locked().
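Roughly, the handling ends up inside the walk loop as follows
(condensed sketch of the hunk below, not the exact diff):

	while (page_vma_mapped_walk(&pvmw)) {
		/* PMD-mapped: pvmw.pte is NULL, pvmw.pmd points at the entry */
		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
			/*
			 * The walk has already verified that this PMD maps the
			 * folio (huge-mapped or as a migration entry), so the
			 * folio re-check formerly done in split_huge_pmd_locked()
			 * is no longer needed.
			 */
			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
					      true, NULL);
			ret = false;
			page_vma_mapped_walk_done(&pvmw);
			break;
		}
		/* PTE-mapped and migration-entry handling continues below */
	}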
Suggested-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
Signed-off-by: Gavin Guo <gavinguo@igalia.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/huge_memory.c | 21 ++-------------------
mm/rmap.c | 18 +++++++++---------
2 files changed, 11 insertions(+), 28 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 47d76d03ce30..485a0ba011af 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3075,27 +3075,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmd, bool freeze, struct folio *folio)
{
- bool pmd_migration = is_pmd_migration_entry(*pmd);
-
- VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
- VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
- VM_BUG_ON(freeze && !folio);
-
- /*
- * When the caller requests to set up a migration entry, we
- * require a folio to check the PMD against. Otherwise, there
- * is a risk of replacing the wrong folio.
- */
- if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || pmd_migration) {
- /*
- * Do not apply pmd_folio() to a migration entry; and folio lock
- * guarantees that it must be of the wrong folio anyway.
- */
- if (folio && (pmd_migration || folio != pmd_folio(*pmd)))
- return;
+ if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+ is_pmd_migration_entry(*pmd))
__split_huge_pmd_locked(vma, pmd, address, freeze);
- }
}
void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
diff --git a/mm/rmap.c b/mm/rmap.c
index 67bb273dfb80..b53a4dcaeaae 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2291,13 +2291,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
if (flags & TTU_SYNC)
pvmw.flags = PVMW_SYNC;
- /*
- * unmap_page() in mm/huge_memory.c is the only user of migration with
- * TTU_SPLIT_HUGE_PMD and it wants to freeze.
- */
- if (flags & TTU_SPLIT_HUGE_PMD)
- split_huge_pmd_address(vma, address, true, folio);
-
/*
* For THP, we have to assume the worse case ie pmd for invalidation.
* For hugetlb, it could be much worse if we need to do pud
@@ -2323,9 +2316,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
mmu_notifier_invalidate_range_start(&range);
while (page_vma_mapped_walk(&pvmw)) {
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
/* PMD-mapped THP migration entry */
if (!pvmw.pte) {
+ if (flags & TTU_SPLIT_HUGE_PMD) {
+ split_huge_pmd_locked(vma, pvmw.address,
+ pvmw.pmd, true, NULL);
+ ret = false;
+ page_vma_mapped_walk_done(&pvmw);
+ break;
+ }
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
subpage = folio_page(folio,
pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
@@ -2337,8 +2337,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
break;
}
continue;
- }
#endif
+ }
/* Unexpected PMD-mapped THP? */
VM_BUG_ON_FOLIO(!pvmw.pte, folio);
--
2.43.0
* [PATCH v2 2/2] mm/huge_memory: Remove useless folio pointers passing
2025-04-25 10:38 [PATCH v2 0/2] Clean up split_huge_pmd_locked() and remove unnecessary folio pointers Gavin Guo
2025-04-25 10:38 ` [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked() Gavin Guo
@ 2025-04-25 10:38 ` Gavin Guo
2025-04-25 11:40 ` Zi Yan
2025-04-27 6:06 ` Baolin Wang
1 sibling, 2 replies; 9+ messages in thread
From: Gavin Guo @ 2025-04-25 10:38 UTC (permalink / raw)
To: linux-mm, akpm
Cc: gshan, david, willy, ziy, linmiaohe, hughd, revest, kernel-dev,
linux-kernel
Since the previous commit "mm/huge_memory: Adjust try_to_migrate_one() and
split_huge_pmd_locked()" has simplified the logic by leveraging the
folio verification in page_vma_mapped_walk(), this patch removes the
now-unnecessary passing of folio pointers.
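At the call sites this simply drops the trailing folio argument, e.g.
(illustrative, matching the hunks below):

	/* before */
	__split_huge_pmd(vma, vmf->pmd, vmf->address, false, NULL);
	split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, false, folio);

	/* after */
	__split_huge_pmd(vma, vmf->pmd, vmf->address, false);
	split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, false);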
Suggested-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
Signed-off-by: Gavin Guo <gavinguo@igalia.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
include/linux/huge_mm.h | 15 +++++++--------
mm/huge_memory.c | 16 ++++++++--------
mm/memory.c | 4 ++--
mm/mprotect.c | 2 +-
mm/rmap.c | 4 ++--
5 files changed, 20 insertions(+), 21 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e893d546a49f..01a6d998d212 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -395,7 +395,7 @@ static inline int split_huge_page(struct page *page)
void deferred_split_folio(struct folio *folio, bool partially_mapped);
void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
- unsigned long address, bool freeze, struct folio *folio);
+ unsigned long address, bool freeze);
#define split_huge_pmd(__vma, __pmd, __address) \
do { \
@@ -403,12 +403,11 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
if (is_swap_pmd(*____pmd) || pmd_trans_huge(*____pmd) \
|| pmd_devmap(*____pmd)) \
__split_huge_pmd(__vma, __pmd, __address, \
- false, NULL); \
+ false); \
} while (0)
-
void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
- bool freeze, struct folio *folio);
+ bool freeze);
void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
unsigned long address);
@@ -503,7 +502,7 @@ static inline bool thp_migration_supported(void)
}
void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
- pmd_t *pmd, bool freeze, struct folio *folio);
+ pmd_t *pmd, bool freeze);
bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmdp, struct folio *folio);
@@ -578,12 +577,12 @@ static inline void deferred_split_folio(struct folio *folio, bool partially_mapp
do { } while (0)
static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
- unsigned long address, bool freeze, struct folio *folio) {}
+ unsigned long address, bool freeze) {}
static inline void split_huge_pmd_address(struct vm_area_struct *vma,
- unsigned long address, bool freeze, struct folio *folio) {}
+ unsigned long address, bool freeze) {}
static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
unsigned long address, pmd_t *pmd,
- bool freeze, struct folio *folio) {}
+ bool freeze) {}
static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmdp,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 485a0ba011af..7d292693c18e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1786,7 +1786,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pte_free(dst_mm, pgtable);
spin_unlock(src_ptl);
spin_unlock(dst_ptl);
- __split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
+ __split_huge_pmd(src_vma, src_pmd, addr, false);
return -EAGAIN;
}
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -2008,7 +2008,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
folio_unlock(folio);
spin_unlock(vmf->ptl);
fallback:
- __split_huge_pmd(vma, vmf->pmd, vmf->address, false, NULL);
+ __split_huge_pmd(vma, vmf->pmd, vmf->address, false);
return VM_FAULT_FALLBACK;
}
@@ -3073,7 +3073,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
}
void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
- pmd_t *pmd, bool freeze, struct folio *folio)
+ pmd_t *pmd, bool freeze)
{
VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
@@ -3082,7 +3082,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
}
void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
- unsigned long address, bool freeze, struct folio *folio)
+ unsigned long address, bool freeze)
{
spinlock_t *ptl;
struct mmu_notifier_range range;
@@ -3092,20 +3092,20 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
mmu_notifier_invalidate_range_start(&range);
ptl = pmd_lock(vma->vm_mm, pmd);
- split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
+ split_huge_pmd_locked(vma, range.start, pmd, freeze);
spin_unlock(ptl);
mmu_notifier_invalidate_range_end(&range);
}
void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
- bool freeze, struct folio *folio)
+ bool freeze)
{
pmd_t *pmd = mm_find_pmd(vma->vm_mm, address);
if (!pmd)
return;
- __split_huge_pmd(vma, pmd, address, freeze, folio);
+ __split_huge_pmd(vma, pmd, address, freeze);
}
static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
@@ -3117,7 +3117,7 @@ static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned
if (!IS_ALIGNED(address, HPAGE_PMD_SIZE) &&
range_in_vma(vma, ALIGN_DOWN(address, HPAGE_PMD_SIZE),
ALIGN(address, HPAGE_PMD_SIZE)))
- split_huge_pmd_address(vma, address, false, NULL);
+ split_huge_pmd_address(vma, address, false);
}
void vma_adjust_trans_huge(struct vm_area_struct *vma,
diff --git a/mm/memory.c b/mm/memory.c
index ba3ea0a82f7f..4f85167baff9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1799,7 +1799,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
next = pmd_addr_end(addr, end);
if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
if (next - addr != HPAGE_PMD_SIZE)
- __split_huge_pmd(vma, pmd, addr, false, NULL);
+ __split_huge_pmd(vma, pmd, addr, false);
else if (zap_huge_pmd(tlb, vma, pmd, addr)) {
addr = next;
continue;
@@ -5892,7 +5892,7 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
split:
/* COW or write-notify handled on pte level: split pmd. */
- __split_huge_pmd(vma, vmf->pmd, vmf->address, false, NULL);
+ __split_huge_pmd(vma, vmf->pmd, vmf->address, false);
return VM_FAULT_FALLBACK;
}
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 62c1f7945741..88608d0dc2c2 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -379,7 +379,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
if (is_swap_pmd(_pmd) || pmd_trans_huge(_pmd) || pmd_devmap(_pmd)) {
if ((next - addr != HPAGE_PMD_SIZE) ||
pgtable_split_needed(vma, cp_flags)) {
- __split_huge_pmd(vma, pmd, addr, false, NULL);
+ __split_huge_pmd(vma, pmd, addr, false);
/*
* For file-backed, the pmd could have been
* cleared; make sure pmd populated if
diff --git a/mm/rmap.c b/mm/rmap.c
index b53a4dcaeaae..4992005885ef 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1944,7 +1944,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* restart so we can process the PTE-mapped THP.
*/
split_huge_pmd_locked(vma, pvmw.address,
- pvmw.pmd, false, folio);
+ pvmw.pmd, false);
flags &= ~TTU_SPLIT_HUGE_PMD;
page_vma_mapped_walk_restart(&pvmw);
continue;
@@ -2320,7 +2320,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
if (!pvmw.pte) {
if (flags & TTU_SPLIT_HUGE_PMD) {
split_huge_pmd_locked(vma, pvmw.address,
- pvmw.pmd, true, NULL);
+ pvmw.pmd, true);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
--
2.43.0
* Re: [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked()
2025-04-25 10:38 ` [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked() Gavin Guo
@ 2025-04-25 11:10 ` Zi Yan
2025-04-25 11:23 ` David Hildenbrand
2025-04-27 6:05 ` Baolin Wang
1 sibling, 1 reply; 9+ messages in thread
From: Zi Yan @ 2025-04-25 11:10 UTC (permalink / raw)
To: Gavin Guo
Cc: linux-mm, akpm, gshan, david, willy, linmiaohe, hughd, revest,
kernel-dev, linux-kernel
On 25 Apr 2025, at 6:38, Gavin Guo wrote:
> The split_huge_pmd_locked function currently performs redundant checks
> for migration entries and folio validation that are already handled by
> the page_vma_mapped_walk mechanism in try_to_migrate_one.
>
> Specifically, page_vma_mapped_walk already ensures that:
> - The folio is properly mapped in the given VMA area
> - pmd_trans_huge, pmd_devmap, and migration entry validation are
> performed
>
> To leverage page_vma_mapped_walk's work, moving TTU_SPLIT_HUGE_PMD
> handling to the while loop checking and removing these duplicate checks
> from split_huge_pmd_locked.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
> Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
> Signed-off-by: Gavin Guo <gavinguo@igalia.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---
> mm/huge_memory.c | 21 ++-------------------
> mm/rmap.c | 18 +++++++++---------
> 2 files changed, 11 insertions(+), 28 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 47d76d03ce30..485a0ba011af 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3075,27 +3075,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
> pmd_t *pmd, bool freeze, struct folio *folio)
> {
> - bool pmd_migration = is_pmd_migration_entry(*pmd);
> -
> - VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
> VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
> - VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
> - VM_BUG_ON(freeze && !folio);
> -
> - /*
> - * When the caller requests to set up a migration entry, we
> - * require a folio to check the PMD against. Otherwise, there
> - * is a risk of replacing the wrong folio.
> - */
> - if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || pmd_migration) {
> - /*
> - * Do not apply pmd_folio() to a migration entry; and folio lock
> - * guarantees that it must be of the wrong folio anyway.
> - */
> - if (folio && (pmd_migration || folio != pmd_folio(*pmd)))
> - return;
> + if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
> + is_pmd_migration_entry(*pmd))
> __split_huge_pmd_locked(vma, pmd, address, freeze);
> - }
> }
>
> void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 67bb273dfb80..b53a4dcaeaae 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2291,13 +2291,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> if (flags & TTU_SYNC)
> pvmw.flags = PVMW_SYNC;
>
> - /*
> - * unmap_page() in mm/huge_memory.c is the only user of migration with
> - * TTU_SPLIT_HUGE_PMD and it wants to freeze.
> - */
> - if (flags & TTU_SPLIT_HUGE_PMD)
> - split_huge_pmd_address(vma, address, true, folio);
> -
> /*
> * For THP, we have to assume the worse case ie pmd for invalidation.
> * For hugetlb, it could be much worse if we need to do pud
> @@ -2323,9 +2316,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> mmu_notifier_invalidate_range_start(&range);
>
> while (page_vma_mapped_walk(&pvmw)) {
> -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> /* PMD-mapped THP migration entry */
This comment should be moved along with the #ifdef to avoid confusion.
> if (!pvmw.pte) {
> + if (flags & TTU_SPLIT_HUGE_PMD) {
> + split_huge_pmd_locked(vma, pvmw.address,
> + pvmw.pmd, true, NULL);
> + ret = false;
> + page_vma_mapped_walk_done(&pvmw);
> + break;
> + }
> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> subpage = folio_page(folio,
> pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
> VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
> @@ -2337,8 +2337,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> break;
> }
> continue;
> - }
> #endif
I wonder if we need a WARN here to make sure when THP migration support is not
present all PMDs are split in try_to_migrate_one().
> + }
>
> /* Unexpected PMD-mapped THP? */
> VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> --
> 2.43.0
Otherwise, looks good to me. Thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
* Re: [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked()
2025-04-25 11:10 ` Zi Yan
@ 2025-04-25 11:23 ` David Hildenbrand
2025-04-25 11:39 ` Zi Yan
0 siblings, 1 reply; 9+ messages in thread
From: David Hildenbrand @ 2025-04-25 11:23 UTC (permalink / raw)
To: Zi Yan, Gavin Guo
Cc: linux-mm, akpm, gshan, willy, linmiaohe, hughd, revest,
kernel-dev, linux-kernel
On 25.04.25 13:10, Zi Yan wrote:
> On 25 Apr 2025, at 6:38, Gavin Guo wrote:
>
>> The split_huge_pmd_locked function currently performs redundant checks
>> for migration entries and folio validation that are already handled by
>> the page_vma_mapped_walk mechanism in try_to_migrate_one.
>>
>> Specifically, page_vma_mapped_walk already ensures that:
>> - The folio is properly mapped in the given VMA area
>> - pmd_trans_huge, pmd_devmap, and migration entry validation are
>> performed
>>
>> To leverage page_vma_mapped_walk's work, moving TTU_SPLIT_HUGE_PMD
>> handling to the while loop checking and removing these duplicate checks
>> from split_huge_pmd_locked.
>>
>> Suggested-by: David Hildenbrand <david@redhat.com>
>> Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
>> Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
>> Signed-off-by: Gavin Guo <gavinguo@igalia.com>
>> Acked-by: David Hildenbrand <david@redhat.com>
>> ---
>
> I wonder if we need a WARN here to make sure when THP migration support is not
> present all PMDs are split in try_to_migrate_one().
Can you elaborate on the condition you have in mind?
If we have TTU_SPLIT_HUGE_PMD set, we'll never reach that point.
Without CONFIG_ARCH_ENABLE_THP_MIGRATION, we should be running into the
VM_BUG_ON_FOLIO(!pvmw.pte, folio);
right?
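For reference, the relevant flow after patch 1, condensed (a sketch,
not a verbatim excerpt of try_to_migrate_one()):

	while (page_vma_mapped_walk(&pvmw)) {
		if (!pvmw.pte) {
			if (flags & TTU_SPLIT_HUGE_PMD) {
				split_huge_pmd_locked(vma, pvmw.address,
						      pvmw.pmd, true, NULL);
				ret = false;
				page_vma_mapped_walk_done(&pvmw);
				break;	/* never reaches the BUG_ON below */
			}
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
			/* set up a PMD migration entry, then continue */
			continue;
#endif
		}
		/* without THP migration, an unsplit PMD mapping lands here */
		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
	}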
--
Cheers,
David / dhildenb
* Re: [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked()
2025-04-25 11:23 ` David Hildenbrand
@ 2025-04-25 11:39 ` Zi Yan
0 siblings, 0 replies; 9+ messages in thread
From: Zi Yan @ 2025-04-25 11:39 UTC (permalink / raw)
To: David Hildenbrand
Cc: Gavin Guo, linux-mm, akpm, gshan, willy, linmiaohe, hughd,
revest, kernel-dev, linux-kernel
On 25 Apr 2025, at 7:23, David Hildenbrand wrote:
> On 25.04.25 13:10, Zi Yan wrote:
>> On 25 Apr 2025, at 6:38, Gavin Guo wrote:
>>
>>> The split_huge_pmd_locked function currently performs redundant checks
>>> for migration entries and folio validation that are already handled by
>>> the page_vma_mapped_walk mechanism in try_to_migrate_one.
>>>
>>> Specifically, page_vma_mapped_walk already ensures that:
>>> - The folio is properly mapped in the given VMA area
>>> - pmd_trans_huge, pmd_devmap, and migration entry validation are
>>> performed
>>>
>>> To leverage page_vma_mapped_walk's work, moving TTU_SPLIT_HUGE_PMD
>>> handling to the while loop checking and removing these duplicate checks
>>> from split_huge_pmd_locked.
>>>
>>> Suggested-by: David Hildenbrand <david@redhat.com>
>>> Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
>>> Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
>>> Signed-off-by: Gavin Guo <gavinguo@igalia.com>
>>> Acked-by: David Hildenbrand <david@redhat.com>
>>> ---
>
>>
>> I wonder if we need a WARN here to make sure when THP migration support is not
>> present all PMDs are split in try_to_migrate_one().
>
> Can you elaborate on the condition you have in mind?
>
> If we have TTU_SPLIT_HUGE_PMD set, we'll never reach that point.
>
> Without CONFIG_ARCH_ENABLE_THP_MIGRATION, we should be running into the
> VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>
> right?
Right. Missed that code, which is right at the bottom. Sorry about that.
Thank you for pointing this out.
OK, please disregard my comments. The patch is good in current form.
--
Best Regards,
Yan, Zi
* Re: [PATCH v2 2/2] mm/huge_memory: Remove useless folio pointers passing
2025-04-25 10:38 ` [PATCH v2 2/2] mm/huge_memory: Remove useless folio pointers passing Gavin Guo
@ 2025-04-25 11:40 ` Zi Yan
2025-04-27 6:06 ` Baolin Wang
1 sibling, 0 replies; 9+ messages in thread
From: Zi Yan @ 2025-04-25 11:40 UTC (permalink / raw)
To: Gavin Guo
Cc: linux-mm, akpm, gshan, david, willy, linmiaohe, hughd, revest,
kernel-dev, linux-kernel
On 25 Apr 2025, at 6:38, Gavin Guo wrote:
> Since the previous commit "mm/huge_memory: Adjust try_to_migrate_one() and
> split_huge_pmd_locked()" has simplified the logic by leveraging the
> folio verification in page_vma_mapped_walk(), this patch removes the
> unnecessary folio pointers passing.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
> Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
> Signed-off-by: Gavin Guo <gavinguo@igalia.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---
> include/linux/huge_mm.h | 15 +++++++--------
> mm/huge_memory.c | 16 ++++++++--------
> mm/memory.c | 4 ++--
> mm/mprotect.c | 2 +-
> mm/rmap.c | 4 ++--
> 5 files changed, 20 insertions(+), 21 deletions(-)
>
LGTM. Thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
* Re: [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked()
2025-04-25 10:38 ` [PATCH v2 1/2] mm/huge_memory: Adjust try_to_migrate_one() and split_huge_pmd_locked() Gavin Guo
2025-04-25 11:10 ` Zi Yan
@ 2025-04-27 6:05 ` Baolin Wang
1 sibling, 0 replies; 9+ messages in thread
From: Baolin Wang @ 2025-04-27 6:05 UTC (permalink / raw)
To: Gavin Guo, linux-mm, akpm
Cc: gshan, david, willy, ziy, linmiaohe, hughd, revest, kernel-dev,
linux-kernel
On 2025/4/25 18:38, Gavin Guo wrote:
> The split_huge_pmd_locked function currently performs redundant checks
> for migration entries and folio validation that are already handled by
> the page_vma_mapped_walk mechanism in try_to_migrate_one.
>
> Specifically, page_vma_mapped_walk already ensures that:
> - The folio is properly mapped in the given VMA area
> - pmd_trans_huge, pmd_devmap, and migration entry validation are
> performed
>
> To leverage page_vma_mapped_walk's work, moving TTU_SPLIT_HUGE_PMD
> handling to the while loop checking and removing these duplicate checks
> from split_huge_pmd_locked.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
> Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
> Signed-off-by: Gavin Guo <gavinguo@igalia.com>
> Acked-by: David Hildenbrand <david@redhat.com>
LGTM.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
* Re: [PATCH v2 2/2] mm/huge_memory: Remove useless folio pointers passing
2025-04-25 10:38 ` [PATCH v2 2/2] mm/huge_memory: Remove useless folio pointers passing Gavin Guo
2025-04-25 11:40 ` Zi Yan
@ 2025-04-27 6:06 ` Baolin Wang
1 sibling, 0 replies; 9+ messages in thread
From: Baolin Wang @ 2025-04-27 6:06 UTC (permalink / raw)
To: Gavin Guo, linux-mm, akpm
Cc: gshan, david, willy, ziy, linmiaohe, hughd, revest, kernel-dev,
linux-kernel
On 2025/4/25 18:38, Gavin Guo wrote:
> Since the previous commit "mm/huge_memory: Adjust try_to_migrate_one() and
> split_huge_pmd_locked()" has simplified the logic by leveraging the
> folio verification in page_vma_mapped_walk(), this patch removes the
> unnecessary folio pointers passing.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
> Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
> Signed-off-by: Gavin Guo <gavinguo@igalia.com>
> Acked-by: David Hildenbrand <david@redhat.com>
LGTM.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>