linux-mm.kvack.org archive mirror
* Re: [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare()
@ 2025-10-13  9:33 Deepanshu Kartikey
  2025-10-13  9:54 ` David Hildenbrand
  0 siblings, 1 reply; 8+ messages in thread
From: Deepanshu Kartikey @ 2025-10-13  9:33 UTC (permalink / raw)
  To: muchun.song, osalvador, akpm, broonie, david
  Cc: linux-mm, linux-kernel, syzbot+f26d7c75c26ec19790e7

Hi David,

That makes a lot of sense - moving the assertions after the early return 
checks is cleaner since the locks are only needed when actual unsharing 
work happens.

Should I send a v5 with your suggested change?

Thanks,
Deepanshu



* Re: [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare()
  2025-10-13  9:33 [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare() Deepanshu Kartikey
@ 2025-10-13  9:54 ` David Hildenbrand
  2025-10-13 13:12   ` Oscar Salvador
  0 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2025-10-13  9:54 UTC (permalink / raw)
  To: Deepanshu Kartikey, muchun.song, osalvador, akpm, broonie
  Cc: linux-mm, linux-kernel, syzbot+f26d7c75c26ec19790e7

On 13.10.25 11:33, Deepanshu Kartikey wrote:
> Hi David,
> 
> That makes a lot of sense - moving the assertions after the early return
> checks is cleaner since the locks are only needed when actual unsharing
> work happens.
> 
> Should I send a v5 with your suggested change?

Let's wait and see if the hugetlb maintainers have any preference.

-- 
Cheers

David / dhildenb




* Re: [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare()
  2025-10-13  9:54 ` David Hildenbrand
@ 2025-10-13 13:12   ` Oscar Salvador
  2025-10-13 14:22     ` Deepanshu Kartikey
  0 siblings, 1 reply; 8+ messages in thread
From: Oscar Salvador @ 2025-10-13 13:12 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Deepanshu Kartikey, muchun.song, akpm, broonie, linux-mm,
	linux-kernel, syzbot+f26d7c75c26ec19790e7

On Mon, Oct 13, 2025 at 11:54:00AM +0200, David Hildenbrand wrote:
> On 13.10.25 11:33, Deepanshu Kartikey wrote:
> > Hi David,
> > 
> > That makes a lot of sense - moving the assertions after the early return
> > checks is cleaner since the locks are only needed when actual unsharing
> > work happens.
> > 
> > Should I send a v5 with your suggested change?
> 
> Let's wait and see if the hugetlb maintainers have any preference.

Yes, now that I look again I think your suggestion makes more sense and
it's much cleaner :-)

-- 
Oscar Salvador
SUSE Labs



* Re: [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare()
  2025-10-13 13:12   ` Oscar Salvador
@ 2025-10-13 14:22     ` Deepanshu Kartikey
  0 siblings, 0 replies; 8+ messages in thread
From: Deepanshu Kartikey @ 2025-10-13 14:22 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: David Hildenbrand, muchun.song, akpm, broonie, linux-mm,
	linux-kernel, syzbot+f26d7c75c26ec19790e7

Hi Oscar and David,

Since I've been working through the iterations on this fix, would it be
okay if I send v5 with David's suggested change? I'd like to see this
through to completion.

Thanks,
Deepanshu




* Re: [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare()
  2025-10-08  5:27 ` [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare() Deepanshu Kartikey
  2025-10-13  8:09   ` Oscar Salvador
@ 2025-10-13  8:27   ` David Hildenbrand
  1 sibling, 0 replies; 8+ messages in thread
From: David Hildenbrand @ 2025-10-13  8:27 UTC (permalink / raw)
  To: Deepanshu Kartikey, muchun.song, osalvador, akpm, broonie
  Cc: linux-mm, linux-kernel, syzbot+f26d7c75c26ec19790e7

On 08.10.25 07:27, Deepanshu Kartikey wrote:
> When hugetlb_vmdelete_list() processes VMAs during truncate operations,
> it may encounter VMAs where huge_pmd_unshare() is called without the
> required shareable lock. This triggers an assertion failure in
> hugetlb_vma_assert_locked().
> 
> The previous fix in commit dd83609b8898 ("hugetlbfs: skip VMAs without
> shareable locks in hugetlb_vmdelete_list") skipped entire VMAs without
> shareable locks to avoid the assertion. However, this prevented pages
> from being unmapped and freed, causing a regression in fallocate(PUNCH_HOLE)
> operations where pages were not freed immediately, as reported by Mark Brown.
> 
> Instead of skipping VMAs or adding new flags, check __vma_shareable_lock()
> directly in __unmap_hugepage_range() right before calling huge_pmd_unshare().
> This ensures PMD unsharing only happens when the VMA has a shareable lock
> structure, while still allowing page unmapping and freeing to proceed for
> all VMAs.
> 
> Reported-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
> Tested-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
> Reported-by: Mark Brown <broonie@kernel.org>
> Fixes: dd83609b8898 ("hugetlbfs: skip VMAs without shareable locks in hugetlb_vmdelete_list")
> Suggested-by: Oscar Salvador <osalvador@suse.de>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Link: https://lore.kernel.org/mm-commits/20250925203504.7BE02C4CEF7@smtp.kernel.org/ [v1]
> Link: https://lore.kernel.org/mm-commits/20250928185232.BEDB6C4CEF0@smtp.kernel.org/ [v2]
> Link: https://lore.kernel.org/linux-mm/20251003174553.3078839-1-kartikey406@gmail.com/ [v3]
> Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
> ---
> Changes in v4:
> - Simplified approach per Oscar's suggestion: check __vma_shareable_lock()
>    directly in __unmap_hugepage_range() before calling huge_pmd_unshare()
> - Removed ZAP_FLAG_NO_UNSHARE flag per David's feedback to avoid polluting
>    generic mm.h header
> - Reverted hugetlb_vmdelete_list() to not skip VMAs
> 
> Changes in v3:
> - Added ZAP_FLAG_NO_UNSHARE to skip only PMD unsharing, not entire VMA
> 
> Changes in v2:
> - Skip entire VMAs without shareable locks in hugetlb_vmdelete_list()
>    (caused PUNCH_HOLE regression)
> 
> Changes in v1:
> - Initial fix attempt
> ---
>   fs/hugetlbfs/inode.c | 10 +---------
>   mm/hugetlb.c         |  2 +-
>   2 files changed, 2 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 9c94ed8c3ab0..1e040db18b20 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -478,14 +478,6 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>   		if (!hugetlb_vma_trylock_write(vma))
>   			continue;
>   
> -		/*
> -		 * Skip VMAs without shareable locks. Per the design in commit
> -		 * 40549ba8f8e0, these will be handled by remove_inode_hugepages()
> -		 * called after this function with proper locking.
> -		 */
> -		if (!__vma_shareable_lock(vma))
> -			goto skip;
> -
>   		v_start = vma_offset_start(vma, start);
>   		v_end = vma_offset_end(vma, end);
>   
> @@ -496,7 +488,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>   		 * vmas.  Therefore, lock is not held when calling
>   		 * unmap_hugepage_range for private vmas.
>   		 */
> -skip:
> +
>   		hugetlb_vma_unlock_write(vma);
>   	}
>   }
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6cac826cb61f..9ed85ab8420e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5885,7 +5885,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   		}
>   
>   		ptl = huge_pte_lock(h, mm, ptep);
> -		if (huge_pmd_unshare(mm, vma, address, ptep)) {
> +		if (__vma_shareable_lock(vma) && huge_pmd_unshare(mm, vma, address, ptep)) {
>   			spin_unlock(ptl);
>   			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
>   			force_flush = true;

Wondering, couldn't we handle that in huge_pmd_unshare()?

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index eed59cfb5d218..f167cec4a5acc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7598,13 +7598,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
         p4d_t *p4d = p4d_offset(pgd, addr);
         pud_t *pud = pud_offset(p4d, addr);
  
-       i_mmap_assert_write_locked(vma->vm_file->f_mapping);
-       hugetlb_vma_assert_locked(vma);
         if (sz != PMD_SIZE)
                 return 0;
         if (!ptdesc_pmd_pts_count(virt_to_ptdesc(ptep)))
                 return 0;
  
+       i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+       hugetlb_vma_assert_locked(vma);
+
         pud_clear(pud);
         /*
          * Once our caller drops the rmap lock, some other process might be

-- 
Cheers

David / dhildenb
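
Applied to the function, the reordering David sketches above would leave
huge_pmd_unshare() looking roughly like this. This is a sketch assembled
from the diff; the locals (in particular how sz is derived) and the tail
of the function are abbreviated from memory rather than quoted from this
thread:

int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
		     unsigned long addr, pte_t *ptep)
{
	unsigned long sz = huge_page_size(hstate_vma(vma));	/* assumed */
	pgd_t *pgd = pgd_offset(mm, addr);
	p4d_t *p4d = p4d_offset(pgd, addr);
	pud_t *pud = pud_offset(p4d, addr);

	/* Early returns: nothing is shared here, so no locks are needed. */
	if (sz != PMD_SIZE)
		return 0;
	if (!ptdesc_pmd_pts_count(virt_to_ptdesc(ptep)))
		return 0;

	/* Assert only once we know actual unsharing work will happen. */
	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
	hugetlb_vma_assert_locked(vma);

	pud_clear(pud);
	/* ... remainder of the function unchanged ... */
}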




* Re: [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare()
  2025-10-08  5:27 ` [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare() Deepanshu Kartikey
@ 2025-10-13  8:09   ` Oscar Salvador
  2025-10-13  8:27   ` David Hildenbrand
  1 sibling, 0 replies; 8+ messages in thread
From: Oscar Salvador @ 2025-10-13  8:09 UTC (permalink / raw)
  To: Deepanshu Kartikey
  Cc: muchun.song, david, akpm, broonie, linux-mm, linux-kernel,
	syzbot+f26d7c75c26ec19790e7

On Wed, Oct 08, 2025 at 10:57:59AM +0530, Deepanshu Kartikey wrote:
> When hugetlb_vmdelete_list() processes VMAs during truncate operations,
> it may encounter VMAs where huge_pmd_unshare() is called without the
> required shareable lock. This triggers an assertion failure in
> hugetlb_vma_assert_locked().
> 
> The previous fix in commit dd83609b8898 ("hugetlbfs: skip VMAs without
> shareable locks in hugetlb_vmdelete_list") skipped entire VMAs without
> shareable locks to avoid the assertion. However, this prevented pages
> from being unmapped and freed, causing a regression in fallocate(PUNCH_HOLE)
> operations where pages were not freed immediately, as reported by Mark Brown.
> 
> Instead of skipping VMAs or adding new flags, check __vma_shareable_lock()
> directly in __unmap_hugepage_range() right before calling huge_pmd_unshare().
> This ensures PMD unsharing only happens when the VMA has a shareable lock
> structure, while still allowing page unmapping and freeing to proceed for
> all VMAs.
> 
> Reported-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
> Tested-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
> Reported-by: Mark Brown <broonie@kernel.org>
> Fixes: dd83609b8898 ("hugetlbfs: skip VMAs without shareable locks in hugetlb_vmdelete_list")
> Suggested-by: Oscar Salvador <osalvador@suse.de>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Link: https://lore.kernel.org/mm-commits/20250925203504.7BE02C4CEF7@smtp.kernel.org/ [v1]
> Link: https://lore.kernel.org/mm-commits/20250928185232.BEDB6C4CEF0@smtp.kernel.org/ [v2]
> Link: https://lore.kernel.org/linux-mm/20251003174553.3078839-1-kartikey406@gmail.com/ [v3]
> Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>

Acked-by: Oscar Salvador <osalvador@suse.de>

 

-- 
Oscar Salvador
SUSE Labs
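
As background for the regression mentioned in the quoted commit message
(pages not freed immediately on fallocate(PUNCH_HOLE)), a minimal
userspace sketch of the scenario looks roughly like the following. The
hugetlbfs path and the 2 MiB page size are assumptions for illustration,
error handling is omitted, and a private mapping is used because, per the
comments in the patch context, private hugetlb VMAs are a case without a
shareable vma lock:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumes 2 MiB huge pages */

int main(void)
{
	/* hypothetical hugetlbfs mount point */
	int fd = open("/dev/hugepages/punch-test", O_CREAT | O_RDWR, 0600);
	ftruncate(fd, HPAGE_SIZE);

	/* Private hugetlb mappings do not get the shareable vma lock. */
	char *p = mmap(NULL, HPAGE_SIZE, PROT_READ, MAP_PRIVATE, fd, 0);
	volatile char c = p[0];		/* read fault maps the huge page */
	(void)c;

	/*
	 * Punch the page back out.  The regression described above was that
	 * the page was not unmapped and freed immediately at this point;
	 * with the v4 change the range is unmapped and the page is freed.
	 */
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		  0, HPAGE_SIZE);

	munmap(p, HPAGE_SIZE);
	close(fd);
	unlink("/dev/hugepages/punch-test");
	return 0;
}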



* [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare()
  2025-10-03 17:45 [PATCH v3] hugetlbfs: skip PMD unsharing when shareable lock unavailable Deepanshu Kartikey
@ 2025-10-08  5:27 ` Deepanshu Kartikey
  2025-10-13  8:09   ` Oscar Salvador
  2025-10-13  8:27   ` David Hildenbrand
  0 siblings, 2 replies; 8+ messages in thread
From: Deepanshu Kartikey @ 2025-10-08  5:27 UTC (permalink / raw)
  To: muchun.song, osalvador, david, akpm, broonie
  Cc: linux-mm, linux-kernel, Deepanshu Kartikey, syzbot+f26d7c75c26ec19790e7

When hugetlb_vmdelete_list() processes VMAs during truncate operations,
it may encounter VMAs where huge_pmd_unshare() is called without the
required shareable lock. This triggers an assertion failure in
hugetlb_vma_assert_locked().

The previous fix in commit dd83609b8898 ("hugetlbfs: skip VMAs without
shareable locks in hugetlb_vmdelete_list") skipped entire VMAs without
shareable locks to avoid the assertion. However, this prevented pages
from being unmapped and freed, causing a regression in fallocate(PUNCH_HOLE)
operations where pages were not freed immediately, as reported by Mark Brown.

Instead of skipping VMAs or adding new flags, check __vma_shareable_lock()
directly in __unmap_hugepage_range() right before calling huge_pmd_unshare().
This ensures PMD unsharing only happens when the VMA has a shareable lock
structure, while still allowing page unmapping and freeing to proceed for
all VMAs.

Reported-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
Tested-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
Reported-by: Mark Brown <broonie@kernel.org>
Fixes: dd83609b8898 ("hugetlbfs: skip VMAs without shareable locks in hugetlb_vmdelete_list")
Suggested-by: Oscar Salvador <osalvador@suse.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/mm-commits/20250925203504.7BE02C4CEF7@smtp.kernel.org/ [v1]
Link: https://lore.kernel.org/mm-commits/20250928185232.BEDB6C4CEF0@smtp.kernel.org/ [v2]
Link: https://lore.kernel.org/linux-mm/20251003174553.3078839-1-kartikey406@gmail.com/ [v3]
Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
---
Changes in v4:
- Simplified approach per Oscar's suggestion: check __vma_shareable_lock()
  directly in __unmap_hugepage_range() before calling huge_pmd_unshare()
- Removed ZAP_FLAG_NO_UNSHARE flag per David's feedback to avoid polluting
  generic mm.h header
- Reverted hugetlb_vmdelete_list() to not skip VMAs

Changes in v3:
- Added ZAP_FLAG_NO_UNSHARE to skip only PMD unsharing, not entire VMA

Changes in v2:
- Skip entire VMAs without shareable locks in hugetlb_vmdelete_list()
  (caused PUNCH_HOLE regression)

Changes in v1:
- Initial fix attempt
---
 fs/hugetlbfs/inode.c | 10 +---------
 mm/hugetlb.c         |  2 +-
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9c94ed8c3ab0..1e040db18b20 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -478,14 +478,6 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		if (!hugetlb_vma_trylock_write(vma))
 			continue;
 
-		/*
-		 * Skip VMAs without shareable locks. Per the design in commit
-		 * 40549ba8f8e0, these will be handled by remove_inode_hugepages()
-		 * called after this function with proper locking.
-		 */
-		if (!__vma_shareable_lock(vma))
-			goto skip;
-
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
 
@@ -496,7 +488,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		 * vmas.  Therefore, lock is not held when calling
 		 * unmap_hugepage_range for private vmas.
 		 */
-skip:
+
 		hugetlb_vma_unlock_write(vma);
 	}
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6cac826cb61f..9ed85ab8420e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5885,7 +5885,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		}
 
 		ptl = huge_pte_lock(h, mm, ptep);
-		if (huge_pmd_unshare(mm, vma, address, ptep)) {
+		if (__vma_shareable_lock(vma) && huge_pmd_unshare(mm, vma, address, ptep)) {
 			spin_unlock(ptl);
 			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
 			force_flush = true;
-- 
2.43.0
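
The helper the new check relies on, __vma_shareable_lock(), is not shown
in the diff above; in recent kernels it is roughly the following
(paraphrased from memory rather than quoted from this thread):

/* Approximate shape of the helper in include/linux/hugetlb.h. */
static inline bool __vma_shareable_lock(struct vm_area_struct *vma)
{
	/* True only for shared VMAs that actually have a vma lock allocated. */
	return vma->vm_flags & VM_MAYSHARE && vma->vm_private_data;
}

In other words, the added __vma_shareable_lock(vma) && condition
short-circuits for private hugetlb VMAs (and any shared VMA whose lock was
never allocated), which are exactly the cases that used to trip
hugetlb_vma_assert_locked().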




end of thread (newest: 2025-10-13 14:23 UTC)

Thread overview: 8+ messages
2025-10-13  9:33 [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare() Deepanshu Kartikey
2025-10-13  9:54 ` David Hildenbrand
2025-10-13 13:12   ` Oscar Salvador
2025-10-13 14:22     ` Deepanshu Kartikey
  -- strict thread matches above, loose matches on Subject: below --
2025-10-03 17:45 [PATCH v3] hugetlbfs: skip PMD unsharing when shareable lock unavailable Deepanshu Kartikey
2025-10-08  5:27 ` [PATCH v4] hugetlbfs: check for shareable lock before calling huge_pmd_unshare() Deepanshu Kartikey
2025-10-13  8:09   ` Oscar Salvador
2025-10-13  8:27   ` David Hildenbrand
