* Re: [PATCH] mm: fix finish_fault() handling for large folios
2025-02-26 11:48 [PATCH] mm: fix finish_fault() handling for large folios Brian Geffon
@ 2025-02-26 14:03 ` Matthew Wilcox
2025-02-26 14:31 ` Brian Geffon
2025-02-26 15:42 ` David Hildenbrand
2025-02-26 15:17 ` Baolin Wang
` (2 subsequent siblings)
3 siblings, 2 replies; 12+ messages in thread
From: Matthew Wilcox @ 2025-02-26 14:03 UTC (permalink / raw)
To: Brian Geffon
Cc: Andrew Morton, Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm,
linux-kernel, stable, Baolin Wang, Hugh Dickins, Marek Maslanka
On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
> When handling faults for anon shmem, finish_fault() will attempt to install
> ptes for the entire folio. Unfortunately, if it encounters a single
> non-pte_none entry in that range it will bail, even if the pte that
> triggered the fault is still pte_none. When this situation happens, the
> fault will be retried endlessly, never making forward progress.
>
> This patch fixes this behavior: if it detects that a pte in the range
> is not pte_none, it will fall back to setting just the pte for the
> address that triggered the fault.
Surely there's a similar problem in do_anonymous_page()?
At any rate, what a horrid function finish_fault() has become.
Special cases all over the place. What we should be doing is
deciding the range of PTEs to insert, bounded by the folio, the VMA
and any non-none entries. Maybe I'll get a chance to fix this up.
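Something like this hypothetical helper is the shape I have in mind
(untested sketch, name made up; assumes vmf->pte maps vmf->address, the
PTL is held, and it ignores the page table boundary and accounting
details the real code would need):

static void bound_fault_range(struct vm_fault *vmf, struct folio *folio,
			      struct page *page,
			      unsigned long *start, unsigned long *end)
{
	struct vm_area_struct *vma = vmf->vma;
	pgoff_t idx = folio_page_idx(folio, page);
	/* Widest range the folio could cover around the faulting address. */
	unsigned long lo = vmf->address - idx * PAGE_SIZE;
	unsigned long hi = lo + folio_nr_pages(folio) * PAGE_SIZE;

	/* Bound by the VMA. */
	lo = max(lo, vma->vm_start);
	hi = min(hi, vma->vm_end);

	/* Grow outwards from the faulting address over none entries only. */
	*start = vmf->address;
	*end = vmf->address + PAGE_SIZE;
	while (*start > lo && pte_none(ptep_get(vmf->pte -
			(vmf->address - *start) / PAGE_SIZE - 1)))
		*start -= PAGE_SIZE;
	while (*end < hi && pte_none(ptep_get(vmf->pte +
			(*end - vmf->address) / PAGE_SIZE)))
		*end += PAGE_SIZE;
}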
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] mm: fix finish_fault() handling for large folios
2025-02-26 14:03 ` Matthew Wilcox
@ 2025-02-26 14:31 ` Brian Geffon
2025-02-26 15:42 ` David Hildenbrand
1 sibling, 0 replies; 12+ messages in thread
From: Brian Geffon @ 2025-02-26 14:31 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Andrew Morton, Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm,
linux-kernel, stable, Baolin Wang, Hugh Dickins, Marek Maslanka
On Wed, Feb 26, 2025 at 9:04 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
> > When handling faults for anon shmem, finish_fault() will attempt to install
> > ptes for the entire folio. Unfortunately, if it encounters a single
> > non-pte_none entry in that range it will bail, even if the pte that
> > triggered the fault is still pte_none. When this situation happens, the
> > fault will be retried endlessly, never making forward progress.
> >
> > This patch fixes this behavior: if it detects that a pte in the range
> > is not pte_none, it will fall back to setting just the pte for the
> > address that triggered the fault.
>
> Surely there's a similar problem in do_anonymous_page()?
>
> At any rate, what a horrid function finish_fault() has become.
> Special cases all over the place. What we should be doing is
> deciding the range of PTEs to insert, bounded by the folio, the VMA
> and any non-none entries. Maybe I'll get a chance to fix this up.
I agree. I wasn't thrilled that the fix looked like this, but I was
trying to keep the change minimal to aid in backporting to stable
kernels where this behavior is broken. That said, do you have a
preference for a minimal way we can fix this before finish_fault()
gets a proper cleanup?
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] mm: fix finish_fault() handling for large folios
2025-02-26 14:03 ` Matthew Wilcox
2025-02-26 14:31 ` Brian Geffon
@ 2025-02-26 15:42 ` David Hildenbrand
2025-02-26 16:28 ` Matthew Wilcox
1 sibling, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2025-02-26 15:42 UTC (permalink / raw)
To: Matthew Wilcox, Brian Geffon
Cc: Andrew Morton, Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm,
linux-kernel, stable, Baolin Wang, Hugh Dickins, Marek Maslanka
On 26.02.25 15:03, Matthew Wilcox wrote:
> On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
>> When handling faults for anon shmem, finish_fault() will attempt to install
>> ptes for the entire folio. Unfortunately, if it encounters a single
>> non-pte_none entry in that range it will bail, even if the pte that
>> triggered the fault is still pte_none. When this situation happens, the
>> fault will be retried endlessly, never making forward progress.
>>
>> This patch fixes this behavior: if it detects that a pte in the range
>> is not pte_none, it will fall back to setting just the pte for the
>> address that triggered the fault.
>
> Surely there's a similar problem in do_anonymous_page()?
I recall we handled it correctly in there the last time I stared at it.
We check pte_none to decide which folio size we can allocate (including
basing the decision on other factors like the VMA etc.), and after
retaking the PTL, we recheck vmf_pte_changed() / pte_range_none() to
make sure there were no races.
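Roughly (from memory; a paraphrased sketch, not the exact code):

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
	if (!vmf->pte)
		goto release;
	if (nr_pages == 1 && vmf_pte_changed(vmf)) {
		/* Somebody populated our PTE under us: back out, refault. */
		update_mmu_tlb(vma, addr, vmf->pte);
		goto release;
	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
		/*
		 * Part of the range was populated under us: back out; a
		 * retried fault can allocate something that fits what is
		 * left.
		 */
		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
		goto release;
	}
	/* ...otherwise install the PTEs for the freshly allocated folio. */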
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] mm: fix finish_fault() handling for large folios
2025-02-26 15:42 ` David Hildenbrand
@ 2025-02-26 16:28 ` Matthew Wilcox
2025-02-26 16:52 ` David Hildenbrand
0 siblings, 1 reply; 12+ messages in thread
From: Matthew Wilcox @ 2025-02-26 16:28 UTC (permalink / raw)
To: David Hildenbrand
Cc: Brian Geffon, Andrew Morton, Zi Yan, Kefeng Wang,
Suren Baghdasaryan, linux-mm, linux-kernel, stable, Baolin Wang,
Hugh Dickins, Marek Maslanka
On Wed, Feb 26, 2025 at 04:42:46PM +0100, David Hildenbrand wrote:
> On 26.02.25 15:03, Matthew Wilcox wrote:
> > On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
> > > When handling faults for anon shmem, finish_fault() will attempt to install
> > > ptes for the entire folio. Unfortunately, if it encounters a single
> > > non-pte_none entry in that range it will bail, even if the pte that
> > > triggered the fault is still pte_none. When this situation happens, the
> > > fault will be retried endlessly, never making forward progress.
> > >
> > > This patch fixes this behavior: if it detects that a pte in the range
> > > is not pte_none, it will fall back to setting just the pte for the
> > > address that triggered the fault.
> >
> > Surely there's a similar problem in do_anonymous_page()?
>
> I recall we handled it correctly in there the last time I stared at it.
>
> We check pte_none to decide which folio size we can allocate (including
> basing the decision on other factors like the VMA etc.), and after
> retaking the PTL, we recheck vmf_pte_changed() / pte_range_none() to
> make sure there were no races.
Ah, so then we'll retry and allocate a folio of the right size the next
time? Rather than the shmem case where the folio is already allocated
and we can't change that?
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] mm: fix finish_fault() handling for large folios
2025-02-26 16:28 ` Matthew Wilcox
@ 2025-02-26 16:52 ` David Hildenbrand
0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2025-02-26 16:52 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Brian Geffon, Andrew Morton, Zi Yan, Kefeng Wang,
Suren Baghdasaryan, linux-mm, linux-kernel, stable, Baolin Wang,
Hugh Dickins, Marek Maslanka
On 26.02.25 17:28, Matthew Wilcox wrote:
> On Wed, Feb 26, 2025 at 04:42:46PM +0100, David Hildenbrand wrote:
>> On 26.02.25 15:03, Matthew Wilcox wrote:
>>> On Wed, Feb 26, 2025 at 06:48:15AM -0500, Brian Geffon wrote:
>>>> When handling faults for anon shmem, finish_fault() will attempt to install
>>>> ptes for the entire folio. Unfortunately, if it encounters a single
>>>> non-pte_none entry in that range it will bail, even if the pte that
>>>> triggered the fault is still pte_none. When this situation happens, the
>>>> fault will be retried endlessly, never making forward progress.
>>>>
>>>> This patch fixes this behavior: if it detects that a pte in the range
>>>> is not pte_none, it will fall back to setting just the pte for the
>>>> address that triggered the fault.
>>>
>>> Surely there's a similar problem in do_anonymous_page()?
>>
>> I recall we handled it correctly in there the last time I stared at it.
>>
>> We check pte_none to decide which folio size we can allocate (including
>> basing the decision on other factors like the VMA etc.), and after
>> retaking the PTL, we recheck vmf_pte_changed() / pte_range_none() to
>> make sure there were no races.
>
> Ah, so then we'll retry and allocate a folio of the right size the next
> time?
IIRC we'll retry the fault in case we had a race. Likely, if we had a
race, somebody else installed a (large) folio and we essentially take a
second fault. If, for some reason, the race only touched part of the
PTEs we tried to modify, we'll get another fault and allocate something
(smaller) that would fit into the new empty range.
So yes, we're more flexible because we're allocating the folios and
don't have to take whatever folio size is in the pagecache into
consideration.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] mm: fix finish_fault() handling for large folios
2025-02-26 11:48 [PATCH] mm: fix finish_fault() handling for large folios Brian Geffon
2025-02-26 14:03 ` Matthew Wilcox
@ 2025-02-26 15:17 ` Baolin Wang
2025-02-26 15:46 ` Brian Geffon
2025-02-26 16:23 ` [PATCH v2] " Brian Geffon
2025-02-27 13:32 ` [PATCH v3] " Brian Geffon
3 siblings, 1 reply; 12+ messages in thread
From: Baolin Wang @ 2025-02-26 15:17 UTC (permalink / raw)
To: Brian Geffon, Andrew Morton
Cc: Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm, linux-kernel,
stable, Hugh Dickins, Marek Maslanka
On 2025/2/26 19:48, Brian Geffon wrote:
> When handling faults for anon shmem, finish_fault() will attempt to install
> ptes for the entire folio. Unfortunately, if it encounters a single
> non-pte_none entry in that range it will bail, even if the pte that
> triggered the fault is still pte_none. When this situation happens, the
> fault will be retried endlessly, never making forward progress.
>
> This patch fixes this behavior: if it detects that a pte in the range
> is not pte_none, it will fall back to setting just the pte for the
> address that triggered the fault.
Could you describe in detail how this situation occurs? How is a
non-none pte inserted within the range of the large folio? After all,
we have checks in shmem to determine if a large folio is suitable.
Anyway, if we find that pte_range_none() is false, we can fall back to
a per-page fault, as the following (untested) code shows, which seems
simpler?
diff --git a/mm/memory.c b/mm/memory.c
index a8196ae72e9a..8a2a9fda5410 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5219,7 +5219,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
!(vma->vm_flags & VM_SHARED);
int type, nr_pages;
- unsigned long addr = vmf->address;
+ unsigned long addr;
+ bool fallback_per_page = false;
+
+
+fallback:
+ addr = vmf->address;
/* Did we COW the page? */
if (is_cow)
@@ -5258,7 +5263,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
* approach also applies to non-anonymous-shmem faults to avoid
* inflating the RSS of the process.
*/
- if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+ if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))
+ || unlikely(fallback_per_page)) {
nr_pages = 1;
} else if (nr_pages > 1) {
pgoff_t idx = folio_page_idx(folio, page);
@@ -5294,9 +5300,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
ret = VM_FAULT_NOPAGE;
goto unlock;
} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
- update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
- ret = VM_FAULT_NOPAGE;
- goto unlock;
+ fallback_per_page = true;
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
+ goto fallback;
}
folio_ref_add(folio, nr_pages - 1);
>
> Cc: stable@vger.kernel.org
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Hugh Dickins <hughd@google.com>
> Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
> Reported-by: Marek Maslanka <mmaslanka@google.com>
> Signed-off-by: Brian Geffon <bgeffon@google.com>
> ---
> mm/memory.c | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index b4d3d4893267..32de626ec1da 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5258,9 +5258,22 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> ret = VM_FAULT_NOPAGE;
> goto unlock;
> } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> - update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> - ret = VM_FAULT_NOPAGE;
> - goto unlock;
> + /*
> + * We encountered a set pte, let's just try to install the
> + * pte for the original fault if that pte is still pte none.
> + */
> + pgoff_t idx = (vmf->address - addr) / PAGE_SIZE;
> +
> + if (!pte_none(ptep_get_lockless(vmf->pte + idx))) {
> + update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> + ret = VM_FAULT_NOPAGE;
> + goto unlock;
> + }
> +
> + vmf->pte = vmf->pte + idx;
> + page = folio_page(folio, idx);
> + addr = vmf->address;
> + nr_pages = 1;
> }
>
> folio_ref_add(folio, nr_pages - 1);
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] mm: fix finish_fault() handling for large folios
2025-02-26 15:17 ` Baolin Wang
@ 2025-02-26 15:46 ` Brian Geffon
0 siblings, 0 replies; 12+ messages in thread
From: Brian Geffon @ 2025-02-26 15:46 UTC (permalink / raw)
To: Baolin Wang
Cc: Andrew Morton, Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm,
linux-kernel, stable, Hugh Dickins, Marek Maslanka
On Wed, Feb 26, 2025 at 10:17 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/2/26 19:48, Brian Geffon wrote:
> > When handling faults for anon shmem, finish_fault() will attempt to install
> > ptes for the entire folio. Unfortunately, if it encounters a single
> > non-pte_none entry in that range it will bail, even if the pte that
> > triggered the fault is still pte_none. When this situation happens, the
> > fault will be retried endlessly, never making forward progress.
> >
> > This patch fixes this behavior: if it detects that a pte in the range
> > is not pte_none, it will fall back to setting just the pte for the
> > address that triggered the fault.
>
> Could you describe in detail how this situation occurs? How is a
> non-none pte inserted within the range of the large folio? After all,
> we have checks in shmem to determine if a large folio is suitable.
We're seeing it because of racing shmem_undo_range() calls: basically,
a portion of the folio is zapped, and that prevents finish_fault() from
ever completing because it assumes the entire range must be pte_none.
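To illustrate the shape of the interleaving, something like the
following (a hypothetical, untested userspace sketch, not a guaranteed
reproducer; it assumes large folios are enabled for anon shmem via the
transparent_hugepage shmem_enabled knobs):

#include <pthread.h>
#include <sys/mman.h>

#define REGION	(2UL << 20)

static volatile char *map;

/*
 * Repeatedly zap part of the mapping; this goes through
 * shmem_undo_range() and clears a subset of the folio's ptes.
 */
static void *puncher(void *arg)
{
	for (;;)
		madvise((void *)(map + 4096), 8 * 4096, MADV_REMOVE);
	return NULL;
}

int main(void)
{
	pthread_t t;

	map = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;
	pthread_create(&t, NULL, puncher, NULL);
	/*
	 * Keep one pte in the folio's range populated while re-faulting
	 * a zapped one, so finish_fault() can observe a non-none entry
	 * in the range while the faulting pte itself is still pte_none.
	 */
	for (;;) {
		map[0] = 1;
		map[4096] = 1;
	}
}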
>
> Anyway, if we find that pte_range_none() is false, we can fall back to
> a per-page fault, as the following (untested) code shows, which seems
> simpler?
Yah, I like this approach. I'll send a v2 with you as a Suggested-by.
>
> diff --git a/mm/memory.c b/mm/memory.c
> index a8196ae72e9a..8a2a9fda5410 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5219,7 +5219,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
> !(vma->vm_flags & VM_SHARED);
> int type, nr_pages;
> - unsigned long addr = vmf->address;
> + unsigned long addr;
> + bool fallback_per_page = false;
> +
> +
> +fallback:
> + addr = vmf->address;
>
> /* Did we COW the page? */
> if (is_cow)
> @@ -5258,7 +5263,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> * approach also applies to non-anonymous-shmem faults to avoid
> * inflating the RSS of the process.
> */
> - if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
> + if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))
> + || unlikely(fallback_per_page)) {
> nr_pages = 1;
> } else if (nr_pages > 1) {
> pgoff_t idx = folio_page_idx(folio, page);
> @@ -5294,9 +5300,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> ret = VM_FAULT_NOPAGE;
> goto unlock;
> } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> - update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> - ret = VM_FAULT_NOPAGE;
> - goto unlock;
> + fallback_per_page = true;
> + pte_unmap_unlock(vmf->pte, vmf->ptl);
> + goto fallback;
> }
>
> folio_ref_add(folio, nr_pages - 1);
>
> >
> > Cc: stable@vger.kernel.org
> > Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Cc: Hugh Dickins <hughd@google.com>
> > Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
> > Reported-by: Marek Maslanka <mmaslanka@google.com>
> > Signed-off-by: Brian Geffon <bgeffon@google.com>
> > ---
> > mm/memory.c | 19 ++++++++++++++++---
> > 1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index b4d3d4893267..32de626ec1da 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -5258,9 +5258,22 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> > ret = VM_FAULT_NOPAGE;
> > goto unlock;
> > } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> > - update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> > - ret = VM_FAULT_NOPAGE;
> > - goto unlock;
> > + /*
> > + * We encountered a set pte, let's just try to install the
> > + * pte for the original fault if that pte is still pte none.
> > + */
> > + pgoff_t idx = (vmf->address - addr) / PAGE_SIZE;
> > +
> > + if (!pte_none(ptep_get_lockless(vmf->pte + idx))) {
> > + update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> > + ret = VM_FAULT_NOPAGE;
> > + goto unlock;
> > + }
> > +
> > + vmf->pte = vmf->pte + idx;
> > + page = folio_page(folio, idx);
> > + addr = vmf->address;
> > + nr_pages = 1;
> > }
> >
> > folio_ref_add(folio, nr_pages - 1);
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v2] mm: fix finish_fault() handling for large folios
2025-02-26 11:48 [PATCH] mm: fix finish_fault() handling for large folios Brian Geffon
2025-02-26 14:03 ` Matthew Wilcox
2025-02-26 15:17 ` Baolin Wang
@ 2025-02-26 16:23 ` Brian Geffon
2025-02-27 7:34 ` Baolin Wang
2025-02-27 13:32 ` [PATCH v3] " Brian Geffon
3 siblings, 1 reply; 12+ messages in thread
From: Brian Geffon @ 2025-02-26 16:23 UTC (permalink / raw)
To: Andrew Morton
Cc: Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm, linux-kernel,
Matthew Wilcox, David Hildenbrand, Brian Geffon, stable,
Hugh Dickins, Baolin Wang, Marek Maslanka
When handling faults for anon shmem, finish_fault() will attempt to install
ptes for the entire folio. Unfortunately, if it encounters a single
non-pte_none entry in that range it will bail, even if the pte that
triggered the fault is still pte_none. When this situation happens, the
fault will be retried endlessly, never making forward progress.
This patch fixes this behavior: if it detects that a pte in the range
is not pte_none, it will fall back to setting a single pte.
Cc: stable@vger.kernel.org
Cc: Hugh Dickins <hughd@google.com>
Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reported-by: Marek Maslanka <mmaslanka@google.com>
Signed-off-by: Brian Geffon <bgeffon@google.com>
---
mm/memory.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index b4d3d4893267..b6c467fdbfa4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5183,7 +5183,11 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
!(vma->vm_flags & VM_SHARED);
int type, nr_pages;
- unsigned long addr = vmf->address;
+ unsigned long addr;
+ bool needs_fallback = false;
+
+fallback:
+ addr = vmf->address;
/* Did we COW the page? */
if (is_cow)
@@ -5222,7 +5226,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
* approach also applies to non-anonymous-shmem faults to avoid
* inflating the RSS of the process.
*/
- if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+ if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
+ unlikely(needs_fallback)) {
nr_pages = 1;
} else if (nr_pages > 1) {
pgoff_t idx = folio_page_idx(folio, page);
@@ -5258,9 +5263,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
ret = VM_FAULT_NOPAGE;
goto unlock;
} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
- update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
- ret = VM_FAULT_NOPAGE;
- goto unlock;
+ needs_fallback = true;
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
+ goto fallback;
}
folio_ref_add(folio, nr_pages - 1);
--
2.48.1.711.g2feabab25a-goog
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2] mm: fix finish_fault() handling for large folios
2025-02-26 16:23 ` [PATCH v2] " Brian Geffon
@ 2025-02-27 7:34 ` Baolin Wang
2025-02-27 16:47 ` Brian Geffon
0 siblings, 1 reply; 12+ messages in thread
From: Baolin Wang @ 2025-02-27 7:34 UTC (permalink / raw)
To: Brian Geffon, Andrew Morton
Cc: Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm, linux-kernel,
Matthew Wilcox, David Hildenbrand, stable, Hugh Dickins,
Marek Maslanka
On 2025/2/27 00:23, Brian Geffon wrote:
> When handling faults for anon shmem, finish_fault() will attempt to install
> ptes for the entire folio. Unfortunately, if it encounters a single
> non-pte_none entry in that range it will bail, even if the pte that
> triggered the fault is still pte_none. When this situation happens, the
> fault will be retried endlessly, never making forward progress.
>
> This patch fixes this behavior: if it detects that a pte in the range
> is not pte_none, it will fall back to setting a single pte.
>
> Cc: stable@vger.kernel.org
> Cc: Hugh Dickins <hughd@google.com>
> Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reported-by: Marek Maslanka <mmaslanka@google.com>
> Signed-off-by: Brian Geffon <bgeffon@google.com>
> ---
> mm/memory.c | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index b4d3d4893267..b6c467fdbfa4 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5183,7 +5183,11 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
> !(vma->vm_flags & VM_SHARED);
> int type, nr_pages;
> - unsigned long addr = vmf->address;
> + unsigned long addr;
> + bool needs_fallback = false;
> +
> +fallback:
> + addr = vmf->address;
>
> /* Did we COW the page? */
> if (is_cow)
> @@ -5222,7 +5226,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> * approach also applies to non-anonymous-shmem faults to avoid
> * inflating the RSS of the process.
> */
> - if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
> + if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
> + unlikely(needs_fallback)) {
Nit: can you align the code? Otherwise looks good to me.
> nr_pages = 1;
> } else if (nr_pages > 1) {
> pgoff_t idx = folio_page_idx(folio, page);
> @@ -5258,9 +5263,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> ret = VM_FAULT_NOPAGE;
> goto unlock;
> } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> - update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> - ret = VM_FAULT_NOPAGE;
> - goto unlock;
> + needs_fallback = true;
> + pte_unmap_unlock(vmf->pte, vmf->ptl);
> + goto fallback;
> }
>
> folio_ref_add(folio, nr_pages - 1);
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2] mm: fix finish_fault() handling for large folios
2025-02-27 7:34 ` Baolin Wang
@ 2025-02-27 16:47 ` Brian Geffon
0 siblings, 0 replies; 12+ messages in thread
From: Brian Geffon @ 2025-02-27 16:47 UTC (permalink / raw)
To: Baolin Wang
Cc: Andrew Morton, Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm,
linux-kernel, Matthew Wilcox, David Hildenbrand, stable,
Hugh Dickins, Marek Maslanka
On Thu, Feb 27, 2025 at 2:34 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/2/27 00:23, Brian Geffon wrote:
> > When handling faults for anon shmem, finish_fault() will attempt to install
> > ptes for the entire folio. Unfortunately, if it encounters a single
> > non-pte_none entry in that range it will bail, even if the pte that
> > triggered the fault is still pte_none. When this situation happens, the
> > fault will be retried endlessly, never making forward progress.
> >
> > This patch fixes this behavior: if it detects that a pte in the range
> > is not pte_none, it will fall back to setting a single pte.
> >
> > Cc: stable@vger.kernel.org
> > Cc: Hugh Dickins <hughd@google.com>
> > Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
> > Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Reported-by: Marek Maslanka <mmaslanka@google.com>
> > Signed-off-by: Brian Geffon <bgeffon@google.com>
> > ---
> > mm/memory.c | 15 ++++++++++-----
> > 1 file changed, 10 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index b4d3d4893267..b6c467fdbfa4 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -5183,7 +5183,11 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> > bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
> > !(vma->vm_flags & VM_SHARED);
> > int type, nr_pages;
> > - unsigned long addr = vmf->address;
> > + unsigned long addr;
> > + bool needs_fallback = false;
> > +
> > +fallback:
> > + addr = vmf->address;
> >
> > /* Did we COW the page? */
> > if (is_cow)
> > @@ -5222,7 +5226,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> > * approach also applies to non-anonymous-shmem faults to avoid
> > * inflating the RSS of the process.
> > */
> > - if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
> > + if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
> > + unlikely(needs_fallback)) {
>
> Nit: can you align the code? Otherwise looks good to me.
I mailed a v3 with adjusted alignment; I'll let Andrew decide which
variation he prefers.
>
> > nr_pages = 1;
> > } else if (nr_pages > 1) {
> > pgoff_t idx = folio_page_idx(folio, page);
> > @@ -5258,9 +5263,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> > ret = VM_FAULT_NOPAGE;
> > goto unlock;
> > } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> > - update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> > - ret = VM_FAULT_NOPAGE;
> > - goto unlock;
> > + needs_fallback = true;
> > + pte_unmap_unlock(vmf->pte, vmf->ptl);
> > + goto fallback;
> > }
> >
> > folio_ref_add(folio, nr_pages - 1);
Thanks for looking
Brian
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v3] mm: fix finish_fault() handling for large folios
2025-02-26 11:48 [PATCH] mm: fix finish_fault() handling for large folios Brian Geffon
` (2 preceding siblings ...)
2025-02-26 16:23 ` [PATCH v2] " Brian Geffon
@ 2025-02-27 13:32 ` Brian Geffon
3 siblings, 0 replies; 12+ messages in thread
From: Brian Geffon @ 2025-02-27 13:32 UTC (permalink / raw)
To: Andrew Morton
Cc: Zi Yan, Kefeng Wang, Suren Baghdasaryan, linux-mm, linux-kernel,
Matthew Wilcox, David Hildenbrand, Brian Geffon, stable,
Hugh Dickins, Baolin Wang, Marek Maslanka
When handling faults for anon shmem, finish_fault() will attempt to install
ptes for the entire folio. Unfortunately, if it encounters a single
non-pte_none entry in that range it will bail, even if the pte that
triggered the fault is still pte_none. When this situation happens, the
fault will be retried endlessly, never making forward progress.
This patch fixes this behavior: if it detects that a pte in the range
is not pte_none, it will fall back to setting a single pte.
Cc: stable@vger.kernel.org
Cc: Hugh Dickins <hughd@google.com>
Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reported-by: Marek Maslanka <mmaslanka@google.com>
Signed-off-by: Brian Geffon <bgeffon@google.com>
---
mm/memory.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index b4d3d4893267..f3054bbb3c1e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5183,7 +5183,11 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
!(vma->vm_flags & VM_SHARED);
int type, nr_pages;
- unsigned long addr = vmf->address;
+ unsigned long addr;
+ bool needs_fallback = false;
+
+fallback:
+ addr = vmf->address;
/* Did we COW the page? */
if (is_cow)
@@ -5222,7 +5226,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
* approach also applies to non-anonymous-shmem faults to avoid
* inflating the RSS of the process.
*/
- if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+ if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
+ unlikely(needs_fallback)) {
nr_pages = 1;
} else if (nr_pages > 1) {
pgoff_t idx = folio_page_idx(folio, page);
@@ -5258,9 +5263,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
ret = VM_FAULT_NOPAGE;
goto unlock;
} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
- update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
- ret = VM_FAULT_NOPAGE;
- goto unlock;
+ needs_fallback = true;
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
+ goto fallback;
}
folio_ref_add(folio, nr_pages - 1);
--
2.48.1.711.g2feabab25a-goog
^ permalink raw reply [flat|nested] 12+ messages in thread