* [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors
From: Peter Xu @ 2023-02-22 19:52 UTC (permalink / raw)
To: linux-kernel, linux-mm
Cc: peterx, Andrew Morton, Yang Shi, David Stevens, Johannes Weiner
If the memory charge fails, clean up the folio properly instead of returning
the hpage along with an error, which is normally what a function should do in
this case: either return successfully, or return an error with no side
effects from a partial run.
This also avoids the caller calling mem_cgroup_uncharge() unnecessarily on
either the anon or shmem path (even though doing so is safe).
Cc: Yang Shi <shy828301@gmail.com>
Reviewed-by: David Stevens <stevensd@chromium.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
v1->v2:
- Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
---
mm/khugepaged.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8dbc39896811..941d1c7ea910 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
GFP_TRANSHUGE);
int node = hpage_collapse_find_target_node(cc);
+ struct folio *folio;
if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
return SCAN_ALLOC_HUGE_PAGE_FAIL;
- if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
+
+ folio = page_folio(*hpage);
+ if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
+ folio_put(folio);
+ *hpage = NULL;
return SCAN_CGROUP_CHARGE_FAIL;
+ }
count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
+
return SCAN_SUCCEED;
}
--
2.39.1
* Re: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors
From: Yang Shi @ 2023-02-22 22:53 UTC (permalink / raw)
To: Peter Xu
Cc: linux-kernel, linux-mm, Andrew Morton, David Stevens, Johannes Weiner
On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <peterx@redhat.com> wrote:
>
> If the memory charge fails, clean up the folio properly instead of returning
> the hpage along with an error, which is normally what a function should do
> in this case: either return successfully, or return an error with no side
> effects from a partial run.
>
> This also avoids the caller calling mem_cgroup_uncharge() unnecessarily on
> either the anon or shmem path (even though doing so is safe).
Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@gmail.com>
>
> Cc: Yang Shi <shy828301@gmail.com>
> Reviewed-by: David Stevens <stevensd@chromium.org>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> v1->v2:
> - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> ---
> mm/khugepaged.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 8dbc39896811..941d1c7ea910 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> GFP_TRANSHUGE);
> int node = hpage_collapse_find_target_node(cc);
> + struct folio *folio;
>
> if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> return SCAN_ALLOC_HUGE_PAGE_FAIL;
> - if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> +
> + folio = page_folio(*hpage);
> + if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> + folio_put(folio);
> + *hpage = NULL;
> return SCAN_CGROUP_CHARGE_FAIL;
> + }
> count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> +
> return SCAN_SUCCEED;
> }
>
> --
> 2.39.1
>
* Re: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors
From: Zach O'Keefe @ 2023-03-02 23:21 UTC (permalink / raw)
To: Yang Shi
Cc: Peter Xu, linux-kernel, linux-mm, Andrew Morton, David Stevens,
Johannes Weiner
On Feb 22 14:53, Yang Shi wrote:
> On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > If the memory charge fails, clean up the folio properly instead of
> > returning the hpage along with an error, which is normally what a
> > function should do in this case: either return successfully, or return
> > an error with no side effects from a partial run.
> >
> > This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
> > on either the anon or shmem path (even though doing so is safe).
>
> Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@gmail.com>
>
> >
> > Cc: Yang Shi <shy828301@gmail.com>
> > Reviewed-by: David Stevens <stevensd@chromium.org>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> > v1->v2:
> > - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> > ---
> > mm/khugepaged.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 8dbc39896811..941d1c7ea910 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> > gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> > GFP_TRANSHUGE);
> > int node = hpage_collapse_find_target_node(cc);
> > + struct folio *folio;
> >
> > if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> > return SCAN_ALLOC_HUGE_PAGE_FAIL;
> > - if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> > +
> > + folio = page_folio(*hpage);
> > + if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> > + folio_put(folio);
> > + *hpage = NULL;
> > return SCAN_CGROUP_CHARGE_FAIL;
> > + }
> > count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> > +
> > return SCAN_SUCCEED;
> > }
> >
> > --
> > 2.39.1
> >
>
Thanks, Peter.
Can we also get rid of the unnecessary mem_cgroup_uncharge() calls while we're
at it? Maybe this deserves a separate patch, but after Yang's cleanup of the
!NUMA case (where we would preallocate a hugepage) we can depend on put_page()
to take care of that for us.
Regardless, you can have my
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
* Re: [PATCH v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors
From: Peter Xu @ 2023-03-03 14:59 UTC (permalink / raw)
To: Zach O'Keefe
Cc: Yang Shi, linux-kernel, linux-mm, Andrew Morton, David Stevens,
Johannes Weiner
On Thu, Mar 02, 2023 at 03:21:50PM -0800, Zach O'Keefe wrote:
> On Feb 22 14:53, Yang Shi wrote:
> > On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > If the memory charge fails, clean up the folio properly instead of
> > > returning the hpage along with an error, which is normally what a
> > > function should do in this case: either return successfully, or return
> > > an error with no side effects from a partial run.
> > >
> > > This also avoids the caller calling mem_cgroup_uncharge() unnecessarily
> > > on either the anon or shmem path (even though doing so is safe).
> >
> > Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@gmail.com>
> >
> > >
> > > Cc: Yang Shi <shy828301@gmail.com>
> > > Reviewed-by: David Stevens <stevensd@chromium.org>
> > > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > ---
> > > v1->v2:
> > > - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> > > ---
> > > mm/khugepaged.c | 9 ++++++++-
> > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > index 8dbc39896811..941d1c7ea910 100644
> > > --- a/mm/khugepaged.c
> > > +++ b/mm/khugepaged.c
> > > @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> > > gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> > > GFP_TRANSHUGE);
> > > int node = hpage_collapse_find_target_node(cc);
> > > + struct folio *folio;
> > >
> > > if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> > > return SCAN_ALLOC_HUGE_PAGE_FAIL;
> > > - if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> > > +
> > > + folio = page_folio(*hpage);
> > > + if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> > > + folio_put(folio);
> > > + *hpage = NULL;
> > > return SCAN_CGROUP_CHARGE_FAIL;
> > > + }
> > > count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> > > +
> > > return SCAN_SUCCEED;
> > > }
> > >
> > > --
> > > 2.39.1
> > >
> >
>
> Thanks, Peter.
>
> Can we also get rid of the unnecessary mem_cgroup_uncharge() calls while we're
> at it? Maybe this deserves a separate patch, but after Yang's cleanup of the
> !NUMA case (where we would preallocate a hugepage) we can depend on put_page()
> to take care of that for us.
Makes sense to me. I can prepare a separate patch to clean it up.
>
> Regardless, you can have my
>
> Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Thanks!
--
Peter Xu