From: Shakeel Butt <shakeelb@google.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>,
Christoph Lameter <cl@linux.com>,
Matthew Wilcox <willy@infradead.org>,
linux-mm@kvack.org, cgroups@vger.kernel.org
Subject: Re: cgroups: warning for metadata allocation with GFP_NOFAIL (was Re: folio_alloc_buffers() doing allocations > order 1 with GFP_NOFAIL)
Date: Wed, 8 Nov 2023 22:37:00 -0800 [thread overview]
Message-ID: <CALvZod4yTfqk9u6AmTyk9HZyGQOh0GTLLN6f0gHWy3WNKCm-vw@mail.gmail.com> (raw)
In-Reply-To: <t4vlvq3f5owdqr76ut3f5yk35jwyy76pvq4ji7zze5aimgh3uu@c2b5mmr4eytv>
On Wed, Nov 8, 2023 at 2:33 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Tue 07-11-23 10:05:24, Roman Gushchin wrote:
> > On Mon, Nov 06, 2023 at 06:57:05PM -0800, Christoph Lameter wrote:
> > > Right... Well, let's add the cgroup folks to this.
> >
> > Hello!
> >
> > I think it's the best thing we can do now. Thoughts?
> >
> > From 5ed3e88f4f052b6ce8dbec0545dfc80eb7534a1a Mon Sep 17 00:00:00 2001
> > From: Roman Gushchin <roman.gushchin@linux.dev>
> > Date: Tue, 7 Nov 2023 09:18:02 -0800
> > Subject: [PATCH] mm: kmem: drop __GFP_NOFAIL when allocating objcg vectors
> >
> > Objcg vectors attached to slab pages to store slab object ownership
> > information are allocated using gfp flags for the original slab
> > allocation. Depending on slab page order and the size of slab objects,
> > objcg vector can take several pages.
> >
> > If the original allocation was done with the __GFP_NOFAIL flag, it
> > triggered a warning in the page allocation code. Indeed, order > 1
> > pages should not be allocated with the __GFP_NOFAIL flag.
> >
> > Fix this by simply dropping the __GFP_NOFAIL flag when allocating
> > the objcg vector. This effectively allows the accounting of a single
> > slab object to be skipped under heavy memory pressure.
>
> It would be really good to describe what happens if the memcg metadata
> allocation fails. AFAICS both callers of memcg_alloc_slab_cgroups -
> memcg_slab_post_alloc_hook and account_slab - will simply skip the
> accounting, which is rather curious but probably tolerable (does this
> allow a runaway from memcg limits?). If that is intended then it should
> be documented so that new users do not get it wrong. We do not want the
> error to ever propagate down to the allocator caller, which doesn't
> expect it.
A memcg metadata allocation failure here is somewhat similar to how we
used to account slab memory with per-memcg kmem caches: the first
allocation from a memcg triggered kmem cache creation and let that
allocation pass through unaccounted.
>
> Btw. if the large allocation is really necessary, which hasn't been
> explained so far AFAIK, would vmalloc fallback be an option?
>
For this specific scenario, a large allocation is rather unexpected; it
would take a large (multi-order) slab holding tiny objects. Roman, do
you know the slab settings under which this failure occurs?

Anyway, I think kvmalloc is a better option. Most of the time this
should be an order-0 allocation, and for unusual settings we would
fall back to vmalloc.
Thread overview: 15+ messages
2023-11-01 0:13 folio_alloc_buffers() doing allocations > order 1 with GFP_NOFAIL Christoph Lameter (Ampere)
2023-11-01 8:08 ` Matthew Wilcox
2023-11-07 2:57 ` cgroups: warning for metadata allocation with GFP_NOFAIL (was Re: folio_alloc_buffers() doing allocations > order 1 with GFP_NOFAIL) Christoph Lameter
2023-11-07 18:05 ` Roman Gushchin
2023-11-07 18:18 ` Shakeel Butt
2023-11-08 10:33 ` Michal Hocko
2023-11-09 6:37 ` Shakeel Butt [this message]
2023-11-09 17:36 ` Roman Gushchin
2023-11-07 19:24 ` Matthew Wilcox
2023-11-07 21:33 ` Roman Gushchin
2023-11-07 21:37 ` Matthew Wilcox
2023-11-10 13:38 ` Matthew Wilcox
2023-11-13 19:48 ` Christoph Lameter
2023-11-13 22:48 ` Matthew Wilcox
2023-11-14 17:29 ` Roman Gushchin