From: Axel Rasmussen <axelrasmussen@google.com>
To: David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Lameter <cl@linux.com>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Pekka Enberg <penberg@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm, slub: print CPU id on slab OOM
Date: Mon, 12 Aug 2024 15:45:44 -0700 [thread overview]
Message-ID: <CAJHvVcgP_yOc7rgyEsyC2u+h0XLLCRAUp4Fd0nAX2fJ2KTvL9g@mail.gmail.com> (raw)
In-Reply-To: <6951700d-b6c0-b9b7-6587-1823a9d8c63d@google.com>
On Sun, Aug 11, 2024 at 1:21 PM David Rientjes <rientjes@google.com> wrote:
>
> On Sun, 11 Aug 2024, Vlastimil Babka wrote:
>
> > > diff --git a/mm/slub.c b/mm/slub.c
> > > index c9d8a2497fd6..7148047998de 100644
> > > --- a/mm/slub.c
> > > +++ b/mm/slub.c
> > > @@ -3422,7 +3422,8 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
> > > if ((gfpflags & __GFP_NOWARN) || !__ratelimit(&slub_oom_rs))
> > > return;
> > >
> > > - pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
> > > + pr_warn("SLUB: Unable to allocate memory for CPU %u on node %d, gfp=%#x(%pGg)\n",
> >
> > BTW, wouldn't "on CPU" be more correct? "for CPU" might misleadingly
> > suggest that we are somehow constrained to that CPU.
> >
>
> Agreed.
No objection to the rewording.
>
> When I suggested this patch, I was trying to ascertain whether something
> was really wonky based on some logs that we were seeing.
>
> node 0: slabs: 223, objs: 11819, free: 0
> node 1: slabs: 951, objs: 50262, free: 218
>
> This is for a NUMA_NO_NODE allocation, so I wanted to know if the cpu was
> on node 0 or node 1.
>
> Even with the patch, that requires knowing the cpu-to-node mapping. If we
> add the CPU output here, we likely also want to print out cpu_to_node().
Seems reasonable. Of course we could always look it up separately, but
it would be convenient to just print it directly. I can send a v2 to
add this.
>
> > > + preemptible() ? raw_smp_processor_id() : smp_processor_id(),
> >
> > Also could we just use raw_smp_processor_id() always here? I don't see
> > this has any advantage or am I missing something?
> >
>
> This matches my understanding as well.
That's fair. In any case, it seems to matter very little for this use
case whether the read is "stable" or not, so better to keep it simple. I
can send a v2 with this tweak too.
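Roughly, I'm thinking v2 would look something like this (untested
sketch; the exact message wording is of course open to bikeshedding):

	unsigned int cpu = raw_smp_processor_id();

	/* cpu_to_node() gives the home node of the CPU we're allocating from. */
	pr_warn("SLUB: Unable to allocate memory on CPU %u (of node %d) on node %d, gfp=%#x(%pGg)\n",
		cpu, cpu_to_node(cpu), nid, gfpflags, &gfpflags);

i.e. drop the preemptible() check entirely and print the CPU's home
node alongside the requested node.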
Thread overview: 5+ messages
[not found] <20240806232649.3258741-1-axelrasmussen@google.com>
2024-08-09 7:36 ` Vlastimil Babka
2024-08-10 23:52 ` David Rientjes
2024-08-11 20:16 ` Vlastimil Babka
2024-08-11 20:21 ` David Rientjes
2024-08-12 22:45 ` Axel Rasmussen [this message]