From: Michal Hocko <mhocko@suse.com>
To: Muchun Song <songmuchun@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Cgroups <cgroups@vger.kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [External] Re: [PATCH] mm: memcontrol: fix missing wakeup oom task
Date: Fri, 5 Feb 2021 11:21:47 +0100	[thread overview]
Message-ID: <YB0cO7R1WtJgAxI2@dhcp22.suse.cz> (raw)
In-Reply-To: <CAMZfGtWKNNhc1Jy1jzp2uZU_PM6GNWup7d=yUVk9AehKFo_CRw@mail.gmail.com>

On Fri 05-02-21 17:55:10, Muchun Song wrote:
> On Fri, Feb 5, 2021 at 4:24 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Fri 05-02-21 14:23:10, Muchun Song wrote:
> > > We call memcg_oom_recover() in uncharge_batch() to wake up the OOM
> > > task when a page is uncharged, but for slab pages we do not do this
> > > when they are uncharged.
> >
> > How does the patch deal with this?
> 
> When we uncharge a slab page via __memcg_kmem_uncharge(), this path
> forgets to do the wakeup for us, unlike uncharge_batch(). Right?

Yes, this was more or less clear (still, it would have been nicer to be
explicit). But I believe you still haven't replied to my question. I
assume you rely on refill_stock() doing the draining, but how does this
address the problem? Is it sufficient to do the wakeups in a batched way?
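
For context, a simplified sketch of the two uncharge paths as of
mm/memcontrol.c around v5.11 (stats updates and kmem counter handling
elided), which makes the asymmetry explicit:

static void uncharge_batch(const struct uncharge_gather *ug)
{
	if (!mem_cgroup_is_root(ug->memcg)) {
		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
		if (do_memsw_account())
			page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages);
		/* wakes up tasks sleeping on this memcg's OOM waitqueue */
		memcg_oom_recover(ug->memcg);
	}
	/* ... */
}

void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
		page_counter_uncharge(&memcg->kmem, nr_pages);

	/* returns pages to the per-cpu stock; no memcg_oom_recover() here */
	refill_stock(memcg, nr_pages);
}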

> > > When we drain the per-cpu stock, we should also do this.
> >
> > Can we have anything in the per-cpu stock while entering the OOM
> > path? IIRC we drain all CPUs before entering the OOM path.
> 
> You are right. I did not notice this. Thank you.
> 
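
(For the archives: the charge path flushes the stock before it ever
declares OOM; a simplified sketch of the relevant part of try_charge(),
again from around v5.11:

	if (!drained) {
		drain_all_stock(mem_over_limit);
		drained = true;
		goto retry;
	}
	/* ... further retries and reclaim attempts, then eventually ... */
	mem_cgroup_oom(mem_over_limit, gfp_mask,
		       get_order(nr_pages * PAGE_SIZE));

so by the time a task blocks waiting for OOM recovery, the per-cpu
stock of the memcg over its limit has already been drained.)
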
> >
> > > memcg_oom_recover() is small, so make it inline.
> >
> > Does this lead to any code generation improvements? I would expect
> > the compiler to be clever enough to inline static functions if that
> > pays off. If so, make this a patch on its own.
> 
> I have disassembled the code and I see that memcg_oom_recover() is not
> inlined, maybe because it has a lot of callers. Just a guess.
> 
> (gdb) disassemble uncharge_batch
>  [...]
>  0xffffffff81341c73 <+227>: callq  0xffffffff8133c420 <page_counter_uncharge>
>  0xffffffff81341c78 <+232>: jmpq   0xffffffff81341bc0 <uncharge_batch+48>
>  0xffffffff81341c7d <+237>: callq  0xffffffff8133e2c0 <memcg_oom_recover>

So does it really help to do the inlining?
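
If it is supposed to be a win, comparing the before/after kernel images
should show it; e.g. with the in-tree script (assuming both builds use
the same config):

	./scripts/bloat-o-meter vmlinux.before vmlinux.after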
-- 
Michal Hocko
SUSE Labs



Thread overview: 8+ messages
2021-02-05  6:23 Muchun Song
2021-02-05  8:24 ` Michal Hocko
2021-02-05  9:55   ` [External] " Muchun Song
2021-02-05 10:21     ` Michal Hocko [this message]
2021-02-05 11:04       ` Muchun Song
2021-02-05 12:20         ` Michal Hocko
2021-02-05 15:30           ` Muchun Song
2021-02-05 16:04             ` Michal Hocko
