From: Michal Hocko <mhocko@suse.com>
To: Muchun Song <songmuchun@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Shakeel Butt <shakeelb@google.com>, Roman Gushchin <guro@fb.com>,
Stephen Rothwell <sfr@canb.auug.org.au>,
alexander.h.duyck@linux.intel.com,
Chris Down <chris@chrisdown.name>,
Yafang Shao <laoar.shao@gmail.com>,
richard.weiyang@gmail.com, LKML <linux-kernel@vger.kernel.org>,
Cgroups <cgroups@vger.kernel.org>,
Linux Memory Management List <linux-mm@kvack.org>
Subject: Re: [External] Re: [PATCH] mm: memcontrol: optimize per-lruvec stats counter memory usage
Date: Mon, 7 Dec 2020 16:09:01 +0100 [thread overview]
Message-ID: <20201207150847.GM25569@dhcp22.suse.cz>
In-Reply-To: <CAMZfGtUhG26cTgbSmg4g+rwOtEFhgbE3QXPR_LUf3FS-s=YbOA@mail.gmail.com>
On Mon 07-12-20 20:56:58, Muchun Song wrote:
> On Mon, Dec 7, 2020 at 8:36 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Sun 06-12-20 16:56:39, Muchun Song wrote:
> > > The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so an s32 is wide
> > > enough for lruvec_stat_cpu. Introduce struct per_cpu_lruvec_stat to
> > > optimize memory usage.
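Concretely, the change described above amounts to something like the
following sketch (the long-based layout matches struct lruvec_stat of
that era; the s32 variant is illustrative, not the exact patch):

	/* Existing per-cpu counters: one long per node stat item. */
	struct lruvec_stat {
		long count[NR_VM_NODE_STAT_ITEMS];
	};

	/*
	 * Per-cpu deltas are flushed once they cross MEMCG_CHARGE_BATCH
	 * (32), so they stay small; s32 is wide enough and halves the
	 * size of the array.
	 */
	struct per_cpu_lruvec_stat {
		s32 count[NR_VM_NODE_STAT_ITEMS];
	};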
> >
> > How much savings are we talking about here? I am not deeply familiar
> > with the pcp allocator, but can it pack smaller data types that much
> > more efficiently?
>
> It is a percpu struct. The size of struct lruvec_stat is 304 bytes
> (tested on Linux 5.5), so we can save 304 / 2 * nproc bytes per memcg,
> where nproc is the number of possible CPUs. With n memory cgroups in
> the system, that adds up to (152 * nproc * n) bytes. In some
> configurations nproc may be 512, and with many dying cgroups n can
> reach 100,000 (I once saw that on my server).
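To put those figures together (illustrative arithmetic only; the
38-counter count follows from 304 bytes of 8-byte longs):

	sizeof(struct lruvec_stat)         = 304 bytes  (38 x long)
	sizeof(struct per_cpu_lruvec_stat) = 152 bytes  (38 x s32)

	savings = 152 * nproc * n bytes
	        = 152 * 512 * 100,000 bytes
	        ~ 7.25 GiB in the worst case described above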
This should be part of the changelog. In general, any optimization
should come with numbers showing its effect. As I've said, I am not
really familiar with pcp internals and how efficiently it can organize
smaller objects. Maybe it really can halve the memory consumption.

My only concern is that using smaller types for these counters can
backfire later on, because we have an indirect dependency between the
batch size and the data type. In general I do not object to the patch
as long as the savings are non-trivial, so that we are not creating a
potential trap for what is practically a minuscule micro-optimization.
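The indirect dependency in question is the per-cpu flush threshold.
Roughly, paraphrasing the update path in mm/memcontrol.c of that era
(a sketch, not the exact code):

	/*
	 * The per-cpu delta is folded into the shared atomic counter
	 * once it exceeds MEMCG_CHARGE_BATCH, so its magnitude stays
	 * near the batch size plus one update, which is why s32 is
	 * enough today. If the batch were ever made much larger, the
	 * counter type would have to grow with it.
	 */
	x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]);
	if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
		atomic_long_add(x, &pn->lruvec_stat[idx]);
		x = 0;
	}
	__this_cpu_write(pn->lruvec_stat_cpu->count[idx], x);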
--
Michal Hocko
SUSE Labs
Thread overview: 5+ messages
2020-12-06 8:56 Muchun Song
2020-12-07 12:36 ` Michal Hocko
2020-12-07 12:56 ` [External] " Muchun Song
2020-12-07 15:09 ` Michal Hocko [this message]
2020-12-07 15:19 ` Muchun Song