From: Roman Gushchin <guro@fb.com>
To: Christopher Lameter <cl@linux.com>
Cc: Waiman Long <longman@redhat.com>,
Pekka Enberg <penberg@kernel.org>,
"David Rientjes" <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Michal Hocko <mhocko@kernel.org>,
"Johannes Weiner" <hannes@cmpxchg.org>,
Shakeel Butt <shakeelb@google.com>,
"Vladimir Davydov" <vdavydov.dev@gmail.com>
Subject: Re: [PATCH v2 1/2] mm, slab: Extend slab/shrink to shrink all memcg caches
Date: Thu, 18 Jul 2019 17:05:04 +0000
Message-ID: <20190718170458.GA6139@castle.dhcp.thefacebook.com>
In-Reply-To: <0100016c04e0192f-299df02d-a35f-46db-9833-37ba7a01f5f0-000000@email.amazonses.com>

On Thu, Jul 18, 2019 at 11:38:11AM +0000, Christopher Lameter wrote:
> On Wed, 17 Jul 2019, Waiman Long wrote:
>
> > Currently, a value of '1' is written to the
> > /sys/kernel/slab/<slab>/shrink file to shrink the slab by flushing
> > out all the per-cpu slabs and releasing the free slabs on the
> > partial lists. This can be useful to squeeze out a bit more memory
> > under extreme conditions, as well as to make the active object
> > counts in /proc/slabinfo more accurate.
>
> Acked-by: Christoph Lameter <cl@linux.com>
>
> > # grep task_struct /proc/slabinfo
> > task_struct 53137 53192 4288 61 4 : tunables 0 0 0 : slabdata 872 872 0
> > # grep "^S[lRU]" /proc/meminfo
> > Slab: 3936832 kB
> > SReclaimable: 399104 kB
> > SUnreclaim: 3537728 kB
> >
> > After shrinking slabs:
> >
> > # grep "^S[lRU]" /proc/meminfo
> > Slab: 1356288 kB
> > SReclaimable: 263296 kB
> > SUnreclaim: 1092992 kB
>
> Well, another indicator that it may not be a good decision to replicate
> the whole set of slabs for each memcg. Migrating the memcg ownership
> into the objects may allow the use of the same slab cache. In
> particular, together with the slab migration patches, this may be a
> viable way to reduce memory consumption.
>
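For reference, the interface under discussion can be exercised like this
(task_struct is just an example cache; a SLUB kernel and root access are
assumed):

  # echo 1 > /sys/kernel/slab/task_struct/shrink
  # grep "^S[lRU]" /proc/meminfo
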
Btw, I'm working on an alternative solution. It's way too early to present
anything, but preliminary results look promising: slab memory usage is
reduced by 10-40%, depending on the workload.
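
To illustrate the "ownership in the objects" idea quoted above, here is a
purely hypothetical sketch (all names are invented for illustration and
are not from any actual patches): instead of one cache copy per memcg,
each slab page could carry a vector of owner pointers, one per object,
so that a single cache can host objects charged to different memcgs.

  struct mem_cgroup;

  /* Hypothetical: per-slab-page vector of object owners. */
  struct slab_owners {
          struct mem_cgroup **owners;   /* one entry per object */
  };

  /* Hypothetical helper: record which memcg an object is charged to. */
  static inline void set_object_owner(struct slab_owners *so,
                                      unsigned int obj_idx,
                                      struct mem_cgroup *memcg)
  {
          so->owners[obj_idx] = memcg;
  }
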
Thanks!

Thread overview: 13+ messages
2019-07-17 20:24 [PATCH v2 0/2] mm, slab: Extend slab/shrink to shrink all memcg caches Waiman Long
2019-07-17 20:24 ` [PATCH v2 1/2] mm, slab: Extend slab/shrink to shrink all memcg caches Waiman Long
2019-07-18 11:38 ` Christopher Lameter
2019-07-18 17:05 ` Roman Gushchin [this message]
2019-07-19 6:20 ` Michal Hocko
2019-07-19 14:09 ` Waiman Long
2019-07-17 20:24 ` [PATCH v2 2/2] mm, slab: Show last shrink time in us when slab/shrink is read Waiman Long
2019-07-18 11:39 ` Christopher Lameter
2019-07-18 14:36 ` Waiman Long
2019-07-18 18:04 ` Waiman Long
2019-07-19 6:14 ` Michal Hocko
2019-07-19 14:07 ` Waiman Long
2019-07-19 14:29 ` Michal Hocko