From: Vladimir Davydov <vdavydov@parallels.com>
To: Tejun Heo <tj@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/2] Fix memcg/memory.high in case kmem accounting is enabled
Date: Mon, 31 Aug 2015 22:26:12 +0300
Message-ID: <20150831192612.GE15420@esperanza>
In-Reply-To: <20150831170309.GF2271@mtj.duckdns.org>

On Mon, Aug 31, 2015 at 01:03:09PM -0400, Tejun Heo wrote:
> On Mon, Aug 31, 2015 at 07:51:32PM +0300, Vladimir Davydov wrote:
> ...
> > If we want to allow slab/slub implementation to invoke try_charge
> > wherever it wants, we need to introduce an asynchronous thread doing
> > reclaim when a memcg is approaching its limit (or teach kswapd do that).
>
> In the long term, I think this is the way to go.
Quite probably. Alternatively, we could use task_work or do direct
reclaim. It's not obvious to me yet which approach is best.
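For illustration, here is a rough sketch of what the asynchronous
variant could look like. Everything except try_to_free_mem_cgroup_pages(),
page_counter_read() and schedule_work() is hypothetical -- in particular,
there is no high_work member in struct mem_cgroup today:

static void memcg_high_work_fn(struct work_struct *work)
{
	struct mem_cgroup *memcg = container_of(work, struct mem_cgroup,
						high_work);

	/* Reclaim until usage drops back under the high boundary. */
	while (page_counter_read(&memcg->memory) > memcg->high)
		if (!try_to_free_mem_cgroup_pages(memcg, SWAP_CLUSTER_MAX,
						  GFP_KERNEL, true))
			break;
}

/* The charge path would then only need to kick the worker: */
static inline void memcg_check_high(struct mem_cgroup *memcg)
{
	if (page_counter_read(&memcg->memory) > memcg->high)
		schedule_work(&memcg->high_work);
}

The task_work variant would look much the same, except the reclaim
would be queued on the charging task and run on its way back to
userspace.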
>
> > That's a way to go, but what's the point to complicate things
> > prematurely while it seems we can fix the problem by using the technique
> > similar to the one behind memory.high?
>
> Cuz we're now scattering workarounds to multiple places and I'm sure
> we'll add more try_charge() users (e.g. we want to fold in tcp memcg
> under the same knobs) and we'll have to worry about the same problem
> all over again and will inevitably miss some cases leading to subtle
> failures.
I don't think we will need to insert try_charge_kmem anywhere else,
because all kmem users allocate memory either with kmalloc and friends
or with alloc_pages. kmalloc is already accounted. For those who prefer
alloc_pages, there is the alloc_kmem_pages helper.
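For example, with the helpers we have today an accounted kernel-side
allocation takes one of these two paths (the sizes and the wrapper
function here are made up for illustration):

static int example_alloc(void)
{
	void *buf;
	struct page *page;

	/* Path 1: kmalloc and friends are charged to the current
	 * task's memcg when kmem accounting is enabled. */
	buf = kmalloc(4096, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	kfree(buf);

	/* Path 2: raw page allocations that need accounting must use
	 * alloc_kmem_pages instead of plain alloc_pages. */
	page = alloc_kmem_pages(GFP_KERNEL, 2);	/* order-2, i.e. 16K */
	if (!page)
		return -ENOMEM;
	__free_kmem_pages(page, 2);
	return 0;
}

So as long as these two entry points charge correctly, new callers get
accounting for free.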
>
> > Nevertheless, even if we introduced such a thread, it'd be just insane
> > to allow slab/slub blindly insert try_charge. Let me repeat the examples
> > of SLAB/SLUB sub-optimal behavior caused by thoughtless usage of
> > try_charge I gave above:
> >
> > - memcg knows nothing about NUMA nodes, so what's the point in failing
> > !__GFP_WAIT allocations used by SLAB while inspecting NUMA nodes?
> > - memcg knows nothing about high order pages, so what's the point in
> > failing !__GFP_WAIT allocations used by SLUB to try to allocate a
> > high order page?
>
> Both are optimistic speculative actions and as long as memcg can
> guarantee that those requests will succeed under normal circumstances,
> as the system-wide mm does, it isn't a problem.
>
> In general, we want to make sure inside-cgroup behaviors as close to
> system-wide behaviors as possible, scoped but equivalent in kind.
> Doing things differently, while inevitable in certain cases, is likely
> to get messy in the long term.
I totally agree that we should strive to make a kmem user feel roughly
the same inside a memcg as if it were running on a host with an equal
amount of RAM. There are two ways to achieve that:
1. Make the API functions, i.e. kmalloc and friends, behave inside
memcg roughly the same way as they do in the root cgroup.
2. Make the internal memcg functions, i.e. try_charge and friends,
behave roughly the same way as alloc_pages.
I find way 1 more flexible, because we don't have to blindly follow the
heuristics used in global memory reclaim and therefore have more
opportunities to achieve the same goal.
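To give a purely hypothetical sketch of what way 1 could look like on
the slab charge path -- none of these helpers exists, it only
illustrates the idea of not blindly mirroring alloc_pages:

static int memcg_charge_slab_page(struct mem_cgroup *memcg,
				  gfp_t gfp, unsigned int order)
{
	unsigned int nr_pages = 1 << order;

	if (!(gfp & __GFP_WAIT)) {
		/*
		 * Never fail an atomic charge because of the memcg
		 * limit: let usage temporarily exceed it and make up
		 * for it with deferred reclaim, the way the
		 * memory.high machinery does.
		 */
		page_counter_charge(&memcg->kmem, nr_pages);
		schedule_deferred_reclaim(memcg);	/* hypothetical */
		return 0;
	}

	/* Sleepable context: reclaim synchronously as usual. */
	return try_charge(memcg, gfp, nr_pages);
}

That way SLAB's optimistic node-local attempts and SLUB's high-order
attempts would just succeed, without the slab API ever having to know
about memcg internals.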
Thanks,
Vladimir