From: Johannes Weiner <hannes@cmpxchg.org>
To: Shakeel Butt <shakeelb@google.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	josef@toxicpanda.com, Jan Kara <jack@suse.cz>,
	Hugh Dickins <hughd@google.com>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	Michal Hocko <mhocko@suse.com>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Roman Gushchin <guro@fb.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Linux MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH RFC 0/3] mm: Reduce IO by improving algorithm of memcg pagecache pages eviction
Date: Wed, 9 Jan 2019 14:20:22 -0500	[thread overview]
Message-ID: <20190109192022.GA16027@cmpxchg.org> (raw)
In-Reply-To: <CALvZod6P12gUq-xTZ1V4ZBeFXGE6dGAfA5uiw6iN1w14eP9j2Q@mail.gmail.com>

On Wed, Jan 09, 2019 at 09:44:28AM -0800, Shakeel Butt wrote:
> Hi Johannes,
> 
> On Wed, Jan 9, 2019 at 8:45 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Wed, Jan 09, 2019 at 03:20:18PM +0300, Kirill Tkhai wrote:
> > > On nodes without memory overcommit, it is a common situation
> > > that a memcg exceeds its limit and its pagecache pages get
> > > reclaimed, even though the node has plenty of free memory.
> > > Further access to those pages then requires real device IO,
> > > which adds latency, worsens power usage, hurts throughput
> > > for other users of the device, etc.
> > >
> > > Cleancache is not a good solution for this problem, since
> > > it implies copying the page on every cleancache_put_page()
> > > and cleancache_get_page(). It also requires internal
> > > per-cleancache_ops data structures to manage cached pages
> > > and their inode relationships, which again introduces
> > > overhead.
> > >
> > > This patchset takes another approach. It introduces
> > > a new scheme for evicting memcg pages:
> > >
> > >   1) __remove_mapping() uncharges the unmapped page from its
> > >      memcg and leaves the page in the pagecache on memcg reclaim;
> > >
> > >   2) putback_lru_page() places the page on the root_mem_cgroup
> > >      list, since its memcg is NULL. The page may be evicted
> > >      on global reclaim (and this is easy, since the page is
> > >      not mapped, so the shrinker will reclaim it with 100%
> > >      probability of success);
> > >
> > >   3) pagecache_get_page() charges the page to the memcg of
> > >      the first task that takes it.
> > >
> > > Below is a small test that shows the benefit of the patchset.
> > >
> > > Create a memcg with a 20M limit (exact value does not matter much):
> > >   $ mkdir /sys/fs/cgroup/memory/ct
> > >   $ echo 20M > /sys/fs/cgroup/memory/ct/memory.limit_in_bytes
> > >   $ echo $$ > /sys/fs/cgroup/memory/ct/tasks
> > >
> > > Then read a 1GB file twice:
> > >   $ time cat file_1gb > /dev/null
> > >
> > > Before (2 iterations):
> > >   1)0.01user 0.82system 0:11.16elapsed 7%CPU
> > >   2)0.01user 0.91system 0:11.16elapsed 8%CPU
> > >
> > > After (2 iterations):
> > >   1)0.01user 0.57system 0:11.31elapsed 5%CPU
> > >   2)0.00user 0.28system 0:00.28elapsed 100%CPU
> > >
> > > With the patch set applied, the file pages stay cached for
> > > the second read, so it completes about 39 times faster.
> > >
> > > This may be useful for slow disks, NFS, nodes without
> > > memory overcommit, cases where two memcgs access the same
> > > files, etc.
> >
> > What you're implementing is work conservation: avoid causing IO work,
> > unless it's physically necessary, not when the memcg limit says so.
> >
> > This is a great idea, but we already have that in the form of the
> > memory.low setting (or softlimit in cgroup v1).
> >
> > Say you have a 100M system and two cgroups. Instead of setting the 20M
> > limit on group A as you did, you set 80M memory.low on group B. If B
> > is not using its share and there is no physical memory pressure, group
> > A can consume as much memory as it wants. If B starts and consumes its
> > 80M, A will get pushed back to 20M. (And when B grows beyond 80M, they
> > compete fairly over the remaining 20M, just like they would if A had
> > the 20M limit setting).
> 
> There is one difference between the example you give and the proposal.
> In your example, when B starts, consumes its 80M and pushes A back
> to 20M, the direct reclaim can be very expensive and
> non-deterministic. In the proposal, B's direct reclaim will be
> very fast and deterministic (assuming no overcommit on hard limits),
> as it will always first reclaim the unmapped clean pages that were
> charged to A.

That struck me more as a side-effect of the implementation having to
unmap the pages to be able to change their page->mem_cgroup.

But regardless, we cannot fundamentally change the memory isolation
semantics of the hard limit like these patches propose, so it's a moot
point. A scheme to prepare likely reclaim candidates in advance for a
low-latency workload startup would have to come in a different form.
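
For reference, here is a minimal sketch of the work-conserving setup
described above, using the cgroup v2 memory.low interface (cgroup v1
offers the analogous memory.soft_limit_in_bytes). The mount point,
cgroup names and sizes are illustrative and assume a ~100M machine
with cgroup v2 mounted at /sys/fs/cgroup:

  # Enable the memory controller for child groups, if not already done.
  $ echo +memory > /sys/fs/cgroup/cgroup.subtree_control
  $ mkdir /sys/fs/cgroup/A /sys/fs/cgroup/B

  # Instead of hard-limiting A to 20M, protect 80M for B.
  $ echo 80M > /sys/fs/cgroup/B/memory.low

  # While B stays below its protection, A can use the idle memory
  # freely; once B claims its 80M, reclaim pushes A back toward 20M.

With such a configuration no IO is forced on A as long as the memory
is otherwise unused, which is the work conservation described above.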

Thread overview: 23+ messages
2019-01-09 12:20 Kirill Tkhai
2019-01-09 12:20 ` [PATCH 1/3] mm: Uncharge and keep page in pagecache on memcg reclaim Kirill Tkhai
2019-01-09 12:20 ` [PATCH 2/3] mm: Recharge page memcg on first get from pagecache Kirill Tkhai
2019-01-09 12:20 ` [PATCH 3/3] mm: Pass FGP_NOWAIT in generic_file_buffered_read and enable ext4 Kirill Tkhai
2019-01-09 14:11 ` [PATCH RFC 0/3] mm: Reduce IO by improving algorithm of memcg pagecache pages eviction Michal Hocko
2019-01-09 15:43   ` Kirill Tkhai
2019-01-09 17:10     ` Michal Hocko
2019-01-10  9:42       ` Kirill Tkhai
2019-01-10  9:57         ` Michal Hocko
2019-01-09 15:49 ` Josef Bacik
2019-01-09 16:08   ` Kirill Tkhai
2019-01-09 16:33     ` Josef Bacik
2019-01-10 10:06       ` Kirill Tkhai
2019-01-09 16:45 ` Johannes Weiner
2019-01-09 17:44   ` Shakeel Butt
2019-01-09 19:20     ` Johannes Weiner [this message]
2019-01-09 17:37 ` Shakeel Butt
2019-01-10  9:46   ` Kirill Tkhai
2019-01-10 19:19     ` Shakeel Butt
2019-01-11 12:17       ` Kirill Tkhai
