From: Johannes Weiner <hannes@cmpxchg.org>
To: "T.J. Mercier" <tjmercier@google.com>
Cc: Michal Hocko <mhocko@kernel.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Shakeel Butt <shakeelb@google.com>,
	Muchun Song <muchun.song@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	Efly Young <yangyifei03@kuaishou.com>,
	android-mm@google.com, yuzhao@google.com,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: memcg: Use larger chunks for proactive reclaim
Date: Wed, 31 Jan 2024 15:12:56 -0500
Message-ID: <20240131201256.GD1227330@cmpxchg.org>
In-Reply-To: <CABdmKX2WRQyBpDaV0CuL4E0OdR9FEff5NAZ0hFv8W8U7e82=UA@mail.gmail.com>

On Wed, Jan 31, 2024 at 10:01:27AM -0800, T.J. Mercier wrote:
> On Wed, Jan 31, 2024 at 9:51 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Wed, Jan 31, 2024 at 04:24:41PM +0000, T.J. Mercier wrote:
> > > Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> > > reclaim") we passed the number of pages for the reclaim request directly
> > > to try_to_free_mem_cgroup_pages, which could lead to significant
> > > overreclaim in order to achieve fairness. After 0388536ac291 the number
> > > of pages was limited to a maximum of 32 (SWAP_CLUSTER_MAX) to reduce
> > > the amount of overreclaim. However, such a small chunk size caused a
> > > regression in reclaim performance due to many more reclaim start/stop
> > > cycles inside memory_reclaim.
> > >
> > > Instead of limiting reclaim chunk size to the SWAP_CLUSTER_MAX constant,
> > > adjust the chunk size proportionally with the number of pages left to
> > > reclaim. This allows for higher reclaim efficiency with large chunk
> > > sizes during the beginning of memory_reclaim, and reduces the amount of
> > > potential overreclaim by using small chunk sizes as the total reclaim
> > > amount is approached. Using 1/4 of the amount left to reclaim as the
> > > chunk size gives a good compromise between reclaim performance and
> > > overreclaim:
> > >
> > > root - full reclaim       pages/sec   time (sec)
> > > pre-0388536ac291      :    68047        10.46
> > > post-0388536ac291     :    13742        inf
> > > (reclaim-reclaimed)/4 :    67352        10.51
> > >
> > > /uid_0 - 1G reclaim       pages/sec   time (sec)  overreclaim (MiB)
> > > pre-0388536ac291      :    258822       1.12            107.8
> > > post-0388536ac291     :    105174       2.49            3.5
> > > (reclaim-reclaimed)/4 :    233396       1.12            -7.4
> > >
> > > /uid_0 - full reclaim     pages/sec   time (sec)
> > > pre-0388536ac291      :    72334        7.09
> > > post-0388536ac291     :    38105        14.45
> > > (reclaim-reclaimed)/4 :    72914        6.96
> > >
> > > Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> > > Signed-off-by: T.J. Mercier <tjmercier@google.com>
> > > ---
> > >  mm/memcontrol.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 46d8d02114cf..d68fb89eadd2 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -6977,7 +6977,8 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> > >                       lru_add_drain_all();
> > >
> > >               reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > > -                                     min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > > +                                     max((nr_to_reclaim - nr_reclaimed) / 4,
> > > +                                         (nr_to_reclaim - nr_reclaimed) % 4),
> >
> > I don't see why the % 4 is needed. It only kicks in when the delta
> > drops below 4, but try_to_free_mem_cgroup_pages() already has
> >
> >                 .nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
> >
> > so it looks like dead code.
> 
> That's right, it's only there for when the integer division reaches
> zero. I didn't want to assume anything about the implementation of
> try_to_free_mem_cgroup_pages, but I can just remove it entirely if
> you'd like.

What do others think?

We rely on the rounding up in a few other places and it's been doing
that for a decade. Maybe lampshade it for the benefit of the reader:

	/* Will converge on zero, but reclaim enforces a minimum */

but otherwise there is IMO no need to have defensive extra code.
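
For illustration only, here is a userspace sketch (not kernel code, and not
part of the patch) of how the proportional chunking converges when every
request happens to be fully satisfied; real reclaim gives no such guarantee.
The SWAP_CLUSTER_MAX floor below stands in for the
max(nr_pages, SWAP_CLUSTER_MAX) clamp inside try_to_free_mem_cgroup_pages():

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL

int main(void)
{
        /* 1G of 4K pages, matching the /uid_0 - 1G reclaim case above */
        unsigned long nr_to_reclaim = 262144;
        unsigned long nr_reclaimed = 0;
        unsigned long calls = 0;

        while (nr_reclaimed < nr_to_reclaim) {
                unsigned long delta = nr_to_reclaim - nr_reclaimed;
                /* Will converge on zero, but reclaim enforces a minimum */
                unsigned long chunk = delta / 4;

                if (chunk < SWAP_CLUSTER_MAX)
                        chunk = SWAP_CLUSTER_MAX;

                /* assume the whole request is reclaimed */
                nr_reclaimed += chunk;
                calls++;
        }

        printf("%lu calls, vs %lu with fixed %lu-page chunks\n",
               calls, nr_to_reclaim / SWAP_CLUSTER_MAX, SWAP_CLUSTER_MAX);
        return 0;
}

On this toy input that works out to a few dozen calls instead of 8192
fixed-size ones, with the requested chunk shrinking toward the 32-page floor
as the target is approached.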


Thread overview: 12+ messages
2024-01-31 16:24 T.J. Mercier
2024-01-31 17:50 ` Johannes Weiner
2024-01-31 18:01   ` T.J. Mercier
2024-01-31 20:12     ` Johannes Weiner [this message]
2024-02-01 13:57 ` Michal Koutný
2024-02-01 15:34   ` Johannes Weiner
2024-02-01 18:10     ` T.J. Mercier
2024-02-02  5:02     ` Efly Young
2024-02-02 10:15       ` Michal Koutný
2024-02-02 18:22         ` T.J. Mercier
2024-02-02 19:46           ` Michal Koutný
2024-02-02 21:42             ` T.J. Mercier
