From: Josef Bacik <josef@toxicpanda.com>
To: Rik van Riel <riel@redhat.com>
Cc: Josef Bacik <josef@toxicpanda.com>,
linux-mm@kvack.org, hannes@cmpxchg.org, kernel-team@fb.com
Subject: Re: [PATCH][RFC] mm: make kswapd try harder to keep active pages in cache
Date: Wed, 3 May 2017 14:38:15 -0400
Message-ID: <20170503183814.GA11572@destiny>
In-Reply-To: <1493835888.20270.4.camel@redhat.com>
On Wed, May 03, 2017 at 02:24:48PM -0400, Rik van Riel wrote:
> On Tue, 2017-05-02 at 17:27 -0400, Josef Bacik wrote:
>
> > +        /*
> > +         * If we don't have a lot of inactive or slab pages then there's no
> > +         * point in trying to free them exclusively, do the normal scan stuff.
> > +         */
> > +        if (nr_inactive < total_high_wmark && nr_slab < total_high_wmark)
> > +                sc->inactive_only = 0;
>
> This part looks good. Below this point, there is obviously no
> point in skipping the active list.
>
> > +        if (!global_reclaim(sc))
> > +                sc->inactive_only = 0;
>
> Why the different behaviour with and without cgroups?
>
> Have you tested both of these?
>
Huh, oops, I thought I deleted that; sorry, I'll kill that part.

> > +        /*
> > +         * We still want to slightly prefer slab over inactive, so if inactive
> > +         * is large enough just skip slab shrinking for now.  If we aren't able
> > +         * to reclaim enough exclusively from the inactive lists then we'll
> > +         * reset this on the first loop and dip into slab.
> > +         */
> > +        if (nr_inactive > total_high_wmark && nr_inactive > nr_slab)
> > +                skip_slab = true;
>
> I worry that this may be a little too aggressive,
> and result in the slab cache growing much larger
> than it should be on some systems.
>
> I wonder if it may make more sense to have the
> aggressiveness of slab scanning depend on the
> ratio of inactive to reclaimable slab pages, rather
> than having a hard cut-off like this?
>
So I originally had a thing that kept track of the rate of change of inactive
vs. slab between kswapd runs, but this approach worked fine, so I figured
simpler was better.  Keep in mind that we only skip slab on the first loop
through, so if we fail to free enough from the inactive list the first time
through, then we start evicting slab as well.  The idea is (and my testing
bore this out) that with the new size-ratio way of shrinking slab we would
sometimes be overzealous and evict slab that we were actively using, even
though we had reclaimed plenty of pages from our inactive list to satisfy
our sc->nr_to_reclaim.
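
To make the two-pass behaviour concrete, here's a boiled-down stand-alone
model (not the actual kswapd code; the shrink_* helpers are stubs I made up,
only the control flow matters):

/*
 * Stand-alone model of the flow above, not the real kswapd code.  The
 * shrink_* functions are stubs standing in for the actual reclaim
 * paths; pretend each pass reclaims a fixed batch of pages.
 */
#include <stdbool.h>

struct scan_control {
        unsigned long nr_to_reclaim;
        unsigned long nr_reclaimed;
};

static void shrink_inactive_lists(struct scan_control *sc)
{
        sc->nr_reclaimed += 32;     /* stub */
}

static void shrink_slab_caches(struct scan_control *sc)
{
        sc->nr_reclaimed += 32;     /* stub */
}

static void reclaim(struct scan_control *sc, unsigned long nr_inactive,
                    unsigned long nr_slab, unsigned long total_high_wmark)
{
        /* Prefer the inactive lists when they clearly dominate slab. */
        bool skip_slab = nr_inactive > total_high_wmark &&
                         nr_inactive > nr_slab;

        do {
                shrink_inactive_lists(sc);
                if (!skip_slab)
                        shrink_slab_caches(sc);
                /*
                 * If the inactive-only first pass didn't free enough,
                 * every later pass dips into slab as well.
                 */
                skip_slab = false;
        } while (sc->nr_reclaimed < sc->nr_to_reclaim);
}
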
I could probably change the ratio in the sc->inactive_only case to be based
on the slab-to-inactive ratio and see how that turns out; I'll get that
wired up and let you know how it goes.
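
Something along these lines is what I have in mind (completely untested,
and slab_scan_pressure() is just a placeholder name):

/*
 * Untested sketch of the ratio idea: scan slab harder as it outgrows
 * the inactive lists, instead of using a hard cut-off.  Fixed point
 * (<< 10) so we don't need floating point; the function name and the
 * shift are made up for illustration.
 */
static unsigned long slab_scan_pressure(unsigned long base_scan,
                                        unsigned long nr_slab,
                                        unsigned long nr_inactive)
{
        unsigned long ratio;

        if (!nr_inactive)
                return base_scan;

        /* ratio ~= nr_slab / nr_inactive in 10-bit fixed point. */
        ratio = (nr_slab << 10) / nr_inactive;

        /* Scale the baseline scan target by that ratio. */
        return (base_scan * ratio) >> 10;
}

Thanks,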
Josef