From: Spock <dairinin@gmail.com>
To: guro@fb.com
Cc: mhocko@kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Kernel-team@fb.com,
	riel@surriel.com, rdunlap@infradead.org,
	akpm@linux-foundation.org
Subject: Re: [RFC PATCH] mm: don't reclaim inodes with many attached pages
Date: Fri, 26 Oct 2018 20:00:07 +0300	[thread overview]
Message-ID: <CADa=ObqajeQkJA6cR_LXDLT8hrZcFY7kHFxSTFuX=Fg8GkQv1w@mail.gmail.com> (raw)
In-Reply-To: <20181026155652.GA7647@tower.DHCP.thefacebook.com>

On Fri, Oct 26, 2018 at 18:57, Roman Gushchin <guro@fb.com> wrote:
>
> On Fri, Oct 26, 2018 at 10:57:35AM +0200, Michal Hocko wrote:
> > Spock doesn't seem to be cced here - fixed now
> >
> > On Tue 23-10-18 16:43:29, Roman Gushchin wrote:
> > > Spock reported that commit 172b06c32b94 ("mm: slowly shrink slabs
> > > with a relatively small number of objects") leads to a regression on
> > > his setup: periodically, the majority of the pagecache is evicted
> > > for no obvious reason, while before the change the amount of free
> > > memory hovered around the watermark.
> > >
> > > The reason is that the change mentioned above creates some minimal
> > > background pressure on the inode cache. The problem is that once an
> > > inode is selected for reclaim, all of its attached pagecache pages
> > > are stripped, no matter how many there are. So, if a huge
> > > multi-gigabyte file is cached in memory, and the goal is to reclaim
> > > only a few slab objects (unused inodes), we can still end up
> > > evicting all those gigabytes of pagecache at once.
> > >
> > > The workload described by Spock has a few large non-mapped files in
> > > the pagecache, so the effect is especially noticeable.
> > >
> > > To solve the problem, let's postpone the reclaim of inodes that have
> > > more than one attached page. Let's wait until the pagecache pages
> > > have been evicted naturally by scanning of the corresponding LRU
> > > lists, and only then reclaim the inode structure.
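To make that concrete: the patch effectively adds one condition to the
rotate-vs-reclaim decision in the inode LRU shrinker's isolate callback.
Below is a minimal userspace sketch of that decision; the names here
(fake_inode, isolate_decision, the referenced/nrpages fields) are
illustrative stand-ins for the kernel's structures, not the kernel code
itself:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's list_lru isolate verdicts. */
enum lru_status { LRU_REMOVED, LRU_ROTATE };

struct fake_inode {
	bool referenced;        /* stands in for inode->i_state & I_REFERENCED */
	unsigned long nrpages;  /* stands in for inode->i_data.nrpages */
};

/*
 * The decision described above: an unused inode that still has more
 * than one pagecache page attached is rotated back onto the LRU
 * instead of being freed, so its pages age out through the page LRU
 * lists first.
 */
static enum lru_status isolate_decision(const struct fake_inode *inode)
{
	if (inode->referenced || inode->nrpages > 1)
		return LRU_ROTATE;	/* give it one more pass */
	return LRU_REMOVED;		/* safe to reclaim the inode now */
}

int main(void)
{
	/* ~4 GiB of cached 4 KiB pages belonging to a single file */
	struct fake_inode big_file = { .referenced = false, .nrpages = 1048576 };
	struct fake_inode cold_inode = { .referenced = false, .nrpages = 0 };

	printf("big cached file: %s\n",
	       isolate_decision(&big_file) == LRU_ROTATE ? "rotate" : "reclaim");
	printf("pageless inode:  %s\n",
	       isolate_decision(&cold_inode) == LRU_ROTATE ? "rotate" : "reclaim");
	return 0;
}

With the nrpages check in place, the inode of a large cached file keeps
getting rotated until the page LRU scans have shrunk its pagecache, so a
request to free a handful of slab objects can no longer drop gigabytes
of cache as a side effect.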
> >
> > Has this actually fixed/worked around the issue?
>
> Spock wrote this earlier to me directly. I believe I can quote it here:
>
> "Patch applied, looks good so far. System behaves like it was with
> pre-4.18.15 kernels.
> Also tried to add some user-level tests to the geneic background activity, like
> - stat'ing a bunch of files
> - streamed read several large files at once on ext4 and XFS
> - random reads on the whole collection with a read size of 16K
>
> I will be monitoring while fragmentation stacks up and report back if
> something bad happens."
>
> Spock, please let me know if you have any new results.
>
> Thanks!

Hello,

I'd say the patch has fixed the problem, at least for my workload. Here
is the current /proc/meminfo:

MemTotal:        8164968 kB
MemFree:          135852 kB
MemAvailable:    6406088 kB
Buffers:           11988 kB
Cached:          6414124 kB
SwapCached:            0 kB
Active:          1491952 kB
Inactive:        5989576 kB
Active(anon):     542512 kB
Inactive(anon):   523780 kB
Active(file):     949440 kB
Inactive(file):  5465796 kB
Unevictable:        8872 kB
Mlocked:            8872 kB
SwapTotal:       4194300 kB
SwapFree:        4194300 kB
Dirty:               128 kB
Writeback:             0 kB
AnonPages:       1064232 kB
Mapped:            32348 kB
Shmem:              3952 kB
Slab:             205108 kB
SReclaimable:     148792 kB
SUnreclaim:        56316 kB
KernelStack:        3984 kB
PageTables:        11100 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     8276784 kB
Committed_AS:    1944792 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
AnonHugePages:      6144 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      271872 kB
DirectMap2M:     8116224 kB
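
For completeness, the "random reads" part of the background load quoted
above was essentially the following. This is a minimal, hypothetical
version of the reader (the actual harness and file set in my setup
differ); it takes a file path and an iteration count on the command
line:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define READ_SIZE 16384	/* 16K reads, matching the test above */

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <file> <iterations>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	off_t size = lseek(fd, 0, SEEK_END);
	if (size < READ_SIZE) {
		fprintf(stderr, "file too small\n");
		return 1;
	}

	long nblocks = size / READ_SIZE;
	long iters = atol(argv[2]);
	static char buf[READ_SIZE];

	srand(42);	/* fixed seed keeps runs comparable */
	for (long i = 0; i < iters; i++) {
		/* pick a random 16K-aligned offset and read one block */
		off_t off = (off_t)(rand() % nblocks) * READ_SIZE;
		if (pread(fd, buf, READ_SIZE, off) < 0) {
			perror("pread");
			return 1;
		}
	}

	close(fd);
	return 0;
}

Pointed at the large files on the ext4 and XFS volumes, this keeps a
wide spread of pagecache pages warm while the kernel's background slab
pressure runs.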

