From: Michal Hocko <mhocko@kernel.org>
To: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
l.roehrs@profihost.ag, cgroups@vger.kernel.org,
Johannes Weiner <hannes@cmpxchg.org>,
Vlastimil Babka <vbabka@suse.cz>
Subject: Re: lot of MemAvailable but falling cache and raising PSI
Date: Mon, 9 Sep 2019 13:01:36 +0200 [thread overview]
Message-ID: <20190909110136.GG27159@dhcp22.suse.cz> (raw)
In-Reply-To: <1d9ee19a-98c9-cd78-1e5b-21d9d6e36792@profihost.ag>
[Cc Vlastimil - logs are http://lkml.kernel.org/r/1d9ee19a-98c9-cd78-1e5b-21d9d6e36792@profihost.ag]
On Mon 09-09-19 10:54:21, Stefan Priebe - Profihost AG wrote:
> Hello Michal,
>
> Am 09.09.19 um 10:27 schrieb Michal Hocko:
> > On Fri 06-09-19 12:08:31, Stefan Priebe - Profihost AG wrote:
> >> These are the biggest differences in meminfo before and after cached
> >> starts to drop. I didn't expect cached end up in MemFree.
> >>
> >> Before:
> >> MemTotal: 16423116 kB
> >> MemFree: 374572 kB
> >> MemAvailable: 5633816 kB
> >> Cached: 5550972 kB
> >> Inactive: 4696580 kB
> >> Inactive(file): 3624776 kB
> >>
> >>
> >> After:
> >> MemTotal: 16423116 kB
> >> MemFree: 3477168 kB
> >> MemAvailable: 6066916 kB
> >> Cached: 2724504 kB
> >> Inactive: 1854740 kB
> >> Inactive(file): 950680 kB
> >>
> >> Any explanation?
> >
> > Do you have more snapshots of /proc/vmstat as suggested by Vlastimil and
> > me earlier in this thread? Seeing the overall progress would tell us
> > much more than before and after. Or have I missed this data?
>
> I needed to wait until today to grab such a situation again, but from
> what I know it is very clear that MemFree is low and then the kernel
> starts to drop the caches.
>
> Attached you'll find two log files.
$ grep pgsteal_kswapd vmstat | uniq -c
1331 pgsteal_kswapd 37142300
$ grep pgscan_kswapd vmstat | uniq -c
1331 pgscan_kswapd 37285092
kswapd hasn't scanned nor reclaimed any memory throughout the whole
collected time span. On the other hand, we can see quite some direct
reclaim activity:
$ awk '/pgsteal_direct/ {val=$2+0; ln++; if (last && val-last > 0) {printf("%d %d\n", ln, val-last)} last=val}' vmstat | head
17 1058
18 9773
19 1036
24 11413
49 1055
50 1050
51 17938
52 22665
53 29400
54 5997
So there is a steady source of direct reclaim, which is quite
unexpected considering the background reclaim is inactive. Or maybe
kswapd is blocked and unable to make forward progress.
780513 pages have been reclaimed, which is roughly 3G worth of memory
and matches the drop you are seeing AFAICS.
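For reference, that figure can be rechecked straight from the log. A
minimal sketch, assuming the concatenated /proc/vmstat snapshots sit in
a file named "vmstat" and the system uses 4 kB pages:

```shell
# Sum all positive per-snapshot deltas of pgsteal_direct and convert
# pages to GiB (pages * 4096 bytes / 2^30). The file name "vmstat" and
# the 4 kB page size are assumptions, not taken from the report.
cat vmstat 2>/dev/null | awk '
    /^pgsteal_direct / {
        val = $2 + 0
        if (last != "" && val - last > 0)
            total += val - last
        last = val
    }
    END { printf("%d pages reclaimed = %.2f GiB\n", total, total * 4096 / 1073741824) }'
```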
$ grep allocstall_dma32 vmstat | uniq -c
1331 allocstall_dma32 0
$ grep allocstall_normal vmstat | uniq -c
1331 allocstall_normal 39
no direct reclaim invoked for the DMA32 and Normal zones during the
collected span. But the Movable zone seems to be the source of the
direct reclaim:
$ awk '/allocstall_movable/ {val=$2+0; ln++; if (last && val-last > 0) {printf("%d %d\n", ln, val-last)} last=val}' vmstat | head
17 1
18 9
19 1
24 10
49 1
50 1
51 17
52 20
53 28
54 5
and that matches the moments when we reclaimed memory. There seems to
be a steady flow of THP allocations, so maybe that is the source of the
direct reclaim?
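That theory could be checked with the same delta trick. A sketch,
assuming the concatenated "vmstat" log also carries the thp_fault_alloc
counter (THP allocations at page-fault time):

```shell
# Print per-snapshot deltas of thp_fault_alloc, mirroring the
# pgsteal_direct awk above, to see whether THP faults keep arriving at
# the moments direct reclaim fires. The log file name is an assumption.
cat vmstat 2>/dev/null | awk '
    /^thp_fault_alloc / {
        val = $2 + 0
        ln++
        if (last && val - last > 0)
            printf("%d %d\n", ln, val - last)
        last = val
    }'
```

Snapshot indices that line up with the allocstall_movable bumps would
point at THP faults as the trigger.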
--
Michal Hocko
SUSE Labs