From: Sultan Alsawaf <sultan@kerneltoast.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>, Dave Hansen <dave.hansen@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [PATCH] mm: Stop kswapd early when nothing's waiting for it to free pages
Date: Tue, 25 Feb 2020 09:12:42 -0800
Message-ID: <20200225171242.GA496421@sultan-box.localdomain>
In-Reply-To: <20200225090945.GJ22443@dhcp22.suse.cz>

On Tue, Feb 25, 2020 at 10:09:45AM +0100, Michal Hocko wrote:
> On Fri 21-02-20 13:08:24, Sultan Alsawaf wrote:
> [...]
> > Both of these logs are attached in a tarball.
> 
> Thanks! First of all
> $ grep pswp vmstat.1582318979
> pswpin 0
> pswpout 0
> 
> suggests that you do not have any swap storage, right?

Correct. I'm not using any swap (and swap shouldn't be necessary for the Linux
mm to work, of course). If I were to divide my RAM in half and use one half as
swap, do you think the results would be different? IMO they shouldn't be.

> The amount of anonymous memory is not really high (~560MB) but file LRU
> is _really_ low (~3MB), unevictable list is at ~200MB. That gets us to
> ~760M of memory which is 74% of the memory. Please note that your mem=2G
> setup gives you only 1G of memory in fact (based on the zone_info you
> have posted). That is not something unusual but the amount of the page
> cache is worrying because I would expect heavy thrashing because most
> of the executables are going to require major faults. Anonymous memory
> is not swapped out obviously so there is no other option than to refault
> constantly.
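
(To spell out the arithmetic there: ~560M anon + ~3M file + ~200M unevictable
comes to roughly 763M, which is about 74% of the ~1G actually available under
mem=2G here.)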

I noticed that only 1G was available as well. Perhaps direct reclaim wasn't
attempted due to the zone_reclaimable_pages() check, though I don't think direct
reclaim would've been particularly helpful in this case (see below).
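
For reference, zone_reclaimable_pages() is roughly the following (paraphrased
from memory of mm/vmscan.c, so the details may be slightly off). Note that with
no swap configured, the anon LRUs don't count as reclaimable at all, which is
exactly my situation here:

/* Paraphrased sketch of mm/vmscan.c, not a verbatim copy: */
unsigned long zone_reclaimable_pages(struct zone *zone)
{
	unsigned long nr;

	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
	     zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
	/* Anonymous pages only count as reclaimable if there is swap to go to. */
	if (get_nr_swap_pages() > 0)
		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
		      zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);

	return nr;
}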

> kswapd has some feedback mechanism to back off when the zone is hopeless
> from the reclaim point of view AFAIR but it seems it has failed in this
> particular situation. It should have relied on the direct reclaim and
> eventually trigger the OOM killer. Your patch has worked around this by
> bailing out from the kswapd reclaim too early so a part of the page
> cache required for the code to move on would stay resident and move
> further.
> 
> The proper fix should, however, check the amount of reclaimable pages
> and back off if they cannot meet the target IMO. We cannot rely on the
> general reclaimability here because that could really be thrashing.

Yes, my guess was that thrashing out pages used by the running programs was the
cause of my freezes, but I didn't think of making kswapd back off in a
different way.

Right now I don't see any such back-off mechanism in kswapd. Also, if we add
this to kswapd, we would need to plug it into the direct reclaim path as well,
no? I don't think direct reclaim would help with the situation I've run into;
although it wouldn't be as bad as letting kswapd evict pages up to the high
watermark, it would still cause page thrashing, just capped at the number of
pages a direct reclaimer is looking to steal.
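
To make that concrete, here's a rough sketch of the kind of back-off check I
imagine (the function name and placement are made up, loosely modelled on
mm/vmscan.c; this is not the proposed patch):

/*
 * Illustrative only: give up if reclaiming every evictable page on the
 * node still could not bring it up to its high watermarks.
 */
static bool kswapd_can_meet_target(pg_data_t *pgdat, int highest_zoneidx)
{
	unsigned long reclaimable = 0, deficit = 0;
	int i;

	for (i = 0; i <= highest_zoneidx; i++) {
		struct zone *zone = pgdat->node_zones + i;
		unsigned long free;

		if (!managed_zone(zone))
			continue;

		reclaimable += zone_reclaimable_pages(zone);
		free = zone_page_state(zone, NR_FREE_PAGES);
		if (free < high_wmark_pages(zone))
			deficit += high_wmark_pages(zone) - free;
	}

	return reclaimable >= deficit;
}

Something equivalent would presumably be needed on the direct reclaim side too,
or the same thrashing just moves there.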

Considering that my patch remedies this issue for me without invoking the OOM
killer, a proper solution should produce the same or better results. I don't
think the OOM killer should have been triggered in this case.

Sultan

