From: Michal Hocko <mhocko@kernel.org>
To: Buddy Lumpkin <buddy.lumpkin@oracle.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	hannes@cmpxchg.org, riel@surriel.com, mgorman@suse.de,
	Matthew Wilcox <willy@infradead.org>,
	akpm@linux-foundation.org
Subject: Re: [RFC PATCH 1/1] vmscan: Support multiple kswapd threads per node
Date: Thu, 12 Apr 2018 15:23:24 +0200	[thread overview]
Message-ID: <20180412132324.GG23400@dhcp22.suse.cz> (raw)
In-Reply-To: <502E8C16-DEA1-40A5-85CB-923E3ABE0B45@oracle.com>

On Tue 10-04-18 20:10:24, Buddy Lumpkin wrote:
[...]
> > Also please note that the direct reclaim is a way to throttle overly
> > aggressive memory consumers. The more we do in the background context
> > the easier for them it will be to allocate faster. So I am not really
> > sure that more background threads will solve the underlying problem.
> 
> A single kswapd thread used to keep up with all of the demand you could
> create on a Linux system quite easily provided it didn't have to scan a lot
> of pages that were ineligible for eviction.

Well, what do you mean by ineligible for eviction? Could you be more
specific? Are we talking about pages on the LRU lists, or about metadata
and shrinker based reclaim?
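
Just to make sure we mean the same thing by the latter: shrinker based
reclaim here refers to caches which register a shrinker and free their own
objects when vmscan asks them to. A minimal sketch of that interface (the
my_cache_* names are made-up placeholders, not anything in the tree):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/shrinker.h>

/* made-up helpers standing in for the cache's existing bookkeeping */
unsigned long my_cache_nr_objects(void);
unsigned long my_cache_trim(unsigned long nr);

static unsigned long my_cache_count(struct shrinker *sh,
				    struct shrink_control *sc)
{
	/* tell vmscan how many objects could be freed if it asked */
	return my_cache_nr_objects();
}

static unsigned long my_cache_scan(struct shrinker *sh,
				   struct shrink_control *sc)
{
	/* free up to sc->nr_to_scan objects, return the number freed */
	return my_cache_trim(sc->nr_to_scan);
}

static struct shrinker my_cache_shrinker = {
	.count_objects = my_cache_count,
	.scan_objects  = my_cache_scan,
	.seeks         = DEFAULT_SEEKS,
};

static int __init my_cache_init(void)
{
	return register_shrinker(&my_cache_shrinker);
}
module_init(my_cache_init);

That path is a different beast from scanning pages on the LRU lists, which
is why I am asking which of the two you are actually hitting.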

> 10 years ago, Fibre Channel was
> the popular high performance interconnect and if you were lucky enough
> to have the latest hardware rated at 10GFC, you could get 1.2GB/s per host
> bus adapter. Also, most high end storage solutions were still using spinning
> rust so it took an insane number of spindles behind each host bus adapter
> to saturate the channel if the access patterns were random. There really
> wasn't a reason to try to thread kswapd, and I am pretty sure there haven't
> been any attempts to do this in the last 10 years.

I do not really see your point. Yeah, you can get faster storage today.
So what? Pagecache has always been bound by RAM speed.

> > It is just a matter of memory hogs tuning to end up in the very same
> > situation AFAICS. Moreover the more they are going to allocate, the
> > less CPU time will _other_ (non-allocating) tasks get.
> 
> Please describe the scenario a bit more clearly. Once you start constructing
> the workload that can create this scenario, I think you will find that you end
> up with a mix that is rarely seen in practice.

What I meant is that the more you reclaim in the background, the more you
allow memory hogs to allocate, because they will not get throttled. All
that at the expense of other workloads which are not memory bound and
which lose the CPU cycles that the additional kswapd threads consume.
Think of a computation-intensive workload spread over most CPUs, running
alongside a memory-hungry data processing job.
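
To put the same argument in code form, the allocation slow path has
roughly the following shape. This is a hand-waved sketch, not the real
__alloc_pages_slowpath(); every *_sketch name is a placeholder:

#include <linux/gfp.h>

/* all *_sketch functions are placeholders, not real kernel symbols */
void wake_kswapds_sketch(unsigned int order);
struct page *try_freelist_sketch(gfp_t gfp_mask, unsigned int order);
void direct_reclaim_sketch(gfp_t gfp_mask, unsigned int order);

struct page *slowpath_sketch(gfp_t gfp_mask, unsigned int order)
{
	struct page *page;

	for (;;) {
		/* background reclaim: kick kswapd and retry cheaply */
		wake_kswapds_sketch(order);

		page = try_freelist_sketch(gfp_mask, order);
		if (page)
			return page;

		/*
		 * Direct reclaim: the allocating task spends its _own_
		 * CPU time freeing pages, so a memory hog is slowed down
		 * in proportion to how fast it allocates.  More kswapd
		 * threads move that cost into the background, onto CPUs
		 * that other tasks wanted to use.
		 */
		direct_reclaim_sketch(gfp_mask, order);
	}
}

The more of that work extra background threads absorb, the less often the
hog ever reaches the direct reclaim branch, and the cycles those threads
burn come from everybody else.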
-- 
Michal Hocko
SUSE Labs

Thread overview: 25+ messages
2018-04-02  9:24 [RFC PATCH 0/1] mm: " Buddy Lumpkin
2018-04-02  9:24 ` [RFC PATCH 1/1] vmscan: " Buddy Lumpkin
2018-04-03 13:31   ` Michal Hocko
2018-04-03 19:07     ` Matthew Wilcox
2018-04-03 20:49       ` Buddy Lumpkin
2018-04-03 21:12         ` Matthew Wilcox
2018-04-04 10:07           ` Buddy Lumpkin
2018-04-05  4:08           ` Buddy Lumpkin
2018-04-11  6:37           ` Buddy Lumpkin
2018-04-11  3:52       ` Buddy Lumpkin
2018-04-03 19:41     ` Buddy Lumpkin
2018-04-12 13:16       ` Michal Hocko
2018-04-17  3:02         ` Buddy Lumpkin
2018-04-17  9:03           ` Michal Hocko
2018-04-03 20:13     ` Buddy Lumpkin
2018-04-11  3:10     ` Buddy Lumpkin
2018-04-12 13:23       ` Michal Hocko [this message]
2020-09-30 19:27 Sebastiaan Meijer
2020-10-01 12:30 ` Michal Hocko
2020-10-01 16:18   ` Sebastiaan Meijer
2020-10-02  7:03     ` Michal Hocko
2020-10-02  8:40       ` Mel Gorman
2020-10-02 13:53       ` Rik van Riel
2020-10-02 14:00         ` Matthew Wilcox
2020-10-02 14:29         ` Michal Hocko
