From: Johannes Weiner <hannes@cmpxchg.org>
To: Dave Hansen <dave@sr71.net>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>,
"Kleen, Andi" <andi.kleen@intel.com>,
ksummit-discuss@lists.linuxfoundation.org
Subject: Re: [Ksummit-discuss] [TECH TOPIC] Memory thrashing, was Re: Self nomination
Date: Mon, 1 Aug 2016 14:19:24 -0400
Message-ID: <20160801181924.GA9408@cmpxchg.org>
In-Reply-To: <20160801170846.GA8584@cmpxchg.org>
On Mon, Aug 01, 2016 at 01:08:46PM -0400, Johannes Weiner wrote:
> On Mon, Aug 01, 2016 at 09:11:32AM -0700, Dave Hansen wrote:
> > On 08/01/2016 09:06 AM, James Bottomley wrote:
> > > > With persistent memory devices you might actually run out of CPU
> > > > capacity while performing basic page aging before you saturate the
> > > > storage device (which is why Andi Kleen has been suggesting to
> > > > replace LRU reclaim with random replacement for these devices). So
> > > > storage device saturation might not be the final answer to this
> > > > problem.
> > > We really wouldn't want this. All cloud jobs seem to have memory they
> > > allocate but rarely use, so we want the properties of the LRU list to
> > > get this on swap so we can re-use the memory pages for something else.
> > > A random replacement algorithm would play havoc with that.
> >
> > I don't want to put words in Andi's mouth, but what we want isn't
> > necessarily something that is random, but it's something that uses less
> > CPU to swap out a given page.
>
> Random eviction doesn't mean random outcome of what stabilizes in
> memory and swap. The idea is to apply pressure on all pages equally
> but in no particular order, and then the in-memory set forms based on
> reference frequencies and refaults/swapins.
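
To make that concrete, here is a rough userspace sketch (a toy
simulation with made-up page counts and access ratios, nothing like
the actual kernel code): eviction picks victims at random, but pages
that are referenced often refault right back in, so the resident set
still settles on the hot pages.

/*
 * Toy illustration only: evict random resident pages under pressure,
 * fault pages back in on access. Frequently referenced pages refault
 * quickly, so the resident set converges on the hot pages even though
 * the eviction order itself is random.
 */
#include <stdio.h>
#include <stdlib.h>

#define NPAGES   1024   /* total pages in the workload (made up) */
#define RESIDENT  256   /* how many fit in "memory" (made up) */
#define ACCESSES 200000

static int resident[NPAGES];    /* 1 if page is in memory */
static long refaults;

static void evict_random_page(void)
{
	for (;;) {
		int victim = rand() % NPAGES;

		if (resident[victim]) {
			resident[victim] = 0;
			return;
		}
	}
}

static void access_page(int page)
{
	if (resident[page])
		return;
	/* refault: bring the page back in, push a random one out */
	refaults++;
	evict_random_page();
	resident[page] = 1;
}

int main(void)
{
	int i, hot_resident = 0;

	/* start with an arbitrary resident set */
	for (i = 0; i < RESIDENT; i++)
		resident[i] = 1;

	/*
	 * Skewed reference pattern: 90% of accesses go to the first
	 * RESIDENT/2 "hot" pages, the rest are spread over all pages.
	 */
	for (i = 0; i < ACCESSES; i++) {
		if (rand() % 100 < 90)
			access_page(rand() % (RESIDENT / 2));
		else
			access_page(rand() % NPAGES);
	}

	for (i = 0; i < RESIDENT / 2; i++)
		hot_resident += resident[i];

	printf("refaults: %ld, hot pages resident: %d/%d\n",
	       refaults, hot_resident, RESIDENT / 2);
	return 0;
}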
Anyway, this is getting a little off-topic.
I only brought up CPU cost to make the point that, while a sustained
swap-in rate might be a good signal to unload a machine or reschedule
a job elsewhere, it might not be a generic answer to the question of
how much a system's overall progress is actually being impeded by
swapping, or whether the system is in a livelock state that requires
intervention by the OOM killer.
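
To be clear about what I mean by swap-in rate: something as simple as
sampling the pswpin counter from /proc/vmstat, e.g. the rough
userspace sketch below (the one-second interval is arbitrary). The
number it produces is exactly the kind of signal that is cheap to
collect, but by itself it doesn't say how badly progress is impeded.

/*
 * Minimal sketch: sample the sustained swap-in rate from
 * /proc/vmstat (pswpin counts pages swapped in). The one-second
 * interval is illustrative only.
 */
#include <stdio.h>
#include <unistd.h>

static unsigned long long read_pswpin(void)
{
	unsigned long long val = 0;
	char line[128];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "pswpin %llu", &val) == 1)
			break;
	}
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long long prev = read_pswpin();

	for (;;) {
		unsigned long long cur;

		sleep(1);
		cur = read_pswpin();
		printf("swap-ins/sec: %llu\n", cur - prev);
		prev = cur;
	}
	return 0;
}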