From: Larry Woodman <lwoodman@redhat.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: Rik van Riel <riel@redhat.com>,
kosaki.motohiro@jp.fujitsu.com, akpm@linux-foundation.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
aarcange@redhat.com
Subject: Re: [PATCH] vmscan: limit concurrent reclaimers in shrink_zone
Date: Mon, 14 Dec 2009 09:22:16 -0500
Message-ID: <1260800536.6666.2.camel@dhcp-100-19-198.bos.redhat.com>
In-Reply-To: <20091214131444.GA8990@infradead.org>
On Mon, 2009-12-14 at 08:14 -0500, Christoph Hellwig wrote:
> On Thu, Dec 10, 2009 at 06:56:26PM -0500, Rik van Riel wrote:
> > Under very heavy multi-process workloads, like AIM7, the VM can
> > get into trouble in a variety of ways. The trouble starts when
> > there are hundreds, or even thousands of processes active in the
> > page reclaim code.
> >
> > Not only can the system suffer enormous slowdowns because of
> > lock contention (and conditional reschedules) between thousands
> > of processes in the page reclaim code, but each process will try
> > to free up to SWAP_CLUSTER_MAX pages, even when the system already
> > has lots of memory free. In Larry's case, this resulted in over
> > 6000 processes fighting over locks in the page reclaim code, even
> > though the system already had 1.5GB of free memory.
> >
> > It should be possible to avoid both of those issues at once, by
> > simply limiting how many processes are active in the page reclaim
> > code simultaneously.
> >
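For illustration only, here is a minimal user-space sketch of the idea above: cap how many threads may enter the expensive reclaim path at once, and make everyone else sleep until a slot frees up. The real patch does this inside shrink_zone(); MAX_RECLAIMERS, NTHREADS and do_reclaim_work() below are made-up names for the sketch, not kernel interfaces.

/* sketch.c: illustrative user-space analogue, not the kernel patch */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_RECLAIMERS 8        /* stand-in for a per-zone reclaimer limit */
#define NTHREADS       64       /* stand-in for thousands of allocating tasks */

static sem_t reclaim_slots;

static void do_reclaim_work(void)
{
        usleep(1000);           /* stands in for shrinking a zone */
}

static void *worker(void *arg)
{
        (void)arg;
        sem_wait(&reclaim_slots);       /* sleep if too many reclaimers already */
        do_reclaim_work();
        sem_post(&reclaim_slots);       /* hand the slot to a waiter */
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        int i;

        sem_init(&reclaim_slots, 0, MAX_RECLAIMERS);
        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
        sem_destroy(&reclaim_slots);
        printf("%d workers done, at most %d were reclaiming at once\n",
               NTHREADS, MAX_RECLAIMERS);
        return 0;
}

Built with gcc -pthread, at most MAX_RECLAIMERS of the 64 workers ever run do_reclaim_work() at the same time; the rest sleep in sem_wait() instead of piling onto the same locks.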
>
> This sounds like a very good argument against using direct reclaim at
> all. It reminds me a bit of the issue we had in XFS with lots of processes
> pushing the AIL and causing massive slowdowns due to lock contention
> and cacheline bouncing. Moving all the AIL pushing into a dedicated
> thread solved that nicely. In the VM we already have that dedicated
> per-node kswapd thread, so moving as much work as possible off to it
> should be equivalent.
Some of the new systems have 16 CPUs per node, so a single kswapd thread per node may not be able to keep up with reclaim for all of them.
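For comparison, a user-space sketch of the pattern Christoph describes above: one dedicated background thread (think kswapd, or the XFS AIL pusher) does the work, and the tasks that need it just wake that thread and wait. background_daemon() and request_reclaim() are invented names; this sketches the pattern, not kswapd itself.

/* daemon.c: illustrative user-space analogue of "let one thread do it" */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t need_work = PTHREAD_COND_INITIALIZER;
static pthread_cond_t work_done = PTHREAD_COND_INITIALIZER;
static int pending;             /* requests waiting to be serviced */
static bool stop;

static void *background_daemon(void *arg)       /* the kswapd-like thread */
{
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!stop) {
                while (pending == 0 && !stop)
                        pthread_cond_wait(&need_work, &lock);
                if (stop)
                        break;
                pthread_mutex_unlock(&lock);
                usleep(1000);   /* stands in for shrinking a zone */
                pthread_mutex_lock(&lock);
                pending = 0;
                pthread_cond_broadcast(&work_done);     /* wake every waiter */
        }
        pthread_mutex_unlock(&lock);
        return NULL;
}

static void request_reclaim(void)       /* what a task does instead of direct reclaim */
{
        pthread_mutex_lock(&lock);
        pending++;
        pthread_cond_signal(&need_work);
        while (pending)
                pthread_cond_wait(&work_done, &lock);
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        pthread_t daemon;

        pthread_create(&daemon, NULL, background_daemon, NULL);
        request_reclaim();
        request_reclaim();

        pthread_mutex_lock(&lock);
        stop = true;
        pthread_cond_signal(&need_work);
        pthread_mutex_unlock(&lock);
        pthread_join(daemon, NULL);
        printf("done\n");
        return 0;
}

The contention then moves from thousands of tasks hammering the reclaim path to one worker plus cheap wakeups, though as noted above a single thread per node still has to keep up with every CPU on that node.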
>
> Of course any of this kind of tuning really requires a lot of testing
> and benchmarking to verify those assumptions.
>
Thread overview: 18+ messages
2009-12-10 23:56 Rik van Riel
2009-12-11 2:03 ` Minchan Kim
2009-12-11 3:19 ` Rik van Riel
2009-12-11 3:43 ` Minchan Kim
2009-12-11 12:07 ` Larry Woodman
2009-12-11 13:41 ` Minchan Kim
2009-12-11 13:51 ` Rik van Riel
2009-12-11 14:08 ` Minchan Kim
2009-12-11 13:48 ` Rik van Riel
2009-12-11 21:24 ` Rik van Riel
2009-12-11 11:49 ` Larry Woodman
2009-12-14 13:08 ` Andi Kleen
2009-12-14 14:23 ` Larry Woodman
2009-12-14 16:19 ` Andi Kleen
2009-12-14 14:40 ` Rik van Riel
2009-12-14 13:14 ` Christoph Hellwig
2009-12-14 14:22 ` Larry Woodman [this message]
2009-12-14 14:52 ` Rik van Riel