From: Sonny Rao <sonny@burdell.org>
To: David Chinner <dgc@sgi.com>
Cc: Bharata B Rao <bharata@in.ibm.com>, Theodore Ts'o <tytso@mit.edu>,
Dipankar Sarma <dipankar@in.ibm.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: VM balancing issues on 2.6.13: dentry cache not getting shrunk enough
Date: Wed, 14 Sep 2005 18:40:40 -0400
Message-ID: <20050914224040.GA16627@kevlar.burdell.org>
In-Reply-To: <20050914220222.GA2265486@melbourne.sgi.com>
On Thu, Sep 15, 2005 at 08:02:22AM +1000, David Chinner wrote:
> On Wed, Sep 14, 2005 at 11:48:52AM -0400, Sonny Rao wrote:
> > On Wed, Sep 14, 2005 at 07:59:32AM +1000, David Chinner wrote:
> > > On Tue, Sep 13, 2005 at 02:17:52PM +0530, Bharata B Rao wrote:
> > > >
> > > > Second is Sonny Rao's rbtree dentry reclaim patch which is an attempt
> > > > to improve this dcache fragmentation problem.
> > >
> > > FYI, in the past I've tried this patch to reduce dcache fragmentation on
> > > an Altix (16k pages, 62 dentries to a slab page) under heavy
> > > fileserver workloads and it had no measurable effect. It appeared
> > > that there was almost always at least one active dentry on each page
> > > in the slab. The story may very well be different on 4k page
> > > machines, however.
>
> ....
>
> > I'm not surprised... With 62 dentries per page, the likelihood of
> > success is very small, and in fact performance could degrade since we
> > are holding the dcache lock more often and doing less useful work.
> >
> > It has been over a year and my memory is hazy, but I think I did see
> > about a 10% improvement on my workload (some sort of SFS simulation
> > with millions of files being randomly accessed) on an x86 machine but CPU
> > utilization also went way up which I think was the dcache lock.
>
> Hmmm - can't say that I've had the same experience. I did not notice
> any decrease in fragmentation or increase in CPU usage...
Well, this was on an x86 machine with 8 cores but relatively poor
scalability and horrific memory latencies, i.e. it tends to
exaggerate the effects of bad locks compared to what I would see on a
more scalable POWER machine. We actually ran SFS on a 4-way POWER5
machine with the patch and didn't see any real change in throughput,
though fragmentation was a little better. I can go dig out the data
if someone is really interested.
In your case, with 62 dentry objects per page (and that ratio only
gets worse as we bump up base page sizes), I think the chances of
success for this approach, or anything similar, are horrible, because
we aren't really solving any of the fundamental issues.
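To put some rough numbers behind that (a back-of-the-envelope
illustration, not measured data): a slab page can only be handed back
to the page allocator when every dentry on it is unused, so even if
most dentries were free and the free ones were scattered independently
at random, the fraction of completely empty 16k pages would be tiny.
Something like this shows the scale of the problem (the 15-per-4k-page
number is just a rough guess from a ~260-byte dentry, not taken from
any particular kernel):

/*
 * Back-of-the-envelope only: probability that every dentry on a slab
 * page is unused, assuming (unrealistically) that free dentries are
 * scattered independently at random.  Real dentry lifetimes are
 * correlated, but it shows why whole pages almost never come free
 * at 62 objects per page.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
	const int per_16k_page = 62;	/* Altix, 16k base pages */
	const int per_4k_page = 15;	/* rough guess for 4k pages */
	const double free_frac[] = { 0.50, 0.80, 0.90, 0.99 };
	size_t i;

	for (i = 0; i < sizeof(free_frac) / sizeof(free_frac[0]); i++)
		printf("%2.0f%% of dentries free: P(16k page empty) = %.1e, "
		       "P(4k page empty) = %.1e\n",
		       free_frac[i] * 100,
		       pow(free_frac[i], per_16k_page),
		       pow(free_frac[i], per_4k_page));
	return 0;
}

Even with 90% of dentries unused, only about one 16k page in ~700
comes out completely empty under that (optimistic) model, versus
roughly one in five 4k pages.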
For me, the patch was mostly an experiment to see if the "blunderbuss"
effect (to quote mjb) could be controlled any better than we do
today. Mostly, it didn't seem worth it to me -- especially since we
wanted the global dcache lock to go away.
> FWIW, SFS is just one workload that produces fragmentation. Any
> load that mixes or switches repeatedly between filesystem traversals
> to producing memory pressure via the page cache tends to result in
> fragmentation of the inode and dentry slabs...
Yep, and that's more or less how I "simulated" SFS: I just had tons of
small files. I wasn't trying to really simulate the networking part
or the op mixture, etc. -- just the slab fragmentation as a "worst case".
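(If anyone wants to reproduce the effect, here's a hypothetical sketch
of that kind of exerciser -- not the actual harness I used: create a
pile of tiny files, then stat() them in random order so the dentry and
inode slabs fill up and later get pruned piecemeal once something else
starts applying memory pressure.)

/*
 * Hypothetical "tons of small files" dcache exerciser, in the spirit
 * of the description above -- not the actual test harness.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

#define NFILES 1000000L

int main(void)
{
	char path[32];
	struct stat st;
	long i;

	for (i = 0; i < NFILES; i++) {
		snprintf(path, sizeof(path), "f%07ld", i);
		int fd = open(path, O_CREAT | O_WRONLY, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, "x", 1) != 1)	/* keep each file tiny */
			perror("write");
		close(fd);
	}

	srandom(1);
	for (i = 0; i < 10 * NFILES; i++) {	/* random re-access pattern */
		snprintf(path, sizeof(path), "f%07ld", random() % NFILES);
		stat(path, &st);
	}
	return 0;
}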
> > Whatever happened to the vfs_cache_pressure band-aid/sledgehammer?
> > Is it not considered an option?
>
> All that did was increase the fragmentation levels. Instead of
> seeing a 4-5:1 free/used ratio in the dcache, it would push out to
> 10-15:1 if vfs_cache_pressure was used to prefer reclaiming dentries
> over page cache pages. Going the other way and preferring reclaim of
> page cache pages did nothing to change the level of fragmentation.
> Reclaim still freed most of the dentries in the working set but it
> took a little longer to do it.
Yes, but on systems with smaller pages it does seem to have some
positive effect. I don't really know how well this has been
quantified.
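(For anyone who hasn't looked at how that knob is wired up: roughly
speaking, the dcache shrinker just reports a count of unused dentries
scaled by vfs_cache_pressure, along the lines of the sketch below --
simplified and from memory, not the exact 2.6.13 source.  The knob
changes how many unused dentries reclaim goes after, but says nothing
about which slab pages they sit on, so it's a pretty blunt instrument
as far as fragmentation goes.)

/*
 * Simplified, from-memory sketch of the 2.6-era dcache shrinker in
 * fs/dcache.c -- not the exact 2.6.13 source.
 */
static int shrink_dcache_memory(int nr, unsigned int gfp_mask)
{
	if (nr) {
		if (!(gfp_mask & __GFP_FS))
			return -1;
		prune_dcache(nr);	/* prune nr dentries off the LRU */
	}
	return (dentry_stat.nr_unused / 100) * sysctl_vfs_cache_pressure;
}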
> Right now our only solution to prevent fragmentation on reclaim is
> to throw more memory at the machine to prevent reclaim from
> happening as the workload changes.
That is unfortunate, but interesting, because I didn't know whether
this was a "real problem" at all -- some have contended it isn't. I
know SPEC SFS is a somewhat questionable workload (really, what
isn't?), so the evidence gathered from it didn't seem to convince many
people.
What kind of (real) workload are you seeing this on?
Thanks,
Sonny