From: Brent Casavant <bcasavan@sgi.com>
To: William Lee Irwin III <wli@holomorphy.com>
Cc: Andrew Morton <akpm@osdl.org>, Hugh Dickins <hugh@veritas.com>,
linux-mm@kvack.org
Subject: Re: Scaling problem with shmem_sb_info->stat_lock
Date: Wed, 28 Jul 2004 17:21:58 -0500 [thread overview]
Message-ID: <Pine.SGI.4.58.0407281707370.33392@kzerza.americas.sgi.com> (raw)
In-Reply-To: <20040728095925.GQ2334@holomorphy.com>
On Wed, 28 Jul 2004, William Lee Irwin III wrote:
> Hugh Dickins <hugh@veritas.com> wrote:
> >> Though wli's per-cpu idea was sensible enough, converting to that
> >> didn't appeal to me very much. We only have a limited amount of
> >> per-cpu space, I think, but an indefinite number of tmpfs mounts.
>
> On Wed, Jul 28, 2004 at 02:26:25AM -0700, Andrew Morton wrote:
> > What's wrong with <linux/percpu_counter.h>?
>
> One issue with using it for the specific cases in question is that the
> maintenance of the statistics is entirely unnecessary for them.
Yeah. Hugh solved the stat_lock issue by getting rid of the superblock
info for the internal superblock(s?) corresponding to /dev/zero and
System V shared memory. There was no way to get at that information
anyway, so there was no point paying to maintain it.
> For the general case it may still make sense to do this. SGI will have
> to comment here, as the workloads I'm involved with are kernel intensive
> enough in other areas and generally run on small enough systems to have
> no visible issues in or around the areas described.
With Hugh's fix, the problem has now moved to other areas -- I consider
the stat_lock issue solved. Now I'm running up against the shmem_inode_info
lock field. A per-CPU structure isn't appropriate here because what it's
mostly protecting is the inode swap entries, and that isn't at all amenable
to a per-CPU breakdown (i.e. this is real data, not statistics).
The "obvious" fix is to morph the code so that the swap entries can be
updated in parallel with each other, and with the other miscellaneous
fields in the shmem_inode_info structure. But this would be one *nasty*
piece of work to accomplish, much less accomplish cleanly and correctly.
I'm pretty sure my Linux skillset isn't up to the task, though it hasn't
kept me from trying. On the upside I don't think it would significantly
impact performance on low processor-count systems, if we can manage to
do it at all.
I'm kind of hoping for a fairy godmother to drop in, wave her magic wand,
and say "Here's the quick and easy and obviously correct solution". But
what're the chances of that :).
Thanks,
Brent
--
Brent Casavant bcasavan@sgi.com Forget bright-eyed and
Operating System Engineer http://www.sgi.com/ bushy-tailed; I'm red-
Silicon Graphics, Inc. 44.8562N 93.1355W 860F eyed and bushy-haired.