From: Andrew Morton <akpm@digeo.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: mbligh@aracnet.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: 2.5.65-mm4
Date: Sun, 23 Mar 2003 23:17:16 -0800
Message-ID: <20030323231716.44d7e306.akpm@digeo.com>
In-Reply-To: <Pine.LNX.4.44.0303240756010.1587-100000@localhost.localdomain>

Ingo Molnar <mingo@elte.hu> wrote:
>
>
> On Sun, 23 Mar 2003, Andrew Morton wrote:
>
> > Note that the lock_kernel() contention has been drastically reduced and
> > we're now hitting semaphore contention.
> >
> > Running `dbench 32' on the quad Xeon, this patch took the context switch
> > rate from 500/sec up to 125,000/sec.
>
> note that there is _nothing_ wrong in doing 125,000 context switches per
> sec, as long as performance increases over the lock_kernel() variant.

Yes, but we also take a big hit before we even get to schedule(): pingponging
the semaphore's waitqueue lock between CPUs, doing atomic ops against the
semaphore counter, and so on.
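
To make the pre-schedule() cost concrete, here is a very rough sketch of the
shape of a contended down().  This is not the real arch code (the real
__down() loops and juggles sem->sleepers, which I'm skipping), and the field
names are from memory; it's only meant to show where the cacheline traffic
lands before we ever context switch:

#include <linux/sched.h>
#include <linux/wait.h>
#include <asm/atomic.h>
#include <asm/semaphore.h>

/* Illustrative only: where the SMP costs land in a contended down() */
static void down_cost_sketch(struct semaphore *sem)
{
	DECLARE_WAITQUEUE(wait, current);

	/* the atomic dec bounces the semaphore counter's cacheline */
	if (atomic_dec_return(&sem->count) >= 0)
		return;				/* uncontended fast path */

	/*
	 * add_wait_queue_exclusive() takes sem->wait.lock internally;
	 * under load that spinlock pingpongs between CPUs as well.
	 */
	set_current_state(TASK_UNINTERRUPTIBLE);
	add_wait_queue_exclusive(&sem->wait, &wait);

	schedule();				/* only now do we context switch */

	remove_wait_queue(&sem->wait, &wait);
	set_current_state(TASK_RUNNING);
}

All of that overhead is paid per contended acquisition, on top of the context
switch itself.
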
In the case of ext2 the codepath which needs to be locked is very small, and
converting it to use a per-blockgroup spinlock was a big win on the 16-way
NUMA boxes, and perhaps the 8-way x440s.  On the 4-way Xeon and ppc64 the
effects were very small indeed - 1.5% on Xeon, zero on ppc64.
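
For reference, the conversion has roughly the shape sketched below.  This is
not the actual ext2 patch - the names (bg_lock(), NR_BG_LOCKS,
bg_alloc_bit_sketch()) are made up, and whether you hash a small array of
locks or give every group its own is a detail.  The point is that the
critical section is a short, non-sleeping bitmap scan, so a per-group
spinlock is enough and the semaphore/waitqueue overhead goes away:

#include <linux/spinlock.h>
#include <linux/bitops.h>

#define NR_BG_LOCKS	64	/* hashed, to bound the memory cost */

struct blockgroup_locks {
	spinlock_t locks[NR_BG_LOCKS];	/* spin_lock_init()ed at mount (not shown) */
};

static inline spinlock_t *bg_lock(struct blockgroup_locks *bgl,
				  unsigned int block_group)
{
	return &bgl->locks[block_group & (NR_BG_LOCKS - 1)];
}

/* the allocator's critical section shrinks to a scan under the group's lock */
static int bg_alloc_bit_sketch(struct blockgroup_locks *bgl,
			       unsigned int group,
			       unsigned long *bitmap, int nbits)
{
	int bit;

	spin_lock(bg_lock(bgl, group));
	bit = find_first_zero_bit(bitmap, nbits);
	if (bit < nbits)
		__set_bit(bit, bitmap);		/* the lock serializes us */
	spin_unlock(bg_lock(bgl, group));

	return bit < nbits ? bit : -1;
}
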
In the case of ext3 I suspect lock_journal() in JBD rather than lock_super()
in the ext3 block allocator.  The hold times in there are much longer, so we
may have a more complex problem.  But until lock_super() is cleared up it is
hard to tell.

> > I've asked Alex to put together a patch for spinlock-based locking in
> > the block allocator (cut-n-paste from ext2).
>
> sure, do this if it increases performance. But if it _decreases_
> performance then it's plain pointless to do this just to avoid
> context-switches. With the 2.4 scheduler i'd agree - avoid
> context-switches like the plague. But context-switches are 100% localized
> to the same CPU with the O(1) scheduler, they (should) cause (almost) no
> scalability problem. The only thing this change will 'fix' is the
> context-switch statistics.

The funny thing is that when this is happening we tend to clock up a lot of
idle time.  But Martin tends not to share vmstat traces with us (hint), so I
don't know whether it was happening this time.