From: Davidlohr Bueso <davidlohr.bueso@hp.com>
To: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>, Rik van Riel <riel@redhat.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Andrea Arcangeli <aarcange@redhat.com>,
Mel Gorman <mgorman@suse.de>, "Shi, Alex" <alex.shi@intel.com>,
Andi Kleen <andi@firstfloor.org>,
Andrew Morton <akpm@linux-foundation.org>,
Michel Lespinasse <walken@google.com>,
"Wilcox, Matthew R" <matthew.r.wilcox@intel.com>,
Dave Hansen <dave.hansen@intel.com>,
linux-kernel@vger.kernel.org, linux-mm <linux-mm@kvack.org>
Subject: Re: Performance regression from switching lock to rw-sem for anon-vma tree
Date: Fri, 14 Jun 2013 15:31:44 -0700
Message-ID: <1371249104.1758.20.camel@buesod1.americas.hpqcorp.net>
In-Reply-To: <1371226197.27102.594.camel@schen9-DESK>
On Fri, 2013-06-14 at 09:09 -0700, Tim Chen wrote:
> Added a copy to the mailing list, which I forgot in my previous reply:
>
> On Thu, 2013-06-13 at 16:43 -0700, Davidlohr Bueso wrote:
> > On Thu, 2013-06-13 at 16:15 -0700, Tim Chen wrote:
> > > Ingo,
> > >
> > > At the time of switching the anon-vma tree's lock from a mutex to
> > > an rw-sem (commit 5a505085), we encountered regressions on fork-heavy
> > > workloads. A lot of optimizations to the rw-sem (e.g. lock stealing)
> > > helped mitigate the problem. I tried an experiment on the 3.10-rc4
> > > kernel to compare the performance of the rw-sem to an implementation
> > > that uses a mutex, and saw an 8% regression in throughput for the
> > > rw-sem version.
> >
> > Funny, just yesterday I was discussing this issue with Michel. While I
> > didn't measure the anon-vma mutex->rwsem conversion, I did convert the
> > i_mmap_mutex to an rwsem and noticed a performance regression on a few
> > aim7 workloads on an 8-socket, 80-core box when keeping all users as
> > writers, which should perform very similarly to a mutex. While some of
> > these workloads recovered when I shared the lock among readers (similar
> > to anon-vma), it left me wondering.
> >
> > > For the experiments, I used the exim mail server workload in
> > > the MOSBENCH test suite on a 4-socket (Westmere) and a 4-socket
> > > (Ivy Bridge) machine, with the number of clients sending mail equal
> > > to the number of cores. The mail server forks off a process to handle
> > > each incoming mail and puts it into the mail spool. The lock protecting
> > > the anon-vma tree is stressed due to the heavy forking. On both
> > > machines, I saw that the mutex implementation has 8% more throughput.
> > > I pinned the cpu frequency to maximum in the experiments.
> >
> > I saw a similar -8% throughput on the high_systime and shared workloads.
> >
>
> That's interesting. Another perspective on rwsem vs mutex.
>
> > >
> > > I've tried two separate tweaks to the rw-sem on 3.10-rc4, testing
> > > each tweak individually.
> > >
> > > 1) Add an owner field, set when a writer holds the lock, and introduce
> > > optimistic spinning while an active writer is holding the semaphore.
> > > This reduced context switching by 30%, to a level very close to the
> > > mutex implementation. However, I did not see any throughput improvement
> > > in exim.
> >
> > I was hoping that the lack of spin-on-owner was the main difference
> > between rwsems and mutexes, and I was in the middle of implementing it.
> > Could you send your patch so I can give it a try on my workloads?
> >
> > Note that there have been a few recent (3.10) changes to mutexes that
> > give a nice performance boost, especially on large systems, most
> > notably:
> >
> > commit 2bd2c92c (mutex: Make more scalable by doing less atomic
> > operations)
> >
> > commit 0dc8c730 (mutex: Queue mutex spinners with MCS lock to reduce
> > cacheline contention)
> >
> > It might be worth looking into doing something similar to commit
> > 0dc8c730, in addition to the optimistic spinning.
>
> Okay. Here's my ugly experimental hack, with some code lifted from the
> optimistic spinning in mutexes. I've thought about doing the MCS lock
> thing but decided to keep the first try at optimistic spinning simple.
Unfortunately this patch didn't make any difference; in fact it hurt
several of the workloads even more. I also tried disabling preemption
when spinning on owner to more closely resemble spinlocks, which was my
original plan, but that didn't make much of a difference either.
A few ideas that come to mind are avoiding taking the ->wait_lock and
avoiding dealing with waiters altogether while doing the optimistic
spinning, just as mutexes do; roughly along the lines of the sketch below.
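
To make that concrete for the archives, here's roughly the shape I have
in mind, as a stripped-down userspace sketch. Everything below (the
rwsem layout, struct task, owner_is_running(), cpu_relax()) is an
illustrative stand-in, not the kernel's actual structures:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct task { atomic_bool on_cpu; };    /* stand-in for task_struct */

struct rwsem {
    atomic_long count;                  /* <0 writer held, >0 reader count */
    _Atomic(struct task *) owner;       /* writer holding the sem, or NULL */
};

static void cpu_relax(void) { }         /* pause/yield hint; no-op here */

static bool owner_is_running(struct task *owner)
{
    return atomic_load_explicit(&owner->on_cpu, memory_order_acquire);
}

/*
 * Spin while the semaphore is held by an active writer, without ever
 * taking ->wait_lock or queueing as a waiter.  Returns true if the
 * lock looks free (worth retrying the fast path), false if the owner
 * blocked and we should sleep instead.
 */
static bool rwsem_spin_on_owner(struct rwsem *sem)
{
    struct task *owner;

    while ((owner = atomic_load_explicit(&sem->owner,
                                         memory_order_acquire)) != NULL) {
        if (!owner_is_running(owner))
            return false;               /* owner scheduled out: block */
        cpu_relax();
    }
    return true;                        /* released: retry the fast path */
}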
I agree that we should first deal with the optimistic spinning before
adding the MCS complexity.
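
For reference, the MCS scheme from commit 0dc8c730 boils down to
something like the following (again a simplified userspace rendition of
the technique, not the kernel code): each spinner waits on a flag in its
own node, so the contended cacheline only bounces once per lock handoff
instead of on every spin iteration.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* One node per spinning thread; each spins on its own cacheline. */
struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;                 /* set by our predecessor at handoff */
};

struct mcs_lock {
    _Atomic(struct mcs_node *) tail;    /* last node in the spinner queue */
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
    struct mcs_node *prev;

    atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
    atomic_store_explicit(&node->locked, false, memory_order_relaxed);

    /* Swap ourselves in as the new tail of the queue. */
    prev = atomic_exchange_explicit(&lock->tail, node, memory_order_acq_rel);
    if (prev == NULL)
        return;                         /* queue was empty: lock is ours */

    /* Link in behind prev, then spin on our own flag, not a shared word. */
    atomic_store_explicit(&prev->next, node, memory_order_release);
    while (!atomic_load_explicit(&node->locked, memory_order_acquire))
        ;                               /* cpu_relax() in the kernel */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
    struct mcs_node *next =
        atomic_load_explicit(&node->next, memory_order_acquire);

    if (next == NULL) {
        /* No known successor: try to reset the queue to empty. */
        struct mcs_node *expected = node;
        if (atomic_compare_exchange_strong_explicit(
                &lock->tail, &expected, NULL,
                memory_order_acq_rel, memory_order_acquire))
            return;
        /* A successor is mid-enqueue; wait for its link to appear. */
        while ((next = atomic_load_explicit(&node->next,
                                            memory_order_acquire)) == NULL)
            ;
    }
    atomic_store_explicit(&next->locked, true, memory_order_release);
}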
> Matthew and I have also discussed possibly introducing some limited
> spinning for a writer when the semaphore is held by readers. His idea
> was to have readers as well as writers set ->owner. Writers, as now,
> unconditionally clear owner. Readers clear owner only if sem->owner ==
> current. Writers spin on ->owner if ->owner is non-NULL and still
> active. That gives us a reasonable chance to spin, since we'll be
> spinning on the most recent acquirer of the lock.
I also tried implementing this concept on top of your patch; it didn't
make much of a difference either way.
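
For the record, the owner handling I tried looks roughly like this on
top of the sketch above (same caveats: illustrative stand-ins reusing
the struct rwsem and struct task from before, not a kernel patch):

/* Readers and writers both publish themselves in ->owner on acquire. */
static void rwsem_set_owner(struct rwsem *sem, struct task *me)
{
    atomic_store_explicit(&sem->owner, me, memory_order_release);
}

/* Writers, as now, clear unconditionally on release... */
static void rwsem_clear_owner_writer(struct rwsem *sem)
{
    atomic_store_explicit(&sem->owner, NULL, memory_order_release);
}

/*
 * ...while a reader clears only if it is still the most recent
 * acquirer, so it never wipes out a newer reader's or writer's entry.
 */
static void rwsem_clear_owner_reader(struct rwsem *sem, struct task *me)
{
    struct task *expected = me;
    atomic_compare_exchange_strong_explicit(&sem->owner, &expected, NULL,
                                            memory_order_acq_rel,
                                            memory_order_relaxed);
}

A writer that finds ->owner non-NULL and still running can then reuse
the same rwsem_spin_on_owner() loop from the earlier sketch.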
Thanks,
Davidlohr