From: Davidlohr Bueso <davidlohr@hp.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Peter Zijlstra <peterz@infradead.org>,
Michel Lespinasse <walken@google.com>,
Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>,
KOSAKI Motohiro <kosaki.motohiro@gmail.com>,
aswin@hp.com, scott.norton@hp.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4] mm: per-thread vma caching
Date: Mon, 03 Mar 2014 16:18:12 -0800
Message-ID: <1393892292.30648.12.camel@buesod1.americas.hpqcorp.net>
In-Reply-To: <20140303160021.3001634fa62781d7b0359158@linux-foundation.org>
On Mon, 2014-03-03 at 16:00 -0800, Andrew Morton wrote:
> On Thu, 27 Feb 2014 13:48:24 -0800 Davidlohr Bueso <davidlohr@hp.com> wrote:
>
> > From: Davidlohr Bueso <davidlohr@hp.com>
> >
> > This patch is a continuation of efforts to optimize find_vma(),
> > avoiding potentially expensive rbtree walks to locate a vma upon faults.
> > The original approach (https://lkml.org/lkml/2013/11/1/410), where the
> > largest vma was also cached, ended up being too specific and random, so
> > further comparison with other approaches was needed. There are two things
> > to consider here: the cache hit rate and the latency of find_vma().
> > Improving the hit rate does not necessarily translate into finding the
> > vma any faster, as the overhead of any fancy caching scheme can be too
> > high to be worthwhile.
> >
> > We currently cache the last used vma for the whole address space, which
> > provides a nice optimization, reducing the total cycles spent in find_vma()
> > by up to 2.5x for workloads with good locality. On the other hand, this
> > simple scheme is pretty much useless for workloads with poor locality.
> > Analyzing ebizzy runs shows that, no matter how many threads are running,
> > the mmap_cache hit rate is less than 2%, and in many situations below 1%.
> >
> > The proposed approach is to replace this scheme with a small per-thread cache,
> > maximizing hit rates at a very low maintenance cost. Invalidations are
> > performed by simply bumping up a 32-bit sequence number. The only expensive
> > operation is in the rare case of a seq number overflow, where all caches that
> > share the same address space are flushed. Upon a miss, the proposed replacement
> > policy chooses the cache slot based on the page number of the virtual address
> > in question. Concretely, the following results are seen on an 80 core, 8 socket
> > x86-64 box:
> >
> > ...
> >
> > 2) Kernel build: This one is already pretty good with the current approach
> > as we're dealing with good locality.
> >
> > +----------------+----------+------------------+
> > | caching scheme | hit-rate | cycles (billion) |
> > +----------------+----------+------------------+
> > | baseline | 75.28% | 11.03 |
> > | patched | 88.09% | 9.31 |
> > +----------------+----------+------------------+
>
> What is the "cycles" number here? I'd like to believe we sped up kernel
> builds by 10% ;)
>
> Were any overall run time improvements observable?
Weeell, not too much (I wouldn't normally go measuring cycles if I could
use a benchmark instead ;). As discussed a while back, all this occurs
under the mmap_sem anyway, so while we do optimize find_vma() in more
workloads than before, it doesn't translate into better benchmark
throughput :( The same occurs if we get rid of any caching and just rely
on rbtree walks: sure, the cost of find_vma() goes way up, but that
really doesn't hurt from a user perspective. Fwiw, in ebizzy perf traces
I did see find_vma() go from ~7% to ~0.4%.
>
> > ...
> >
> > @@ -1228,6 +1229,9 @@ struct task_struct {
> > #ifdef CONFIG_COMPAT_BRK
> > unsigned brk_randomized:1;
> > #endif
> > + /* per-thread vma caching */
> > + u32 vmacache_seqnum;
> > + struct vm_area_struct *vmacache[VMACACHE_SIZE];
>
> So these are implicitly locked by being per-thread.
Yes.
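
For the curious, here is a rough, illustrative-only sketch (not the exact
code in the patch; the helper names and the macro values are made up for
this mail) of how a lookup against those per-thread fields could go. The
slot is picked from the page number of the address, and a stale sequence
number is simply treated as a miss:

/* sketch only: assumes VMACACHE_SIZE is a small power of two */
#define VMACACHE_BITS	2
#define VMACACHE_SIZE	(1U << VMACACHE_BITS)
#define VMACACHE_MASK	(VMACACHE_SIZE - 1)

static inline int vmacache_hash(unsigned long addr)
{
	/* index by page number so nearby pages land in different slots */
	return (addr >> PAGE_SHIFT) & VMACACHE_MASK;
}

static struct vm_area_struct *vmacache_find_sketch(struct mm_struct *mm,
						   unsigned long addr)
{
	struct vm_area_struct *vma;

	/* stale seqnum: the address space changed, treat as a miss */
	if (current->vmacache_seqnum != mm->vmacache_seqnum)
		return NULL;

	vma = current->vmacache[vmacache_hash(addr)];
	if (vma && vma->vm_start <= addr && vma->vm_end > addr)
		return vma;

	return NULL;
}

The real patch also repopulates the cache after the rbtree walk on a miss;
the above is only meant to show how the per-thread fields and the seqnum
fit together.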
> > +static inline void vmacache_invalidate(struct mm_struct *mm)
> > +{
> > + mm->vmacache_seqnum++;
> > +
> > + /* deal with overflows */
> > + if (unlikely(mm->vmacache_seqnum == 0))
> > + vmacache_flush_all(mm);
> > +}
>
> What's the locking rule for mm->vmacache_seqnum?
Invalidations occur with mmap_sem held for writing, just as they did for
mm->mmap_cache.
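
To make the protocol concrete, here is a simplified sketch of the reader
side (the helper name is illustrative, not necessarily what the patch
calls it): the writer bumps mm->vmacache_seqnum with mmap_sem held for
write, and a reader that sees a mismatch just drops its cache:

static inline bool vmacache_valid_sketch(struct mm_struct *mm)
{
	if (current->vmacache_seqnum != mm->vmacache_seqnum) {
		/* the address space changed since we last cached */
		current->vmacache_seqnum = mm->vmacache_seqnum;
		memset(current->vmacache, 0, sizeof(current->vmacache));
		return false;
	}
	return true;
}

The only global operation is the overflow case quoted above, where
vmacache_flush_all() has to walk every task sharing the mm.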
Thanks,
Davidlohr