From: Andrew Morton <akpm@linux-foundation.org>
To: Davidlohr Bueso <davidlohr@hp.com>
Cc: Ingo Molnar <mingo@kernel.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Peter Zijlstra <peterz@infradead.org>,
Michel Lespinasse <walken@google.com>,
Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>,
KOSAKI Motohiro <kosaki.motohiro@gmail.com>,
aswin@hp.com, scott.norton@hp.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4] mm: per-thread vma caching
Date: Mon, 3 Mar 2014 16:00:21 -0800 [thread overview]
Message-ID: <20140303160021.3001634fa62781d7b0359158@linux-foundation.org> (raw)
In-Reply-To: <1393537704.2899.3.camel@buesod1.americas.hpqcorp.net>
On Thu, 27 Feb 2014 13:48:24 -0800 Davidlohr Bueso <davidlohr@hp.com> wrote:
> From: Davidlohr Bueso <davidlohr@hp.com>
>
> This patch is a continuation of efforts to optimize find_vma(),
> avoiding potentially expensive rbtree walks to locate a vma upon faults.
> The original approach (https://lkml.org/lkml/2013/11/1/410), where the
> largest vma was also cached, ended up being too specific and random, so
> further comparison with other approaches was needed. There are two things
> to consider here: the cache hit rate and the latency of find_vma().
> Improving the hit rate does not necessarily translate into finding the
> vma any faster, as the overhead of a fancy caching scheme can be too
> high to be worthwhile.
>
> We currently cache the last used vma for the whole address space, which
> provides a nice optimization, reducing the total cycles spent in
> find_vma() by up to 2.5x for workloads with good locality. On the other
> hand, this simple scheme is pretty much useless for workloads with poor
> locality. Analyzing ebizzy runs shows that, no matter how many threads
> are running, the mmap_cache hit rate is less than 2%, and in many
> situations below 1%.
>
> The proposed approach is to replace this scheme with a small per-thread cache,
> maximizing hit rates at a very low maintenance cost. Invalidations are
> performed by simply bumping up a 32-bit sequence number. The only expensive
> operation is in the rare case of a seq number overflow, where all caches that
> share the same address space are flushed. Upon a miss, the proposed replacement
> policy is based on the page number that contains the virtual address in
> question. Concretely, the following results are seen on an 80 core, 8 socket
> x86-64 box:
>
> ...
>
> 2) Kernel build: This one is already pretty good with the current approach
> as we're dealing with good locality.
>
> +----------------+----------+------------------+
> | caching scheme | hit-rate | cycles (billion) |
> +----------------+----------+------------------+
> | baseline | 75.28% | 11.03 |
> | patched | 88.09% | 9.31 |
> +----------------+----------+------------------+
What is the "cycles" number here? I'd like to believe we sped up kernel
builds by 10% ;)
Were any overall run time improvements observable?
> ...
>
> @@ -1228,6 +1229,9 @@ struct task_struct {
> #ifdef CONFIG_COMPAT_BRK
> unsigned brk_randomized:1;
> #endif
> + /* per-thread vma caching */
> + u32 vmacache_seqnum;
> + struct vm_area_struct *vmacache[VMACACHE_SIZE];
So these are implicitly locked by being per-thread.
> +static inline void vmacache_invalidate(struct mm_struct *mm)
> +{
> + mm->vmacache_seqnum++;
> +
> + /* deal with overflows */
> + if (unlikely(mm->vmacache_seqnum == 0))
> + vmacache_flush_all(mm);
> +}
What's the locking rule for mm->vmacache_seqnum?
>
> ...
>
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org