From: Davidlohr Bueso <davidlohr@hp.com>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>, Dave Hansen <dave@sr71.net>,
Andrew Morton <akpm@linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
Fengguang Wu <fengguang.wu@intel.com>
Subject: Re: [PATCH 0/9] re-shrink 'struct page' when SLUB is on.
Date: Sun, 12 Jan 2014 19:36:58 -0800 [thread overview]
Message-ID: <1389584218.11984.0.camel@buesod1.americas.hpqcorp.net> (raw)
In-Reply-To: <20140113014408.GA25900@lge.com>
On Mon, 2014-01-13 at 10:44 +0900, Joonsoo Kim wrote:
> On Sat, Jan 11, 2014 at 06:55:39PM -0600, Christoph Lameter wrote:
> > On Sat, 11 Jan 2014, Pekka Enberg wrote:
> >
> > > On Sat, Jan 11, 2014 at 1:42 AM, Dave Hansen <dave@sr71.net> wrote:
> > > > On 01/10/2014 03:39 PM, Andrew Morton wrote:
> > > >>> I tested 4 cases, all of these on the "cache-cold kfree()" case. The
> > > >>> first 3 are with vanilla upstream kernel source. The 4th is patched
> > > >>> with my new slub code (all single-threaded):
> > > >>>
> > > >>> http://www.sr71.net/~dave/intel/slub/slub-perf-20140109.png
> > > >>
> > > >> So we're converging on the most complex option. argh.
> > > >
> > > > Yeah, looks that way.
> > >
> > > Seems like a reasonable compromise between memory usage and allocation speed.
> > >
> > > Christoph?
> >
> > Fundamentally I think this is good. I need to look at the details but I am
> > only going to be able to do that next week when I am back in the office.
>
> Hello,
>
> I have another guess about the performance results, although I haven't
> looked at these patches in detail. I guess that the performance win of the
> 64-byte struct on small allocations comes from lower latency when accessing
> SLUB's metadata, that is, struct page.
>
> The following shows pages per slab, as reported by '/proc/slabinfo':
>
> size   pages per slab
> ...
>  256   1
>  512   1
> 1024   2
> 2048   4
> 4096   8
> 8192   8
>
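As a quick way to reproduce the table above, here is a minimal userspace
sketch (illustrative only, not code from the thread) that parses the
<objsize> and <pagesperslab> columns of the slabinfo v2.x format:

#include <stdio.h>

int main(void)
{
	char name[64], line[512];
	unsigned long active, total;
	unsigned int objsize, objperslab, pagesperslab;
	FILE *f = fopen("/proc/slabinfo", "r"); /* usually needs root */

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		/* The '#' header and version lines fail the 6-field
		 * match and are skipped. */
		if (sscanf(line, "%63s %lu %lu %u %u %u", name,
			   &active, &total, &objsize, &objperslab,
			   &pagesperslab) == 6)
			printf("%-24s size=%-5u pagesperslab=%u\n",
			       name, objsize, pagesperslab);
	}
	fclose(f);
	return 0;
}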
> We only touch one struct page on small allocations.
> In the 64-byte case, we always use one cacheline for touching struct page,
> since it is aligned to the cacheline size. However, in the 56-byte case,
> we may use two cachelines, because struct page isn't cacheline-aligned.
>
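Concretely, the byte arithmetic looks like this (an illustration of the
point above, assuming 64-byte cachelines and struct page entries packed
back to back in the memmap array):

	entry 0: bytes   0..55   -> cacheline 0          (1 line)
	entry 1: bytes  56..111  -> cachelines 0 and 1   (2 lines)
	entry 2: bytes 112..167  -> cachelines 1 and 2   (2 lines)

With 64-byte entries, entry i occupies bytes 64*i..64*i+63, i.e. exactly
cacheline i, so a single lookup never crosses a line boundary.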
> This changes for large allocations. For example, consider the 4096-byte
> case. With 64-byte entries we always touch 8 cachelines for metadata;
> with 56-byte entries we touch 7 or 8 cachelines, since 8 struct pages
> occupy 8 * 56 = 448 bytes, that is, exactly 7 cachelines' worth of memory.
>
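A minimal C sketch of that arithmetic (illustrative only, not code from
the patchset; the helper name lines_touched and the 64-byte line size are
assumptions):

#include <stdio.h>

#define CACHELINE 64UL

/* Distinct cachelines spanned by 'nr' consecutive metadata entries of
 * 'size' bytes each, starting at entry index 'first', assuming the
 * entries are packed back to back in one array. */
static unsigned long lines_touched(unsigned long first, unsigned long nr,
				   unsigned long size)
{
	unsigned long start = first * size;
	unsigned long end = (first + nr) * size - 1;

	return end / CACHELINE - start / CACHELINE + 1;
}

int main(void)
{
	/* 4096-byte allocation: an order-3 slab, i.e. 8 struct pages. */
	printf("64-byte entries:            %lu lines\n",
	       lines_touched(0, 8, 64));	/* always 8 */
	/* 8 * 56 = 448 bytes: exactly 7 lines when the run happens to
	 * start on a cacheline boundary (448 = 7 * 64)... */
	printf("56-byte entries, aligned:   %lu lines\n",
	       lines_touched(8, 8, 56));	/* 7 */
	/* ...but 8 lines when it straddles one. */
	printf("56-byte entries, straddled: %lu lines\n",
	       lines_touched(1, 8, 56));	/* 8 */
	return 0;
}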
> This guess may be wrong, so if you think it is, please ignore it. :)
>
> And I have another opinion on this patchset. Shrinking struct page will
> affect other use cases besides SLUB. As we know, Dave found the win with a
> sequential 'dd', which may be the best case for the 56-byte layout.
> If we touch struct pages randomly, this misalignment can cause a
> regression, since touching a single struct page can then incur two
> cacheline misses. So I think it is better to gather more benchmark results
> for this patchset to convince ourselves. If possible, how about asking
> Fengguang to run the whole set of his benchmarks before going forward?
Cc'ing him.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
Thread overview: 28+ messages
2014-01-03 18:01 [PATCH 0/9] re-shrink 'struct page' when SLUB is on. Dave Hansen
2014-01-03 18:01 ` [PATCH 1/9] mm: slab/slub: use page->list consistently instead of page->lru Dave Hansen
2014-01-03 18:01 ` [PATCH 2/9] mm: blk-mq: uses page->list incorrectly Dave Hansen
2014-01-03 18:01 ` [PATCH 3/9] mm: page->pfmemalloc only used by slab/skb Dave Hansen
2014-01-03 18:01 ` [PATCH 4/9] mm: slabs: reset page at free Dave Hansen
2014-01-03 18:01 ` [PATCH 5/9] mm: rearrange struct page Dave Hansen
2014-01-03 18:01 ` [PATCH 6/9] mm: slub: rearrange 'struct page' fields Dave Hansen
2014-01-03 18:02 ` [PATCH 7/9] mm: slub: abstract out double cmpxchg option Dave Hansen
2014-01-03 18:02 ` [PATCH 8/9] mm: slub: remove 'struct page' alignment restrictions Dave Hansen
2014-01-03 18:02 ` [PATCH 9/9] mm: slub: cleanups after code churn Dave Hansen
2014-01-03 22:18 ` [PATCH 0/9] re-shrink 'struct page' when SLUB is on Andrew Morton
2014-01-06 4:32 ` Joonsoo Kim
2014-01-10 20:52 ` Dave Hansen
2014-01-10 23:39 ` Andrew Morton
2014-01-10 23:42 ` Dave Hansen
2014-01-11 9:26 ` Pekka Enberg
2014-01-12 0:55 ` Christoph Lameter
2014-01-13 1:44 ` Joonsoo Kim
2014-01-13 3:36 ` Davidlohr Bueso [this message]
2014-01-13 13:46 ` Fengguang Wu
2014-01-13 15:42 ` Dave Hansen
2014-01-13 17:16 ` Dave Hansen
2014-01-14 20:07 ` Christoph Lameter
2014-01-14 22:05 ` Dave Hansen
2014-01-16 16:44 ` Christoph Lameter
2014-01-16 17:08 ` Dave Hansen
2014-01-16 18:26 ` Christoph Lameter
2014-01-14 17:40 ` Christoph Lameter