* [RFC][PATCH 0/9] re-shrink 'struct page' when SLUB is on.
@ 2014-01-14 18:00 Dave Hansen
From: Dave Hansen @ 2014-01-14 18:00 UTC
  To: linux-mm; +Cc: linux-kernel, akpm, penberg, cl, Dave Hansen

This is a minor update from the last version.  The most notable
thing is that I was able to demonstrate that maintaining the
cmpxchg16 optimization has _some_ value.

These are still of RFC quality.  They're stable, but definitely
need some wider testing, especially on 32-bit.  Mostly just
resending for Christoph to take a look.

These currently apply on top of linux-next.

Otherwise, the code changes are just a few minor cleanups.

---

SLUB depends on a 16-byte cmpxchg for an optimization which
allows it to not disable interrupts in its fast path.  This
optimization has some small but measurable benefits stemming from
the cmpxchg code not needing to disable interrupts:

	http://www.sr71.net/~dave/intel/slub/slub-perf-20140109.png
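
For context, here is roughly what that optimization looks like.  This
is a simplified paraphrase of mm/slub.c's cmpxchg_double_slab(), not
the exact mainline function, and the helper name below is made up:

/*
 * Simplified sketch: the freelist pointer and the packed counters word
 * are swapped together with one 16-byte cmpxchg, so the fast path never
 * has to disable interrupts.  cmpxchg16b requires the {freelist, counters}
 * pair to be 16-byte aligned, which is where the struct alignment
 * requirement comes from.
 */
static inline bool slab_cmpxchg_double(struct page *page,
		void *freelist_old, unsigned long counters_old,
		void *freelist_new, unsigned long counters_new)
{
	return cmpxchg_double(&page->freelist, &page->counters,
			      freelist_old, counters_old,
			      freelist_new, counters_new);
}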

In order to get guaranteed 16-byte alignment (required by the
hardware on x86), 'struct page' is padded out from 56 to 64
bytes.
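
The padding comes from the whole-structure alignment attribute in
mainline's include/linux/mm_types.h, condensed here; forcing 16-byte
alignment on a 56-byte structure makes the compiler round sizeof() up
to 64:

/* Condensed from include/linux/mm_types.h: the whole struct is aligned
 * to 2 * sizeof(unsigned long) (16 bytes on x86-64) so that the
 * freelist/counters pair is always legal for cmpxchg_double().  The
 * side effect is that sizeof(struct page) grows from 56 to 64 bytes. */
struct page {
	/* ... 56 bytes of fields, including freelist and counters ... */
}
#ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
	__aligned(2 * sizeof(unsigned long))
#endif
;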

Those 8 bytes matter.  We've gone to great lengths to keep
'struct page' small in the past.  It's a shame that we bloat it
now just for alignment reasons when we have extra space.  Plus,
bloating such a commonly-touched structure *HAS* cache footprint
implications.  The implications are easy to show with 'perf
stat' when doing 16.8M kmalloc(32)/kfree() pairs:

vanilla 64-byte struct page:
>            883,412 LLC-loads                 #    0.296 M/sec
>            566,546 LLC-load-misses           #   64.13% of all LL-cache hits
patched 56-byte struct page:
>            556,751 LLC-loads                 #    0.186 M/sec
>            339,106 LLC-load-misses           #   60.91% of all LL-cache hits
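
A minimal sketch of the kind of loop being measured (a hypothetical
test module; the actual harness isn't part of this posting), run
under something like 'perf stat -e LLC-loads,LLC-load-misses':

#include <linux/module.h>
#include <linux/slab.h>

/* Hypothetical benchmark module: 16.8M kmalloc(32)/kfree() pairs. */
static int __init kmalloc_bench_init(void)
{
	unsigned long i;

	for (i = 0; i < 16800000UL; i++) {
		void *obj = kmalloc(32, GFP_KERNEL);

		if (!obj)
			return -ENOMEM;
		kfree(obj);
	}
	return 0;
}

static void __exit kmalloc_bench_exit(void)
{
}

module_init(kmalloc_bench_init);
module_exit(kmalloc_bench_exit);
MODULE_LICENSE("GPL");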

These patches attempt _internal_ alignment instead of external
alignment (the whole-structure __aligned() attribute shown above)
for slub.

I also got a bug report from some folks running a large database
benchmark.  Their old kernel uses slab and their new one uses
slub.  They were swapping and couldn't figure out why.  It turned
out to be the 2GB of RAM that the slub padding wastes on their
system: 8 bytes of padding per 4kB page is roughly 0.2% of RAM,
so a terabyte-scale machine loses about 2GB to the padding alone.

On my box, that 2GB cost about $200 to populate back when we
bought it.  I want my $200 back.

This set takes me from 16909584K of reserved memory at boot down
to 14814472K, so almost *exactly* 2GB of savings!  It also helps
performance, presumably because it touches 14% fewer struct page
cachelines.  A 30GB dd to a ramfs file:

	dd if=/dev/zero of=bigfile bs=$((1<<30)) count=30

is sped up by about 4.4% in my testing.

I've run this through its paces and have not had stability issues
with it.  It still needs more testing, but it's ready for a wider
audience.

I also wrote up a document describing 'struct page's layout:

	http://tinyurl.com/n6kmedz



Thread overview: 31+ messages
2014-01-14 18:00 [RFC][PATCH 0/9] re-shrink 'struct page' when SLUB is on Dave Hansen
2014-01-14 18:00 ` [RFC][PATCH 1/9] mm: slab/slub: use page->list consistently instead of page->lru Dave Hansen
2014-01-14 19:31   ` Christoph Lameter
2014-01-15  2:31   ` David Rientjes
2014-01-15  6:58     ` Dave Hansen
2014-01-15  7:16       ` David Rientjes
2014-01-16  0:11   ` Kirill A. Shutemov
2014-01-14 18:00 ` [RFC][PATCH 2/9] mm: slub: abstract out double cmpxchg option Dave Hansen
2014-01-14 19:49   ` Christoph Lameter
2014-01-14 21:41     ` Dave Hansen
2014-01-15  2:37       ` David Rientjes
2014-01-16 16:45       ` Christoph Lameter
2014-01-16 17:13         ` Dave Hansen
2014-01-14 18:00 ` [RFC][PATCH 3/9] mm: page->pfmemalloc only used by slab/skb Dave Hansen
2014-01-14 19:49   ` Christoph Lameter
2014-01-14 22:17     ` Dave Hansen
2014-01-15  2:45       ` David Rientjes
2014-01-16  0:16   ` Kirill A. Shutemov
2014-01-14 18:00 ` [RFC][PATCH 4/9] mm: slabs: reset page at free Dave Hansen
2014-01-15  2:48   ` David Rientjes
2014-01-16 18:35     ` Dave Hansen
2014-01-16 18:32   ` Christoph Lameter
2014-01-14 18:00 ` [RFC][PATCH 5/9] mm: rearrange struct page Dave Hansen
2014-01-16  0:20   ` Kirill A. Shutemov
2014-01-16 18:34   ` Christoph Lameter
2014-01-16 22:29     ` Dave Hansen
2014-01-17 14:58       ` Christoph Lameter
2014-01-14 18:01 ` [RFC][PATCH 6/9] mm: slub: rearrange 'struct page' fields Dave Hansen
2014-01-14 18:01 ` [RFC][PATCH 7/9] mm: slub: remove 'struct page' alignment restrictions Dave Hansen
2014-01-14 18:01 ` [RFC][PATCH 8/9] mm: slub: cleanups after code churn Dave Hansen
2014-01-14 18:01 ` [RFC][PATCH 9/9] mm: fix alignment checks on 32-bit Dave Hansen
