From: Mikulas Patocka <mpatocka@redhat.com>
To: Christopher Lameter <cl@linux.com>
Cc: Matthew Wilcox <willy@infradead.org>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, dm-devel@redhat.com,
Mike Snitzer <msnitzer@redhat.com>
Subject: Re: [PATCH] slab: introduce the flag SLAB_MINIMIZE_WASTE
Date: Tue, 20 Mar 2018 15:22:03 -0400 (EDT)
Message-ID: <alpine.LRH.2.02.1803201510030.21066@file01.intranet.prod.int.rdu2.redhat.com>
In-Reply-To: <alpine.DEB.2.20.1803201250480.27540@nuc-kabylake>
On Tue, 20 Mar 2018, Christopher Lameter wrote:
> On Tue, 20 Mar 2018, Matthew Wilcox wrote:
>
> > On Tue, Mar 20, 2018 at 01:25:09PM -0400, Mikulas Patocka wrote:
> > > The reason why we need this is that we are going to merge code that does
> > > block device deduplication (it was developed separately and sold as a
> > > commercial product), and the code uses block sizes that are not a power of
> > > two (block sizes 192K, 448K, 640K, 832K are used in the wild). The slab
> > > allocator rounds up the allocation to the nearest power of two, but that
> > > wastes a lot of memory. Performance of the solution depends on efficient
> > > memory usage, so we should minimize waste as much as possible.
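To make the waste concrete, here is an illustrative calculation (plain Python, not kernel code; the sizes are the ones quoted above):

```python
def roundup_pow_of_two(n):
    """Smallest power of two >= n (what the slab allocator rounds to)."""
    p = 1
    while p < n:
        p <<= 1
    return p

# Block sizes from the mail, in KiB.
for size in (192, 448, 640, 832):
    alloc = roundup_pow_of_two(size)
    waste = 100.0 * (alloc - size) / alloc
    print(f"{size}K -> {alloc}K ({waste:.1f}% wasted)")
```

For 640K blocks, more than a third of every allocation is lost to the round-up.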
> >
> > The SLUB allocator also falls back to using the page (buddy) allocator
> > for allocations above 8kB, so this patch is going to have no effect on
> > slub. You'd be better off using alloc_pages_exact() for this kind of
> > size, or managing your own pool of pages by using something like five
> > 192k blocks in a 1MB allocation.
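The arithmetic behind the five-blocks-in-1MB suggestion, as an illustrative sketch (plain Python, not an implementation):

```python
POOL_KIB = 1024     # one 1MB allocation from the page allocator
BLOCK_KIB = 192

blocks = POOL_KIB // BLOCK_KIB            # how many blocks fit in the pool
leftover = POOL_KIB - blocks * BLOCK_KIB  # unused tail of each pool
print(f"{blocks} x {BLOCK_KIB}K per 1MB pool, {leftover}K leftover "
      f"({100.0 * leftover / POOL_KIB:.2f}% wasted)")
```

That is about 6% waste per pool, compared with 25% when each 192K block is rounded up to 256K.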
>
> The fallback is only effective for kmalloc caches. Manually created caches
> do not follow this rule.
Yes - the dm-bufio layer uses manually created caches.
> Note that you can already control the page orders for allocation and
> the objects per slab using
>
> slub_min_order
> slub_max_order
> slub_min_objects
>
> This is documented in linux/Documentation/vm/slub.txt
>
> Maybe do the same thing for SLAB?
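For reference, these are kernel command-line parameters; an example fragment (the values here are purely illustrative):

```
slub_min_order=0 slub_max_order=3 slub_min_objects=4
```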
Yes, but I need to change it for a specific cache, not for all caches.
When the order is greater than 3 (PAGE_ALLOC_COSTLY_ORDER), allocations
become unreliable, so it is a bad idea to raise slub_max_order
system-wide.
Another problem with slub_max_order is that it would pad all caches to
slub_max_order, even those that already have a power-of-two size (in that
case, the padding is counterproductive).
BTW, the function "order_store" in mm/slub.c modifies the kmem_cache
structure without taking any locks - is that a bug?
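For context, order_store is the handler behind SLUB's per-cache sysfs attribute, so the order can already be changed per cache at runtime; an illustrative session (cache name and value are examples only):

```shell
# Read the current slab order of a cache:
cat /sys/kernel/slab/dentry/order
# Writing goes through order_store() in mm/slub.c:
echo 2 > /sys/kernel/slab/dentry/order
```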
Mikulas