linux-mm.kvack.org archive mirror
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, Roman Gushchin <guro@fb.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	linux-kernel@vger.kernel.org,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	David Rientjes <rientjes@google.com>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>
Subject: Re: [PATCH 3/5] mm/slab: Do not call kmalloc_large() for unsupported size
Date: Wed, 23 Feb 2022 03:24:27 +0000	[thread overview]
Message-ID: <YhWo6yKaHHE2O1xc@ip-172-31-19-208.ap-northeast-1.compute.internal> (raw)
In-Reply-To: <YhVAjMqYPNUBC4rY@casper.infradead.org>

On Tue, Feb 22, 2022 at 07:59:08PM +0000, Matthew Wilcox wrote:
> On Tue, Feb 22, 2022 at 08:10:32AM +0000, Hyeonggon Yoo wrote:
> > On Mon, Feb 21, 2022 at 03:53:39PM +0000, Matthew Wilcox wrote:
> > > On Mon, Feb 21, 2022 at 10:53:34AM +0000, Hyeonggon Yoo wrote:
> > > > SLAB's kfree() does not support freeing an object that is allocated from
> > > > kmalloc_large(). Fix this as SLAB do not pass requests larger than
> > > > KMALLOC_MAX_CACHE_SIZE directly to page allocator.
> > > 
> > > I was wondering if we wanted to go in the other direction and get rid of
> > > kmalloc cache sizes larger than, say, 64kB from the SLAB allocator.
> > 
> > Good point.
> > 
> > Hmm.. I don't think SLAB is benefiting from queueing that large objects,
> > and maximum size is still limited to what buddy allocator supports.
> > 
> > I'll try reducing kmalloc caches up to order-1 page like SLUB.
> > That would be easier to maintain.
> 
> If you have time to investigate these kinds of things, I think SLUB would
> benefit from caching order-2 and order-3 slabs as well.  Maybe not so much
> now that Mel included order-2 and order-3 caching in the page allocator.
> But it'd be interesting to have numbers.

That's an interesting topic, but I think it's slightly different from this
one. AFAIK it's rare for a workload to benefit more from using slab for
large objects (8K, 16K, etc.) than from using the page allocator.

And yeah, caching high-order slabs may affect the numbers even if the page
allocator caches high-order pages. SLUB already caches them, and it can
cache more slabs by tuning the number of per-cpu partial slabs
(s->cpu_partial_slabs) and the number of node partial slabs (s->min_partial).

I need to investigate what Mel actually did and learn how it affects
SLUB, so it will take some time. Thanks!

--
Hyeonggon



Thread overview: 27+ messages
2022-02-21 10:53 [PATCH 0/5] slab cleanups Hyeonggon Yoo
2022-02-21 10:53 ` [PATCH 1/5] mm/sl[au]b: Unify __ksize() Hyeonggon Yoo
2022-02-23 18:39   ` Vlastimil Babka
2022-02-23 19:06     ` Marco Elver
2022-02-24 12:26       ` Vlastimil Babka
2022-02-21 10:53 ` [PATCH 2/5] mm/sl[auo]b: Do not export __ksize() Hyeonggon Yoo
2022-02-21 15:46   ` Matthew Wilcox
2022-02-23  3:26     ` Hyeonggon Yoo
2022-02-23 18:40     ` Vlastimil Babka
2022-02-21 10:53 ` [PATCH 3/5] mm/slab: Do not call kmalloc_large() for unsupported size Hyeonggon Yoo
2022-02-21 15:53   ` Matthew Wilcox
2022-02-22  8:10     ` Hyeonggon Yoo
2022-02-22 19:59       ` Matthew Wilcox
2022-02-23  3:24         ` Hyeonggon Yoo [this message]
2022-02-24 12:48   ` Vlastimil Babka
2022-02-24 13:31     ` Hyeonggon Yoo
2022-02-24 15:08       ` Vlastimil Babka
2022-02-21 10:53 ` [PATCH 4/5] mm/slub: Limit min_partial only in cache creation Hyeonggon Yoo
2022-02-22 23:48   ` David Rientjes
2022-02-23  3:37     ` Hyeonggon Yoo
2022-02-24 12:52       ` Vlastimil Babka
2022-02-21 10:53 ` [PATCH 5/5] mm/slub: Refactor deactivate_slab() Hyeonggon Yoo
2022-02-24 18:16   ` Vlastimil Babka
2022-02-25  9:34     ` Hyeonggon Yoo
2022-02-25  9:50       ` Hyeonggon Yoo
2022-02-25 10:07         ` Vlastimil Babka
2022-02-25 10:26           ` Hyeonggon Yoo
