From: Matthew Wilcox <willy@infradead.org>
To: Christoph Lameter <cl@gentwo.de>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>,
	netdev@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	penberg@kernel.org, vbabka@suse.cz,
	Jakub Kicinski <kuba@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	edumazet@google.com, pabeni@redhat.com
Subject: Re: [PATCH RFC] mm+net: allow to set kmem_cache create flag for SLAB_NEVER_MERGE
Date: Wed, 18 Jan 2023 05:17:15 +0000
Message-ID: <Y8eA2xZ0KC2ZDinu@casper.infradead.org>
In-Reply-To: <36f5761f-d4d9-4ec9-a64-7a6c6c8b956f@gentwo.de>

On Tue, Jan 17, 2023 at 03:54:34PM +0100, Christoph Lameter wrote:
> On Tue, 17 Jan 2023, Jesper Dangaard Brouer wrote:
> 
> > When running different network performance microbenchmarks, I started
> > to notice that performance was reduced (slightly) when machines had
> > longer uptimes. I believe the cause was that 'skbuff_head_cache' got
> > aliased/merged into the general slub cache for 256-byte objects (with
> > my kernel config, without CONFIG_HARDENED_USERCOPY).
> 
> Well, that is a common effect that we see in multiple subsystems. It is
> due to general memory fragmentation. Depending on the prior load, the
> performance could actually be better after some runtime, if the caches
> are populated and trips to the page allocator are avoided.

The page allocator isn't _that_ expensive.  I could see updating several
slabs being more expensive than allocating a new page.
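
For context, the RFC under discussion attacks the merging itself by
letting a cache opt out at creation time.  A minimal sketch, taking the
SLAB_NEVER_MERGE name from the subject line rather than the patch's
exact plumbing:

	/* Sketch only: opt skbuff_head_cache out of slab merging.  The
	 * real cache setup in net/core/skbuff.c goes through
	 * kmem_cache_create_usercopy() to whitelist the skb->cb[] area. */
	skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
					      sizeof(struct sk_buff), 0,
					      SLAB_HWCACHE_ALIGN | SLAB_PANIC |
					      SLAB_NEVER_MERGE,
					      NULL);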

> The merging could actually be beneficial, since there may be more
> partial slabs to allocate from, thus avoiding expensive calls to the
> page allocator.

What might be more effective is allocating higher-order slabs.  I see
that kmalloc-256 allocates a pair of pages and manages 32 objects within
that pair.  It should perform better in Jesper's scenario if it allocated
4 pages and managed 64 objects per slab.
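
To make the geometry explicit (assuming 4 KiB pages and no per-object
overhead, which holds for a power-of-two size like 256):

	objects per slab = (PAGE_SIZE << order) / 256

	order 1:  8192 / 256 = 32 objects  (what kmalloc-256 does today)
	order 2: 16384 / 256 = 64 objects  (the layout suggested above)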

The simplest way to test that should be booting a kernel with
'slub_min_order=2'.  Does that help matters at all, Jesper?  You could
also try slub_min_order=3; going above that starts to get a bit sketchy.
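
If you want to confirm what order a cache actually ended up with, SLUB
exposes it in sysfs.  A throwaway userspace check (the path assumes the
merged cache keeps its kmalloc-256 name):

	/* Read the effective page order of kmalloc-256 via SLUB sysfs. */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/kernel/slab/kmalloc-256/order", "r");
		int order = -1;

		if (!f || fscanf(f, "%d", &order) != 1)
			return 1;
		printf("kmalloc-256 slab order: %d\n", order);
		fclose(f);
		return 0;
	}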


Thread overview: 9+ messages
2023-01-17 13:40 Jesper Dangaard Brouer
2023-01-17 14:54 ` Christoph Lameter
2023-01-18  5:17   ` Matthew Wilcox [this message]
2023-01-19 18:08     ` Jesper Dangaard Brouer
2023-01-24 16:06   ` Hyeonggon Yoo
2023-01-18  7:36 ` Vlastimil Babka
2023-01-23 16:14   ` Jesper Dangaard Brouer
2023-05-31 12:03 ` Vlastimil Babka
2023-05-31 13:59   ` Jesper Dangaard Brouer
