linux-mm.kvack.org archive mirror
From: Kent Overstreet <kent.overstreet@linux.dev>
To: Harry Yoo <harry.yoo@oracle.com>
Cc: Zhenhua Huang <quic_zhenhuah@quicinc.com>,
	rientjes@google.com,  vbabka@suse.cz, cl@gentwo.org,
	roman.gushchin@linux.dev, surenb@google.com,
	 pasha.tatashin@soleen.com, akpm@linux-foundation.org,
	corbet@lwn.net, linux-mm@kvack.org,  linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, quic_tingweiz@quicinc.com
Subject: Re: [PATCH 1/1] mm: slub: Introduce one knob to control the track of slub object
Date: Wed, 23 Jul 2025 07:38:45 -0400	[thread overview]
Message-ID: <aqscos5ivap537qljhqa2pntrxfimfkfuflji62rl2picpvaiv@sams7xovbtn6> (raw)
In-Reply-To: <aICpMWKNvhveAzth@hyeyoo>

On Wed, Jul 23, 2025 at 06:19:45PM +0900, Harry Yoo wrote:
> The subject is a bit misleading. I think it should be something like
> "alloc_tag: add an option to disable slab object accounting".
> 
> On Wed, Jul 23, 2025 at 04:03:28PM +0800, Zhenhua Huang wrote:
> > The memory profiling feature tracks allocations at both the page level
> > ("alloc_slab_page") and the slub object level. To track object-level
> > allocations, slabobj_ext consumes 16 bytes per object when
> > CONFIG_MEMCG is set.
> > Based on the data I've collected, this overhead accounts for
> > approximately 5.7% of slub memory usage, a considerable cost:
> > w/ noslub  slub_debug=-   Slab: 87520 kB
> > w/o noslub slub_debug=-   Slab: 92812 kB
> 
> Yes, the cost is not small and I hate that we have to pay 16 bytes of
> memory overhead for each slab object when both memcg and memory profiling
> are enabled.

I believe we did something about this for page_obj_ext; the exact
pointer compression scheme we went with escapes me at the moment.

We did it for pages and not slab because page_obj_ext is a large,
fixed-size overhead and the page allocator is slower anyway, but it's
conceivable we could do the same for slub if the memory overhead vs. CPU
overhead tradeoff is worth it.

And pointer compression is a valuable technique in general; coming up
with some fast general-purpose code (perhaps involving virtual mappings,
since we're not as limited on virtual address space as we used to be)
might be worth someone's time to explore.


Thread overview: 7+ messages
2025-07-23  8:03 Zhenhua Huang
2025-07-23  9:19 ` Harry Yoo
2025-07-23 10:21   ` Zhenhua Huang
2025-07-23 11:38   ` Kent Overstreet [this message]
2025-07-24  3:29     ` Zhenhua Huang
2025-07-23 11:31 ` Kent Overstreet
2025-07-24  3:57   ` Zhenhua Huang
