linux-mm.kvack.org archive mirror
* [PATCH V5 0/8] mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space
@ 2026-01-05  8:02 Harry Yoo
  2026-01-05  8:02 ` [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align Harry Yoo
                   ` (9 more replies)
  0 siblings, 10 replies; 17+ messages in thread
From: Harry Yoo @ 2026-01-05  8:02 UTC (permalink / raw)
  To: akpm, vbabka
  Cc: andreyknvl, cl, dvyukov, glider, hannes, linux-mm, mhocko,
	muchun.song, rientjes, roman.gushchin, ryabinin.a.a,
	shakeel.butt, surenb, vincenzo.frascino, yeoreum.yun, harry.yoo,
	tytso, adilger.kernel, linux-ext4, linux-kernel, cgroups, hao.li

Happy new year!

V4: https://lore.kernel.org/linux-mm/20251027122847.320924-1-harry.yoo@oracle.com
V4 -> V5:
- Patch 4: Fixed returning false when the return type is unsigned long
- Patch 7: Fixed incorrect calculation of slabobj_ext offset (Thanks Hao!)

When CONFIG_MEMCG and CONFIG_MEM_ALLOC_PROFILING are enabled,
the kernel allocates two pointers per object: one for the memory cgroup
(actually, obj_cgroup) to which it belongs, and another for the code
location that requested the allocation.

In two special cases, this overhead can be eliminated by allocating
slabobj_ext metadata from unused space within a slab:

  Case 1. The "leftover" space after the last slab object is larger than
          the size of an array of slabobj_ext.

  Case 2. The per-object alignment padding is larger than
          sizeof(struct slabobj_ext).

For these two cases, one or two pointers can be saved per slab object.
Examples: ext4 inode cache (case 1) and xfs inode cache (case 2).
That's approximately 0.7-0.8% (memcg) or 1.5-1.6% (memcg + mem profiling)
of the total inode cache size.

Implementing case 2 is not straightforward, because the existing code
assumes that slab->obj_exts points to an array of slabobj_ext, and
case 2 breaks that assumption.

As suggested by Vlastimil, abstract access to individual slabobj_ext
metadata via a new helper named slab_obj_ext():

static inline struct slabobj_ext *slab_obj_ext(struct slab *slab,
                                               unsigned long obj_exts,
                                               unsigned int index)
{
        return (struct slabobj_ext *)(obj_exts + slab_get_stride(slab) * index);
}

In the normal case (including case 1), slab->obj_exts points to an array
of slabobj_ext, and the stride is sizeof(struct slabobj_ext).

In case 2, the stride is s->size and
slab->obj_exts = slab_address(slab) + s->red_left_pad + (offset of slabobj_ext)

With this approach, the memcg charging fastpath doesn't need to care
how the slabobj_ext metadata is stored.

Harry Yoo (8):
  mm/slab: use unsigned long for orig_size to ensure proper metadata
    align
  mm/slab: allow specifying free pointer offset when using constructor
  ext4: specify the free pointer offset for ext4_inode_cache
  mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper
  mm/slab: use stride to access slabobj_ext
  mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison
  mm/slab: save memory by allocating slabobj_ext array from leftover
  mm/slab: place slabobj_ext metadata in unused space within s->size

 fs/ext4/super.c      |  20 ++-
 include/linux/slab.h |  39 +++--
 mm/memcontrol.c      |  31 +++-
 mm/slab.h            | 120 ++++++++++++++-
 mm/slab_common.c     |   8 +-
 mm/slub.c            | 345 +++++++++++++++++++++++++++++++++++--------
 6 files changed, 466 insertions(+), 97 deletions(-)

-- 
2.43.0




Thread overview: 17+ messages
2026-01-05  8:02 [PATCH V5 0/8] mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space Harry Yoo
2026-01-05  8:02 ` [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align Harry Yoo
2026-01-07 11:43   ` Vlastimil Babka
2026-01-05  8:02 ` [PATCH V5 2/8] mm/slab: allow specifying free pointer offset when using constructor Harry Yoo
2026-01-05  8:02 ` [PATCH V5 3/8] ext4: specify the free pointer offset for ext4_inode_cache Harry Yoo
2026-01-07 13:54   ` Vlastimil Babka
2026-01-05  8:02 ` [PATCH V5 4/8] mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper Harry Yoo
2026-01-07 14:53   ` Hao Li
2026-01-07 14:56   ` Vlastimil Babka
2026-01-05  8:02 ` [PATCH V5 5/8] mm/slab: use stride to access slabobj_ext Harry Yoo
2026-01-05  8:02 ` [PATCH V5 6/8] mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison Harry Yoo
2026-01-05  8:02 ` [PATCH V5 7/8] mm/slab: save memory by allocating slabobj_ext array from leftover Harry Yoo
2026-01-07 17:08   ` Vlastimil Babka
2026-01-05  8:02 ` [PATCH V5 8/8] mm/slab: place slabobj_ext metadata in unused space within s->size Harry Yoo
2026-01-07 17:33   ` Vlastimil Babka
2026-01-05  8:05 ` [PATCH V5 0/8] mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space Harry Yoo
2026-01-07 17:43 ` Vlastimil Babka
