Subject: [GIT PULL] slab updates for 6.8
From: Vlastimil Babka @ 2024-01-05  9:36 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: David Rientjes, Joonsoo Kim, Christoph Lameter, Pekka Enberg,
	Andrew Morton, linux-mm, LKML, patches, Roman Gushchin,
	Hyeonggon Yoo, Chengming Zhou, Stephen Rothwell

Hi Linus,

once the merge window opens, please pull the latest slab updates from:

  git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git tags/slab-for-6.8

There are more conflicts (with the mm tree) than usual this time, but I believe
it's a one-off situation caused by the amount of code being deleted or shuffled
around as part of the SLAB removal.

Stephen's -next resolutions (merging slab-next after mm):
https://lore.kernel.org/all/20240102150224.3c091932@canb.auug.org.au/
https://lore.kernel.org/all/20240102151332.48a87d86@canb.auug.org.au/
https://lore.kernel.org/all/20240102153438.5b29f8c5@canb.auug.org.au/

Only the last one is more involved, as changes to __kmalloc_large_node() and
free_large_kmalloc() in mm/slab_common.c from the mm tree need to be replicated
in mm/slub.c.

I have tried the opposite direction (mm after slab) and the resolution was
basically the same. Parking this slab PR until the mm PRs are merged is
certainly an option.

Thanks,
Vlastimil

======================================

- SLUB: delayed freezing of CPU partial slabs (Chengming Zhou)

  Freezing is an operation involving cmpxchg_double() that makes a slab
  exclusive to a particular CPU. Chengming noticed that we also use it in
  situations where we are not yet installing the slab as the CPU slab, because
  freezing also indicates that the slab is not on the shared (node partial)
  list. This results in redundant freeze/unfreeze operations, which can be
  avoided by marking presence on the shared list separately, reusing the
  PG_workingset flag (see the toy sketch at the end of this item).

  This approach neatly avoids the issues described in 9b1ea29bc0d7 ("Revert
  "mm, slub: consider rest of partial list if acquire_slab() fails"") as we can
  now grab a slab from the shared list in a quick and guaranteed way without
  the cmpxchg_double() operation that amplifies the lock contention and can fail.

  As a result, lkp has reported a 34.2% improvement in stress-ng.rawudp.ops_per_sec.
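
  A toy userspace model of the idea (all names here, such as toy_slab and
  toy_get_partial(), are made up for illustration; this is not the mm/slub.c
  code): presence on the shared list is tracked by a plain flag bit, so a slab
  can be taken off that list with an ordinary list operation under the list
  lock, and the expensive freeze is deferred until the slab actually becomes
  the CPU slab.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct toy_slab {
            struct toy_slab *next;
            unsigned long flags;    /* bit 0 stands in for PG_workingset */
            bool frozen;            /* set only once it becomes the CPU slab */
    };

    #define TOY_ON_PARTIAL (1UL << 0)

    static struct toy_slab *partial_list;
    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Take a slab off the shared list: a plain list operation that is
     * guaranteed to succeed, no cmpxchg involved. */
    static struct toy_slab *toy_get_partial(void)
    {
            pthread_mutex_lock(&list_lock);
            struct toy_slab *slab = partial_list;
            if (slab) {
                    partial_list = slab->next;
                    slab->flags &= ~TOY_ON_PARTIAL;  /* clear list-presence marker */
            }
            pthread_mutex_unlock(&list_lock);
            return slab;
    }

    /* Freezing (the cmpxchg_double() in the real code) happens only here,
     * once the slab is actually being installed as the CPU slab. */
    static void toy_freeze(struct toy_slab *slab)
    {
            slab->frozen = true;
    }

    int main(void)
    {
            struct toy_slab s = { .flags = TOY_ON_PARTIAL };

            partial_list = &s;
            struct toy_slab *slab = toy_get_partial();
            if (slab) {
                    toy_freeze(slab);
                    printf("took slab off the shared list, frozen=%d\n", slab->frozen);
            }
            return 0;
    }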

- SLAB removal and SLUB cleanups (Vlastimil Babka)

  The SLAB allocator has been deprecated since 6.5 and nobody has objected so
  far. We agreed at LSF/MM to wait until the next LTS, which is 6.6, so we
  should be good to go now.

  This doesn't yet erase all traces of SLAB outside of mm/, so some dead code,
  comments, or documentation remain; they will be cleaned up gradually (some
  series are already in the works).

  Removing the choice of allocators has already made it possible to simplify
  and optimize the code that wires the kmalloc APIs up to the SLUB
  implementation (a toy sketch of that dispatch follows below).
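
  For readers unfamiliar with that wiring, a toy model of the dispatch (made-up
  names, simplified sizes and threshold; not the kernel code): a kmalloc()
  request either maps its size to one of a fixed set of size-class caches
  backed by SLUB, or, above a threshold, is treated as a "large" allocation and
  handed to the page allocator.

    #include <stdio.h>

    /* Toy size-class table; the kernel's table, threshold and indexing
     * (kmalloc_index()/kmalloc_type()) differ. */
    static const size_t toy_sizes[] = {
            8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096
    };
    #define TOY_MAX_CACHE_SIZE 4096

    static int toy_kmalloc_index(size_t size)
    {
            for (unsigned int i = 0; i < sizeof(toy_sizes) / sizeof(toy_sizes[0]); i++)
                    if (size <= toy_sizes[i])
                            return i;
            return -1;
    }

    /* Dispatch: small requests go to a size-class cache, anything larger
     * bypasses the caches (the kernel sends those to the page allocator). */
    static void toy_kmalloc(size_t size)
    {
            if (size > TOY_MAX_CACHE_SIZE) {
                    printf("%zu bytes: large path, page allocator\n", size);
                    return;
            }
            int idx = toy_kmalloc_index(size);
            printf("%zu bytes: kmalloc cache #%d (%zu-byte objects)\n",
                   size, idx, toy_sizes[idx]);
    }

    int main(void)
    {
            toy_kmalloc(24);      /* -> 32-byte cache */
            toy_kmalloc(4000);    /* -> 4096-byte cache */
            toy_kmalloc(8192);    /* -> large path */
            return 0;
    }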

----------------------------------------------------------------
Chengming Zhou (9):
      slub: Reflow ___slab_alloc()
      slub: Change get_partial() interfaces to return slab
      slub: Keep track of whether slub is on the per-node partial list
      slub: Prepare __slab_free() for unfrozen partial slab out of node partial list
      slub: Introduce freeze_slab()
      slub: Delay freezing of partial slabs
      slub: Optimize deactivate_slab()
      slub: Rename all *unfreeze_partials* functions to *put_partials*
      slub: Update frozen slabs documentations in the source

Vlastimil Babka (26):
      mm/slab, docs: switch mm-api docs generation from slab.c to slub.c
      mm/slab: remove CONFIG_SLAB from all Kconfig and Makefile
      KASAN: remove code paths guarded by CONFIG_SLAB
      KFENCE: cleanup kfence_guarded_alloc() after CONFIG_SLAB removal
      mm/memcontrol: remove CONFIG_SLAB #ifdef guards
      cpu/hotplug: remove CPUHP_SLAB_PREPARE hooks
      mm/slab: remove CONFIG_SLAB code from slab common code
      mm/mempool/dmapool: remove CONFIG_DEBUG_SLAB ifdefs
      mm/slab: remove mm/slab.c and slab_def.h
      mm/slab: move struct kmem_cache_cpu declaration to slub.c
      mm/slab: move the rest of slub_def.h to mm/slab.h
      mm/slab: consolidate includes in the internal mm/slab.h
      mm/slab: move pre/post-alloc hooks from slab.h to slub.c
      mm/slab: move memcg related functions from slab.h to slub.c
      mm/slab: move struct kmem_cache_node from slab.h to slub.c
      mm/slab: move kfree() from slab_common.c to slub.c
      mm/slab: move kmalloc_slab() to mm/slab.h
      mm/slab: move kmalloc() functions from slab_common.c to slub.c
      mm/slub: remove slab_alloc() and __kmem_cache_alloc_lru() wrappers
      mm/slub: optimize alloc fastpath code layout
      mm/slub: optimize free fast path code layout
      mm/slub: fix bulk alloc and free stats
      mm/slub: introduce __kmem_cache_free_bulk() without free hooks
      mm/slub: handle bulk and single object freeing separately
      mm/slub: free KFENCE objects in slab_free_hook()
      Merge branch 'slab/for-6.8/slub-hook-cleanups' into slab/for-next

 CREDITS                           |   12 +-
 Documentation/core-api/mm-api.rst |    2 +-
 arch/arm64/Kconfig                |    2 +-
 arch/s390/Kconfig                 |    2 +-
 arch/x86/Kconfig                  |    2 +-
 include/linux/cpuhotplug.h        |    1 -
 include/linux/slab.h              |   22 +-
 include/linux/slab_def.h          |  124 --
 include/linux/slub_def.h          |  204 --
 kernel/cpu.c                      |    5 -
 lib/Kconfig.debug                 |    1 -
 lib/Kconfig.kasan                 |   11 +-
 lib/Kconfig.kfence                |    2 +-
 lib/Kconfig.kmsan                 |    2 +-
 mm/Kconfig                        |   68 +-
 mm/Kconfig.debug                  |   16 +-
 mm/Makefile                       |    6 +-
 mm/dmapool.c                      |    2 +-
 mm/kasan/common.c                 |   13 +-
 mm/kasan/kasan.h                  |    3 +-
 mm/kasan/quarantine.c             |    7 -
 mm/kasan/report.c                 |    1 +
 mm/kfence/core.c                  |    4 -
 mm/memcontrol.c                   |    6 +-
 mm/mempool.c                      |    6 +-
 mm/slab.c                         | 4026 -------------------------------------
 mm/slab.h                         |  551 ++---
 mm/slab_common.c                  |  231 +--
 mm/slub.c                         | 1137 ++++++++---
 29 files changed, 1094 insertions(+), 5375 deletions(-)
 delete mode 100644 include/linux/slab_def.h
 delete mode 100644 include/linux/slub_def.h
 delete mode 100644 mm/slab.c



Subject: Re: [GIT PULL] slab updates for 6.8
From: pr-tracker-bot @ 2024-01-09 21:40 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Linus Torvalds, David Rientjes, Joonsoo Kim, Christoph Lameter,
	Pekka Enberg, Andrew Morton, linux-mm, LKML, patches,
	Roman Gushchin, Hyeonggon Yoo, Chengming Zhou, Stephen Rothwell

The pull request you sent on Fri, 5 Jan 2024 10:36:08 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git tags/slab-for-6.8

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/d30e51aa7b1f6fa7dd78d4598d1e4c047fcc3fb9

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html

