linux-mm.kvack.org archive mirror
From: Vlastimil Babka <vbabka@suse.cz>
To: Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	 David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	 Matthew Wilcox <willy@infradead.org>,
	 "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	 Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	 Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	 Dmitry Vyukov <dvyukov@google.com>,
	linux-mm@kvack.org,  linux-kernel@vger.kernel.org,
	maple-tree@lists.infradead.org,  kasan-dev@googlegroups.com,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH RFC v3 2/9] mm/slub: introduce __kmem_cache_free_bulk() without free hooks
Date: Wed, 29 Nov 2023 10:53:27 +0100	[thread overview]
Message-ID: <20231129-slub-percpu-caches-v3-2-6bcf536772bc@suse.cz> (raw)
In-Reply-To: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>

Currently, when __kmem_cache_alloc_bulk() fails, it frees back the
objects that were allocated before the failure, using
kmem_cache_free_bulk(). Because kmem_cache_free_bulk() calls the free
hooks (kasan etc.) and those expect objects processed by the post alloc
hooks, slab_post_alloc_hook() is called before kmem_cache_free_bulk().

This is wasteful, although not a big concern in practice for the very
rare error path. But in order to efficiently handle percpu array batch
refill and free in the following patch, we will also need a variant of
kmem_cache_free_bulk() that avoids the free hooks. So introduce it first
and use it in the error path too.

As a consequence, __kmem_cache_alloc_bulk() no longer needs the objcg
parameter, so remove it.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f0cd55bb4e11..16748aeada8f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3919,6 +3919,27 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	return same;
 }
 
+/*
+ * Internal bulk free of objects that were not initialised by the post alloc
+ * hooks and thus should not be processed by the free hooks
+ */
+static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
+{
+	if (!size)
+		return;
+
+	do {
+		struct detached_freelist df;
+
+		size = build_detached_freelist(s, size, p, &df);
+		if (!df.slab)
+			continue;
+
+		do_slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt,
+			     _RET_IP_);
+	} while (likely(size));
+}
+
 /* Note that interrupts must be enabled when calling this function. */
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 {
@@ -3940,7 +3961,7 @@ EXPORT_SYMBOL(kmem_cache_free_bulk);
 
 #ifndef CONFIG_SLUB_TINY
 static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
-			size_t size, void **p, struct obj_cgroup *objcg)
+					  size_t size, void **p)
 {
 	struct kmem_cache_cpu *c;
 	unsigned long irqflags;
@@ -4004,14 +4025,13 @@ static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
-	kmem_cache_free_bulk(s, i, p);
+	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 
 }
 #else /* CONFIG_SLUB_TINY */
 static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
-			size_t size, void **p, struct obj_cgroup *objcg)
+				   size_t size, void **p)
 {
 	int i;
 
@@ -4034,8 +4054,7 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 	return i;
 
 error:
-	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
-	kmem_cache_free_bulk(s, i, p);
+	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
 #endif /* CONFIG_SLUB_TINY */
@@ -4055,7 +4074,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	if (unlikely(!s))
 		return 0;
 
-	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
+	i = __kmem_cache_alloc_bulk(s, flags, size, p);
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.

-- 
2.43.0


