From: Vlastimil Babka <vbabka@suse.cz>
To: Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Matthew Wilcox <willy@infradead.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
Alexander Potapenko <glider@google.com>,
Marco Elver <elver@google.com>,
Dmitry Vyukov <dvyukov@google.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
maple-tree@lists.infradead.org, kasan-dev@googlegroups.com,
Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH RFC v3 4/9] mm/slub: free KFENCE objects in slab_free_hook()
Date: Wed, 29 Nov 2023 10:53:29 +0100
Message-ID: <20231129-slub-percpu-caches-v3-4-6bcf536772bc@suse.cz>
In-Reply-To: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>
When freeing an object that was allocated from KFENCE, we do so in the
slowpath __slab_free(), relying on the fact that a KFENCE "slab" cannot
be the cpu slab, so the fastpath has to fall back to the slowpath.
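
For illustration, the fallback works roughly like this (a condensed
sketch of do_slab_free(), with the tid handling and lockless-freelist
details of the real fastpath elided):

  static __fastpath_inline
  void do_slab_free(struct kmem_cache *s, struct slab *slab,
                    void *head, void *tail, int cnt, unsigned long addr)
  {
          struct kmem_cache_cpu *c = raw_cpu_ptr(s->cpu_slab);

          /*
           * A KFENCE "slab" is never the cpu slab, so for a KFENCE
           * object this test always fails and we end up in the
           * slowpath __slab_free().
           */
          if (unlikely(slab != c->slab)) {
                  __slab_free(s, slab, head, tail, cnt, addr);
                  return;
          }

          /* lockless per-cpu freelist fastpath elided */
  }
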
This optimization doesn't help much though, because is_kfence_address()
is checked earlier anyway during free hook processing and detached
freelist building. We can thus simplify the code by making
slab_free_hook() free the KFENCE object immediately, similarly to the
KASAN quarantine handling.
In slab_free_hook() we can place the kfence_free() call above init
processing, as callers have been making sure to set init to false for
KFENCE objects. This simplifies slab_free(). It also places the call
above kasan_slab_free(), which is fine as KASAN skips KFENCE objects
anyway.
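
The resulting order of operations in slab_free_hook() is then roughly
the following (a condensed sketch; the debug and KCSAN hooks are elided
and the init memset is simplified):

  static __always_inline
  bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
  {
          kmemleak_free_recursive(x, s->flags);
          kasan_slab_free_mempool(x);
          /* debug and KCSAN hooks elided */

          /* KFENCE objects are freed here, before any init processing */
          if (kfence_free(kasan_reset_tag(x)))
                  return false;

          /*
           * As memory initialization might be integrated into KASAN,
           * only memset here when KASAN does not handle it.
           */
          if (init && !kasan_has_integrated_init())
                  memset(kasan_reset_tag(x), 0, s->object_size);

          /* KASAN might delay reuse by putting x into its quarantine */
          return !kasan_slab_free(s, x, init);
  }
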
While at it, also determine the init value in slab_free_freelist_hook()
outside of the loop.
This change will also make introducing percpu array caches easier.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/slub.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 7d23f10d42e6..59912a376c6d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1772,7 +1772,7 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
* production configuration these hooks all should produce no code at all.
*
* Returns true if freeing of the object can proceed, false if its reuse
- * was delayed by KASAN quarantine.
+ * was delayed by KASAN quarantine, or it was returned to KFENCE.
*/
static __always_inline
bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
@@ -1790,6 +1790,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
__kcsan_check_access(x, s->object_size,
KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
+ if (kfence_free(kasan_reset_tag(x)))
+ return false;
+
/*
* As memory initialization might be integrated into KASAN,
* kasan_slab_free and initialization memset's must be
@@ -1819,22 +1822,25 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
void *object;
void *next = *head;
void *old_tail = *tail;
+ bool init;
if (is_kfence_address(next)) {
slab_free_hook(s, next, false);
- return true;
+ return false;
}
/* Head and tail of the reconstructed freelist */
*head = NULL;
*tail = NULL;
+ init = slab_want_init_on_free(s);
+
do {
object = next;
next = get_freepointer(s, object);
/* If object's reuse doesn't have to be delayed */
- if (slab_free_hook(s, object, slab_want_init_on_free(s))) {
+ if (slab_free_hook(s, object, init)) {
/* Move object to the new freelist */
set_freepointer(s, object, *head);
*head = object;
@@ -3619,9 +3625,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
stat(s, FREE_SLOWPATH);
- if (kfence_free(head))
- return;
-
if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
free_to_partial_list(s, slab, head, tail, cnt, addr);
return;
@@ -3806,13 +3809,9 @@ static __fastpath_inline
void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
unsigned long addr)
{
- bool init;
-
memcg_slab_free_hook(s, slab, &object, 1);
- init = !is_kfence_address(object) && slab_want_init_on_free(s);
-
- if (likely(slab_free_hook(s, object, init)))
+ if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
do_slab_free(s, slab, object, object, 1, addr);
}
--
2.43.0