* [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled
@ 2024-09-09 1:29 Feng Tang
2024-09-09 1:29 ` [PATCH 1/5] mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on Feng Tang
` (5 more replies)
0 siblings, 6 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-09 1:29 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
Danilo Krummrich's patch [1] raised a problem with krealloc(): it doesn't
know the caller's actual request size. Say the object comes from a 64-byte
kmalloc cache, but the original caller only requested 48 bytes. When
krealloc() shrinks or grows within the same object, or allocates a new
bigger object, it lacks this 'original size' information to do accurate
data preserving or zeroing (when __GFP_ZERO is set).
When certain slub debug options are enabled, kmalloc caches do track this
'orig_size'. As suggested by Vlastimil, utilize it to do more accurate
data handling, as well as to enforce the kmalloc-redzone sanity check.
To make 'orig_size' accurate, some kasan/slub metadata handling is
adjusted. A slub kunit test case for krealloc() is also added.
This patchset depends on patches in both the -mm and slab trees, so it
is based on the linux-next tree, version '20240905'.
[1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
Thanks,
Feng
Feng Tang (5):
mm/kasan: Don't store metadata inside kmalloc object when
slub_debug_orig_size is on
mm/slub: Consider kfence case for get_orig_size()
mm/slub: Improve redzone check and zeroing for krealloc()
kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit
case
mm/slub, kunit: Add testcase for krealloc redzone and zeroing
include/kunit/test.h | 6 ++
lib/slub_kunit.c | 46 +++++++++++++++
mm/kasan/generic.c | 5 +-
mm/kfence/kfence_test.c | 9 +--
mm/slab.h | 6 ++
mm/slab_common.c | 84 ---------------------------
mm/slub.c | 125 ++++++++++++++++++++++++++++++++++------
7 files changed, 171 insertions(+), 110 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 1/5] mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on
2024-09-09 1:29 [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
@ 2024-09-09 1:29 ` Feng Tang
2024-09-09 16:24 ` Andrey Konovalov
2024-09-09 1:29 ` [PATCH 2/5] mm/slub: Consider kfence case for get_orig_size() Feng Tang
` (4 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Feng Tang @ 2024-09-09 1:29 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
For a kmalloc object, when both the kasan and slub redzone sanity checks
are enabled, both may manipulate its data space, e.g. by storing kasan
free metadata or setting up the kmalloc redzone, which can affect the
accuracy of that object's 'orig_size'.
As an accurate 'orig_size' will soon be needed by functions like
krealloc(), save kasan's free metadata in slub's metadata area instead
of inside the object when 'orig_size' is enabled.
This also makes the code easier to maintain and understand. Size-wise,
when these two options are both enabled, the slub metadata space is
already huge, and this only slightly increases the overall size.
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/kasan/generic.c | 5 ++++-
mm/slab.h | 6 ++++++
mm/slub.c | 17 -----------------
3 files changed, 10 insertions(+), 18 deletions(-)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 6310a180278b..cad376199d47 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -393,8 +393,11 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
* be touched after it was freed, or
* 2. Object has a constructor, which means it's expected to
* retain its content until the next allocation.
+ * 3. It is from a kmalloc cache which enables the debug option
+ * to store original size.
*/
- if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor) {
+ if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
+ slub_debug_orig_size(cache)) {
cache->kasan_info.free_meta_offset = *size;
*size += sizeof(struct kasan_free_meta);
goto free_meta_added;
diff --git a/mm/slab.h b/mm/slab.h
index 90f95bda4571..7a0e9b34ba2a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -689,6 +689,12 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
void __check_heap_object(const void *ptr, unsigned long n,
const struct slab *slab, bool to_user);
+static inline bool slub_debug_orig_size(struct kmem_cache *s)
+{
+ return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
+ (s->flags & SLAB_KMALLOC));
+}
+
#ifdef CONFIG_SLUB_DEBUG
void skip_orig_size_check(struct kmem_cache *s, const void *object);
#endif
diff --git a/mm/slub.c b/mm/slub.c
index 23761533329d..996a72fa6f62 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -230,12 +230,6 @@ static inline bool kmem_cache_debug(struct kmem_cache *s)
return kmem_cache_debug_flags(s, SLAB_DEBUG_FLAGS);
}
-static inline bool slub_debug_orig_size(struct kmem_cache *s)
-{
- return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
- (s->flags & SLAB_KMALLOC));
-}
-
void *fixup_red_left(struct kmem_cache *s, void *p)
{
if (kmem_cache_debug_flags(s, SLAB_RED_ZONE))
@@ -760,21 +754,10 @@ static inline void set_orig_size(struct kmem_cache *s,
void *object, unsigned int orig_size)
{
void *p = kasan_reset_tag(object);
- unsigned int kasan_meta_size;
if (!slub_debug_orig_size(s))
return;
- /*
- * KASAN can save its free meta data inside of the object at offset 0.
- * If this meta data size is larger than 'orig_size', it will overlap
- * the data redzone in [orig_size+1, object_size]. Thus, we adjust
- * 'orig_size' to be as at least as big as KASAN's meta data.
- */
- kasan_meta_size = kasan_metadata_size(s, true);
- if (kasan_meta_size > orig_size)
- orig_size = kasan_meta_size;
-
p += get_info_end(s);
p += sizeof(struct track) * 2;
--
2.34.1
* [PATCH 2/5] mm/slub: Consider kfence case for get_orig_size()
2024-09-09 1:29 [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
2024-09-09 1:29 ` [PATCH 1/5] mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on Feng Tang
@ 2024-09-09 1:29 ` Feng Tang
2024-09-09 1:29 ` [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc() Feng Tang
` (3 subsequent siblings)
5 siblings, 0 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-09 1:29 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
When the 'orig_size' of a kmalloc object is enabled by a debug option,
it should contain either the actual requested size or the cache's
'object_size'.
But that is not true if the object is kfence-allocated: its 'orig_size'
in the metadata could be zero or some other value. This is not a big
issue for the current 'orig_size' usage, as init_object() and
check_object() are skipped for kfence addresses during the alloc/free
process.
As 'orig_size' will be used by functions like krealloc(), handle this
by returning the 'object_size' from get_orig_size() for kfence
addresses.
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 996a72fa6f62..4cb3822dba08 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -768,7 +768,7 @@ static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
{
void *p = kasan_reset_tag(object);
- if (!slub_debug_orig_size(s))
+ if (!slub_debug_orig_size(s) || is_kfence_address(object))
return s->object_size;
p += get_info_end(s);
--
2.34.1
* [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc()
2024-09-09 1:29 [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
2024-09-09 1:29 ` [PATCH 1/5] mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on Feng Tang
2024-09-09 1:29 ` [PATCH 2/5] mm/slub: Consider kfence case for get_orig_size() Feng Tang
@ 2024-09-09 1:29 ` Feng Tang
2024-09-10 10:06 ` Danilo Krummrich
2024-09-10 13:15 ` Vlastimil Babka
2024-09-09 1:29 ` [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case Feng Tang
` (2 subsequent siblings)
5 siblings, 2 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-09 1:29 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
One problem with the current krealloc() is that it doesn't know the
caller's actual request size. Say the object comes from a 64-byte
kmalloc cache, but the original caller only requested 48 bytes. When
krealloc() shrinks or grows within the same object, or allocates a new
bigger object, it lacks this 'original size' information to do accurate
data preserving or zeroing (when __GFP_ZERO is set).
When certain slub debug options are enabled, kmalloc caches do track
this 'orig_size'. So utilize it to do more accurate data handling, as
well as to enforce the kmalloc-redzone sanity check.
The krealloc() related code is moved from slab_common.c to slub.c to
make the function calls more efficient.
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/slab_common.c | 84 -------------------------------------
mm/slub.c | 106 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 106 insertions(+), 84 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index ad438ba62485..e59942fb7970 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1297,90 +1297,6 @@ module_init(slab_proc_init);
#endif /* CONFIG_SLUB_DEBUG */
-static __always_inline __realloc_size(2) void *
-__do_krealloc(const void *p, size_t new_size, gfp_t flags)
-{
- void *ret;
- size_t ks;
-
- /* Check for double-free before calling ksize. */
- if (likely(!ZERO_OR_NULL_PTR(p))) {
- if (!kasan_check_byte(p))
- return NULL;
- ks = ksize(p);
- } else
- ks = 0;
-
- /* If the object still fits, repoison it precisely. */
- if (ks >= new_size) {
- /* Zero out spare memory. */
- if (want_init_on_alloc(flags)) {
- kasan_disable_current();
- memset((void *)p + new_size, 0, ks - new_size);
- kasan_enable_current();
- }
-
- p = kasan_krealloc((void *)p, new_size, flags);
- return (void *)p;
- }
-
- ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
- if (ret && p) {
- /* Disable KASAN checks as the object's redzone is accessed. */
- kasan_disable_current();
- memcpy(ret, kasan_reset_tag(p), ks);
- kasan_enable_current();
- }
-
- return ret;
-}
-
-/**
- * krealloc - reallocate memory. The contents will remain unchanged.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * If @p is %NULL, krealloc() behaves exactly like kmalloc(). If @new_size
- * is 0 and @p is not a %NULL pointer, the object pointed to is freed.
- *
- * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
- * initial memory allocation, every subsequent call to this API for the same
- * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
- * __GFP_ZERO is not fully honored by this API.
- *
- * This is the case, since krealloc() only knows about the bucket size of an
- * allocation (but not the exact size it was allocated with) and hence
- * implements the following semantics for shrinking and growing buffers with
- * __GFP_ZERO.
- *
- * new bucket
- * 0 size size
- * |--------|----------------|
- * | keep | zero |
- *
- * In any case, the contents of the object pointed to are preserved up to the
- * lesser of the new and old sizes.
- *
- * Return: pointer to the allocated memory or %NULL in case of error
- */
-void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
-{
- void *ret;
-
- if (unlikely(!new_size)) {
- kfree(p);
- return ZERO_SIZE_PTR;
- }
-
- ret = __do_krealloc(p, new_size, flags);
- if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret))
- kfree(p);
-
- return ret;
-}
-EXPORT_SYMBOL(krealloc_noprof);
-
/**
* kfree_sensitive - Clear sensitive information in memory before freeing
* @p: object to free memory of
diff --git a/mm/slub.c b/mm/slub.c
index 4cb3822dba08..d4c938dfb89e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4709,6 +4709,112 @@ void kfree(const void *object)
}
EXPORT_SYMBOL(kfree);
+static __always_inline __realloc_size(2) void *
+__do_krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+ void *ret;
+ size_t ks;
+ int orig_size = 0;
+ struct kmem_cache *s;
+
+ /* Check for double-free before calling ksize. */
+ if (likely(!ZERO_OR_NULL_PTR(p))) {
+ if (!kasan_check_byte(p))
+ return NULL;
+
+ s = virt_to_cache(p);
+ orig_size = get_orig_size(s, (void *)p);
+ ks = s->object_size;
+ } else
+ ks = 0;
+
+ /* If the object doesn't fit, allocate a bigger one */
+ if (new_size > ks)
+ goto alloc_new;
+
+ /* Zero out spare memory. */
+ if (want_init_on_alloc(flags)) {
+ kasan_disable_current();
+ if (orig_size < new_size)
+ memset((void *)p + orig_size, 0, new_size - orig_size);
+ else
+ memset((void *)p + new_size, 0, ks - new_size);
+ kasan_enable_current();
+ }
+
+ if (slub_debug_orig_size(s) && !is_kfence_address(p)) {
+ set_orig_size(s, (void *)p, new_size);
+ if (s->flags & SLAB_RED_ZONE && new_size < ks)
+ memset_no_sanitize_memory((void *)p + new_size,
+ SLUB_RED_ACTIVE, ks - new_size);
+ }
+
+ p = kasan_krealloc((void *)p, new_size, flags);
+ return (void *)p;
+
+alloc_new:
+ ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
+ if (ret && p) {
+ /* Disable KASAN checks as the object's redzone is accessed. */
+ kasan_disable_current();
+ if (orig_size)
+ memcpy(ret, kasan_reset_tag(p), orig_size);
+ kasan_enable_current();
+ }
+
+ return ret;
+}
+
+/**
+ * krealloc - reallocate memory. The contents will remain unchanged.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * If @p is %NULL, krealloc() behaves exactly like kmalloc(). If @new_size
+ * is 0 and @p is not a %NULL pointer, the object pointed to is freed.
+ *
+ * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
+ * initial memory allocation, every subsequent call to this API for the same
+ * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
+ * __GFP_ZERO is not fully honored by this API.
+ *
+ * When slub_debug_orig_size() is off, krealloc() only knows about the
+ * bucket size of an allocation (but not the exact size it was allocated
+ * with) and hence implements the following semantics for shrinking and
+ * growing buffers with __GFP_ZERO.
+ *
+ * new bucket
+ * 0 size size
+ * |--------|----------------|
+ * | keep | zero |
+ *
+ * Otherwise, the original allocation size 'orig_size' can be used to
+ * precisely zero only the newly exposed bytes, and the new size will
+ * also be stored as the new 'orig_size'.
+ *
+ * In any case, the contents of the object pointed to are preserved up to the
+ * lesser of the new and old sizes.
+ *
+ * Return: pointer to the allocated memory or %NULL in case of error
+ */
+void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
+{
+ void *ret;
+
+ if (unlikely(!new_size)) {
+ kfree(p);
+ return ZERO_SIZE_PTR;
+ }
+
+ ret = __do_krealloc(p, new_size, flags);
+ if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret))
+ kfree(p);
+
+ return ret;
+}
+EXPORT_SYMBOL(krealloc_noprof);
+
struct detached_freelist {
struct slab *slab;
void *tail;
--
2.34.1
* [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case
2024-09-09 1:29 [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
` (2 preceding siblings ...)
2024-09-09 1:29 ` [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc() Feng Tang
@ 2024-09-09 1:29 ` Feng Tang
2024-09-10 13:17 ` Vlastimil Babka
2024-09-09 1:29 ` [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing Feng Tang
2024-09-09 17:12 ` [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Vlastimil Babka
5 siblings, 1 reply; 21+ messages in thread
From: Feng Tang @ 2024-09-09 1:29 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
The KFENCE_TEST_REQUIRES macro is convenient for checking whether a
test case's prerequisites are met. Lift it into kunit/test.h so that
all kunit test cases can benefit from it.
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
include/kunit/test.h | 6 ++++++
mm/kfence/kfence_test.c | 9 ++-------
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 5ac237c949a0..8a8027e10b89 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -643,6 +643,12 @@ void __printf(2, 3) kunit_log_append(struct string_stream *log, const char *fmt,
WRITE_ONCE(test->last_seen.line, __LINE__); \
} while (0)
+#define KUNIT_TEST_REQUIRES(test, cond) do { \
+ if (!(cond)) \
+ kunit_skip((test), "Test requires: " #cond); \
+} while (0)
+
+
/**
* KUNIT_SUCCEED() - A no-op expectation. Only exists for code clarity.
* @test: The test context object.
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 00fd17285285..5dbb22c8c44f 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -32,11 +32,6 @@
#define arch_kfence_test_address(addr) (addr)
#endif
-#define KFENCE_TEST_REQUIRES(test, cond) do { \
- if (!(cond)) \
- kunit_skip((test), "Test requires: " #cond); \
-} while (0)
-
/* Report as observed from console. */
static struct {
spinlock_t lock;
@@ -561,7 +556,7 @@ static void test_init_on_free(struct kunit *test)
};
int i;
- KFENCE_TEST_REQUIRES(test, IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON));
+ KUNIT_TEST_REQUIRES(test, IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON));
/* Assume it hasn't been disabled on command line. */
setup_test_cache(test, size, 0, NULL);
@@ -609,7 +604,7 @@ static void test_gfpzero(struct kunit *test)
int i;
/* Skip if we think it'd take too long. */
- KFENCE_TEST_REQUIRES(test, kfence_sample_interval <= 100);
+ KUNIT_TEST_REQUIRES(test, kfence_sample_interval <= 100);
setup_test_cache(test, size, 0, NULL);
buf1 = test_alloc(test, size, GFP_KERNEL, ALLOCATE_ANY);
--
2.34.1
* [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing
2024-09-09 1:29 [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
` (3 preceding siblings ...)
2024-09-09 1:29 ` [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case Feng Tang
@ 2024-09-09 1:29 ` Feng Tang
2024-09-10 10:09 ` Danilo Krummrich
2024-09-10 13:29 ` Vlastimil Babka
2024-09-09 17:12 ` [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Vlastimil Babka
5 siblings, 2 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-09 1:29 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
Danilo Krummrich raised an issue about krealloc()+GFP_ZERO [1], and
Vlastimil suggested adding a test case to sanity-check the kmalloc
redzone and zeroing by utilizing kmalloc's 'orig_size' debug feature.
It covers krealloc() shrinking and growing within the current kmalloc
object, as well as re-allocating a new bigger object.
Users can add the "slub_debug" kernel cmdline parameter to test it.
[1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
lib/slub_kunit.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 6e3a1e5a7142..03e0089149ad 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -186,6 +186,51 @@ static void test_leak_destroy(struct kunit *test)
KUNIT_EXPECT_EQ(test, 1, slab_errors);
}
+static void test_krealloc_redzone_zeroing(struct kunit *test)
+{
+ char *p;
+ int i;
+
+ KUNIT_TEST_REQUIRES(test, __slub_debug_enabled());
+
+ /* Allocate a 64B kmalloc object */
+ p = kzalloc(48, GFP_KERNEL);
+ if (unlikely(is_kfence_address(p))) {
+ kfree(p);
+ return;
+ }
+ memset(p, 0xff, 48);
+
+ kasan_disable_current();
+ OPTIMIZER_HIDE_VAR(p);
+
+ /* Test shrink */
+ p = krealloc(p, 40, GFP_KERNEL | __GFP_ZERO);
+ for (i = 40; i < 64; i++)
+ KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
+
+ /* Test grow within the same 64B kmalloc object */
+ p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO);
+ for (i = 40; i < 56; i++)
+ KUNIT_EXPECT_EQ(test, p[i], 0);
+ for (i = 56; i < 64; i++)
+ KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
+
+ /* Test grow with allocating a bigger 128B object */
+ p = krealloc(p, 112, GFP_KERNEL | __GFP_ZERO);
+ if (unlikely(is_kfence_address(p)))
+ goto exit;
+
+ for (i = 56; i < 112; i++)
+ KUNIT_EXPECT_EQ(test, p[i], 0);
+ for (i = 112; i < 128; i++)
+ KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
+
+exit:
+ kfree(p);
+ kasan_enable_current();
+}
+
static int test_init(struct kunit *test)
{
slab_errors = 0;
@@ -196,6 +241,7 @@ static int test_init(struct kunit *test)
}
static struct kunit_case test_cases[] = {
+ KUNIT_CASE(test_krealloc_redzone_zeroing),
KUNIT_CASE(test_clobber_zone),
#ifndef CONFIG_KASAN
--
2.34.1
* Re: [PATCH 1/5] mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on
2024-09-09 1:29 ` [PATCH 1/5] mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on Feng Tang
@ 2024-09-09 16:24 ` Andrey Konovalov
2024-09-10 2:17 ` Feng Tang
0 siblings, 1 reply; 21+ messages in thread
From: Andrey Konovalov @ 2024-09-09 16:24 UTC (permalink / raw)
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Marco Elver, Shuah Khan, David Gow, Danilo Krummrich, linux-mm,
kasan-dev, linux-kernel
On Mon, Sep 9, 2024 at 3:30 AM Feng Tang <feng.tang@intel.com> wrote:
>
> For a kmalloc object, when both kasan and slub redzone sanity check
> are enabled, they could both manipulate its data space like storing
> kasan free meta data and setting up kmalloc redzone, and may affect
> accuracy of that object's 'orig_size'.
>
> As an accurate 'orig_size' will be needed by some function like
> krealloc() soon, save kasan's free meta data in slub's metadata area
> instead of inside object when 'orig_size' is enabled.
>
> This will make it easier to maintain/understand the code. Size wise,
> when these two options are both enabled, the slub meta data space is
> already huge, and this just slightly increase the overall size.
>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> mm/kasan/generic.c | 5 ++++-
> mm/slab.h | 6 ++++++
> mm/slub.c | 17 -----------------
> 3 files changed, 10 insertions(+), 18 deletions(-)
>
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 6310a180278b..cad376199d47 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -393,8 +393,11 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> * be touched after it was freed, or
> * 2. Object has a constructor, which means it's expected to
> * retain its content until the next allocation.
Nit: ", or" above.
> + * 3. It is from a kmalloc cache which enables the debug option
> + * to store original size.
> */
> - if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor) {
> + if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
> + slub_debug_orig_size(cache)) {
> cache->kasan_info.free_meta_offset = *size;
> *size += sizeof(struct kasan_free_meta);
> goto free_meta_added;
> diff --git a/mm/slab.h b/mm/slab.h
> index 90f95bda4571..7a0e9b34ba2a 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -689,6 +689,12 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
> void __check_heap_object(const void *ptr, unsigned long n,
> const struct slab *slab, bool to_user);
>
> +static inline bool slub_debug_orig_size(struct kmem_cache *s)
> +{
> + return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
> + (s->flags & SLAB_KMALLOC));
> +}
> +
> #ifdef CONFIG_SLUB_DEBUG
> void skip_orig_size_check(struct kmem_cache *s, const void *object);
> #endif
> diff --git a/mm/slub.c b/mm/slub.c
> index 23761533329d..996a72fa6f62 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -230,12 +230,6 @@ static inline bool kmem_cache_debug(struct kmem_cache *s)
> return kmem_cache_debug_flags(s, SLAB_DEBUG_FLAGS);
> }
>
> -static inline bool slub_debug_orig_size(struct kmem_cache *s)
> -{
> - return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
> - (s->flags & SLAB_KMALLOC));
> -}
> -
> void *fixup_red_left(struct kmem_cache *s, void *p)
> {
> if (kmem_cache_debug_flags(s, SLAB_RED_ZONE))
> @@ -760,21 +754,10 @@ static inline void set_orig_size(struct kmem_cache *s,
> void *object, unsigned int orig_size)
> {
> void *p = kasan_reset_tag(object);
> - unsigned int kasan_meta_size;
>
> if (!slub_debug_orig_size(s))
> return;
>
> - /*
> - * KASAN can save its free meta data inside of the object at offset 0.
> - * If this meta data size is larger than 'orig_size', it will overlap
> - * the data redzone in [orig_size+1, object_size]. Thus, we adjust
> - * 'orig_size' to be as at least as big as KASAN's meta data.
> - */
> - kasan_meta_size = kasan_metadata_size(s, true);
> - if (kasan_meta_size > orig_size)
> - orig_size = kasan_meta_size;
> -
> p += get_info_end(s);
> p += sizeof(struct track) * 2;
>
> --
> 2.34.1
>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
* Re: [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled
2024-09-09 1:29 [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
` (4 preceding siblings ...)
2024-09-09 1:29 ` [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing Feng Tang
@ 2024-09-09 17:12 ` Vlastimil Babka
2024-09-10 2:20 ` Feng Tang
5 siblings, 1 reply; 21+ messages in thread
From: Vlastimil Babka @ 2024-09-09 17:12 UTC (permalink / raw)
To: Feng Tang, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel
On 9/9/24 03:29, Feng Tang wrote:
> Danilo Krummrich's patch [1] raised one problem about krealloc() that
> its caller doesn't know what's the actual request size, say the object
> is 64 bytes kmalloc one, but the original caller may only requested 48
> bytes. And when krealloc() shrinks or grows in the same object, or
> allocate a new bigger object, it lacks this 'original size' information
> to do accurate data preserving or zeroing (when __GFP_ZERO is set).
>
> And when some slub debug option is enabled, kmalloc caches do have this
> 'orig_size' feature. As suggested by Vlastimil, utilize it to do more
> accurate data handling, as well as enforce the kmalloc-redzone sanity check.
>
> To make the 'orig_size' accurate, we adjust some kasan/slub meta data
> handling. Also add a slub kunit test case for krealloc().
>
> This patchset has dependency over patches in both -mm tree and -slab
> trees, so it is written based on linux-next tree '20240905' version.
Thanks. Given that the merge window is opening soon, I would take this
into the slab tree after the merge window, when the current -next
becomes 6.12-rc1.
>
> [1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
>
> Thanks,
> Feng
>
> Feng Tang (5):
> mm/kasan: Don't store metadata inside kmalloc object when
> slub_debug_orig_size is on
> mm/slub: Consider kfence case for get_orig_size()
> mm/slub: Improve redzone check and zeroing for krealloc()
> kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit
> case
> mm/slub, kunit: Add testcase for krealloc redzone and zeroing
>
> include/kunit/test.h | 6 ++
> lib/slub_kunit.c | 46 +++++++++++++++
> mm/kasan/generic.c | 5 +-
> mm/kfence/kfence_test.c | 9 +--
> mm/slab.h | 6 ++
> mm/slab_common.c | 84 ---------------------------
> mm/slub.c | 125 ++++++++++++++++++++++++++++++++++------
> 7 files changed, 171 insertions(+), 110 deletions(-)
>
* Re: [PATCH 1/5] mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on
2024-09-09 16:24 ` Andrey Konovalov
@ 2024-09-10 2:17 ` Feng Tang
0 siblings, 0 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-10 2:17 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Marco Elver, Shuah Khan, David Gow, Danilo Krummrich, linux-mm,
kasan-dev, linux-kernel
On Mon, Sep 09, 2024 at 06:24:21PM +0200, Andrey Konovalov wrote:
> On Mon, Sep 9, 2024 at 3:30 AM Feng Tang <feng.tang@intel.com> wrote:
> >
> > For a kmalloc object, when both kasan and slub redzone sanity check
> > are enabled, they could both manipulate its data space like storing
> > kasan free meta data and setting up kmalloc redzone, and may affect
> > accuracy of that object's 'orig_size'.
> >
> > As an accurate 'orig_size' will be needed by some function like
> > krealloc() soon, save kasan's free meta data in slub's metadata area
> > instead of inside object when 'orig_size' is enabled.
> >
> > This will make it easier to maintain/understand the code. Size wise,
> > when these two options are both enabled, the slub meta data space is
> > already huge, and this just slightly increase the overall size.
> >
> > Signed-off-by: Feng Tang <feng.tang@intel.com>
> > ---
> > mm/kasan/generic.c | 5 ++++-
> > mm/slab.h | 6 ++++++
> > mm/slub.c | 17 -----------------
> > 3 files changed, 10 insertions(+), 18 deletions(-)
> >
> > diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> > index 6310a180278b..cad376199d47 100644
> > --- a/mm/kasan/generic.c
> > +++ b/mm/kasan/generic.c
> > @@ -393,8 +393,11 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > * be touched after it was freed, or
> > * 2. Object has a constructor, which means it's expected to
> > * retain its content until the next allocation.
>
> Nit: ", or" above.
Aha, yes, I missed that.
Hi Vlastimil,
Could you help change it when taking the patches, or do you prefer
that I send a new version? Thanks!
>
> > + * 3. It is from a kmalloc cache which enables the debug option
> > + * to store original size.
> > */
> > - if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor) {
> > + if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
> > + slub_debug_orig_size(cache)) {
> > cache->kasan_info.free_meta_offset = *size;
> > *size += sizeof(struct kasan_free_meta);
> > goto free_meta_added;
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 90f95bda4571..7a0e9b34ba2a 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -689,6 +689,12 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
> > void __check_heap_object(const void *ptr, unsigned long n,
> > const struct slab *slab, bool to_user);
> >
> > +static inline bool slub_debug_orig_size(struct kmem_cache *s)
> > +{
> > + return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
> > + (s->flags & SLAB_KMALLOC));
> > +}
> > +
> > #ifdef CONFIG_SLUB_DEBUG
> > void skip_orig_size_check(struct kmem_cache *s, const void *object);
> > #endif
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 23761533329d..996a72fa6f62 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -230,12 +230,6 @@ static inline bool kmem_cache_debug(struct kmem_cache *s)
> > return kmem_cache_debug_flags(s, SLAB_DEBUG_FLAGS);
> > }
> >
> > -static inline bool slub_debug_orig_size(struct kmem_cache *s)
> > -{
> > - return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
> > - (s->flags & SLAB_KMALLOC));
> > -}
> > -
> > void *fixup_red_left(struct kmem_cache *s, void *p)
> > {
> > if (kmem_cache_debug_flags(s, SLAB_RED_ZONE))
> > @@ -760,21 +754,10 @@ static inline void set_orig_size(struct kmem_cache *s,
> > void *object, unsigned int orig_size)
> > {
> > void *p = kasan_reset_tag(object);
> > - unsigned int kasan_meta_size;
> >
> > if (!slub_debug_orig_size(s))
> > return;
> >
> > - /*
> > - * KASAN can save its free meta data inside of the object at offset 0.
> > - * If this meta data size is larger than 'orig_size', it will overlap
> > - * the data redzone in [orig_size+1, object_size]. Thus, we adjust
> > - * 'orig_size' to be as at least as big as KASAN's meta data.
> > - */
> > - kasan_meta_size = kasan_metadata_size(s, true);
> > - if (kasan_meta_size > orig_size)
> > - orig_size = kasan_meta_size;
> > -
> > p += get_info_end(s);
> > p += sizeof(struct track) * 2;
> >
> > --
> > 2.34.1
> >
>
> Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Thank you!
- Feng
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled
2024-09-09 17:12 ` [PATCH 0/5] mm/slub: Improve data handling of krealloc() when orig_size is enabled Vlastimil Babka
@ 2024-09-10 2:20 ` Feng Tang
0 siblings, 0 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-10 2:20 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo, Andrey Konovalov,
Marco Elver, Shuah Khan, David Gow, Danilo Krummrich, linux-mm,
kasan-dev, linux-kernel
On Mon, Sep 09, 2024 at 07:12:31PM +0200, Vlastimil Babka wrote:
> On 9/9/24 03:29, Feng Tang wrote:
> > Danilo Krummrich's patch [1] raised one problem about krealloc() that
> > its caller doesn't know what's the actual request size, say the object
> > is 64 bytes kmalloc one, but the original caller may only requested 48
> > bytes. And when krealloc() shrinks or grows in the same object, or
> > allocate a new bigger object, it lacks this 'original size' information
> > to do accurate data preserving or zeroing (when __GFP_ZERO is set).
> >
> > And when some slub debug option is enabled, kmalloc caches do have this
> > 'orig_size' feature. As suggested by Vlastimil, utilize it to do more
> > accurate data handling, as well as enforce the kmalloc-redzone sanity check.
> >
> > To make the 'orig_size' accurate, we adjust some kasan/slub meta data
> > handling. Also add a slub kunit test case for krealloc().
> >
> > This patchset has dependency over patches in both -mm tree and -slab
> > trees, so it is written based on linux-next tree '20240905' version.
>
> Thanks, given the timing with merge window opening soon, I would take this
> into the slab tree after the merge window, when the current -next becomes
> 6.12-rc1.
Sounds good to me. Thanks for the review!
- Feng
> >
> > [1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
> >
> > Thanks,
> > Feng
> >
> > Feng Tang (5):
> > mm/kasan: Don't store metadata inside kmalloc object when
> > slub_debug_orig_size is on
> > mm/slub: Consider kfence case for get_orig_size()
> > mm/slub: Improve redzone check and zeroing for krealloc()
> > kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit
> > case
> > mm/slub, kunit: Add testcase for krealloc redzone and zeroing
> >
> > include/kunit/test.h | 6 ++
> > lib/slub_kunit.c | 46 +++++++++++++++
> > mm/kasan/generic.c | 5 +-
> > mm/kfence/kfence_test.c | 9 +--
> > mm/slab.h | 6 ++
> > mm/slab_common.c | 84 ---------------------------
> > mm/slub.c | 125 ++++++++++++++++++++++++++++++++++------
> > 7 files changed, 171 insertions(+), 110 deletions(-)
> >
>
* Re: [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc()
2024-09-09 1:29 ` [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc() Feng Tang
@ 2024-09-10 10:06 ` Danilo Krummrich
2024-09-10 13:39 ` Feng Tang
2024-09-10 13:15 ` Vlastimil Babka
1 sibling, 1 reply; 21+ messages in thread
From: Danilo Krummrich @ 2024-09-10 10:06 UTC (permalink / raw)
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow, linux-mm,
kasan-dev, linux-kernel
On Mon, Sep 09, 2024 at 09:29:56AM +0800, Feng Tang wrote:
> For current krealloc(), one problem is its caller doesn't know what's
> the actual request size, say the object is 64 bytes kmalloc one, but
> the original caller may only requested 48 bytes. And when krealloc()
> shrinks or grows in the same object, or allocate a new bigger object,
> it lacks this 'original size' information to do accurate data preserving
> or zeroing (when __GFP_ZERO is set).
>
> And when some slub debug option is enabled, kmalloc caches do have this
> 'orig_size' feature. So utilize it to do more accurate data handling,
> as well as enforce the kmalloc-redzone sanity check.
>
> The krealloc() related code is moved from slab_common.c to slub.c for
> more efficient function calling.
I think it would be good to do this in a separate commit, for a better diff and
history.
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> mm/slab_common.c | 84 -------------------------------------
> mm/slub.c | 106 +++++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 106 insertions(+), 84 deletions(-)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index ad438ba62485..e59942fb7970 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1297,90 +1297,6 @@ module_init(slab_proc_init);
>
> #endif /* CONFIG_SLUB_DEBUG */
>
> -static __always_inline __realloc_size(2) void *
> -__do_krealloc(const void *p, size_t new_size, gfp_t flags)
> -{
> - void *ret;
> - size_t ks;
> -
> - /* Check for double-free before calling ksize. */
> - if (likely(!ZERO_OR_NULL_PTR(p))) {
> - if (!kasan_check_byte(p))
> - return NULL;
> - ks = ksize(p);
> - } else
> - ks = 0;
> -
> - /* If the object still fits, repoison it precisely. */
> - if (ks >= new_size) {
> - /* Zero out spare memory. */
> - if (want_init_on_alloc(flags)) {
> - kasan_disable_current();
> - memset((void *)p + new_size, 0, ks - new_size);
> - kasan_enable_current();
> - }
> -
> - p = kasan_krealloc((void *)p, new_size, flags);
> - return (void *)p;
> - }
> -
> - ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
> - if (ret && p) {
> - /* Disable KASAN checks as the object's redzone is accessed. */
> - kasan_disable_current();
> - memcpy(ret, kasan_reset_tag(p), ks);
> - kasan_enable_current();
> - }
> -
> - return ret;
> -}
> -
> -/**
> - * krealloc - reallocate memory. The contents will remain unchanged.
> - * @p: object to reallocate memory for.
> - * @new_size: how many bytes of memory are required.
> - * @flags: the type of memory to allocate.
> - *
> - * If @p is %NULL, krealloc() behaves exactly like kmalloc(). If @new_size
> - * is 0 and @p is not a %NULL pointer, the object pointed to is freed.
> - *
> - * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
> - * initial memory allocation, every subsequent call to this API for the same
> - * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
> - * __GFP_ZERO is not fully honored by this API.
> - *
> - * This is the case, since krealloc() only knows about the bucket size of an
> - * allocation (but not the exact size it was allocated with) and hence
> - * implements the following semantics for shrinking and growing buffers with
> - * __GFP_ZERO.
> - *
> - * new bucket
> - * 0 size size
> - * |--------|----------------|
> - * | keep | zero |
> - *
> - * In any case, the contents of the object pointed to are preserved up to the
> - * lesser of the new and old sizes.
> - *
> - * Return: pointer to the allocated memory or %NULL in case of error
> - */
> -void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
> -{
> - void *ret;
> -
> - if (unlikely(!new_size)) {
> - kfree(p);
> - return ZERO_SIZE_PTR;
> - }
> -
> - ret = __do_krealloc(p, new_size, flags);
> - if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret))
> - kfree(p);
> -
> - return ret;
> -}
> -EXPORT_SYMBOL(krealloc_noprof);
> -
> /**
> * kfree_sensitive - Clear sensitive information in memory before freeing
> * @p: object to free memory of
> diff --git a/mm/slub.c b/mm/slub.c
> index 4cb3822dba08..d4c938dfb89e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4709,6 +4709,112 @@ void kfree(const void *object)
> }
> EXPORT_SYMBOL(kfree);
>
> +static __always_inline __realloc_size(2) void *
> +__do_krealloc(const void *p, size_t new_size, gfp_t flags)
> +{
> + void *ret;
> + size_t ks;
> + int orig_size = 0;
> + struct kmem_cache *s;
> +
> + /* Check for double-free before calling ksize. */
> + if (likely(!ZERO_OR_NULL_PTR(p))) {
> + if (!kasan_check_byte(p))
> + return NULL;
> +
> + s = virt_to_cache(p);
> + orig_size = get_orig_size(s, (void *)p);
> + ks = s->object_size;
> + } else
> + ks = 0;
> +
> + /* If the object doesn't fit, allocate a bigger one */
> + if (new_size > ks)
> + goto alloc_new;
> +
> + /* Zero out spare memory. */
> + if (want_init_on_alloc(flags)) {
> + kasan_disable_current();
> + if (orig_size < new_size)
> + memset((void *)p + orig_size, 0, new_size - orig_size);
> + else
> + memset((void *)p + new_size, 0, ks - new_size);
> + kasan_enable_current();
> + }
> +
> + if (slub_debug_orig_size(s) && !is_kfence_address(p)) {
> + set_orig_size(s, (void *)p, new_size);
> + if (s->flags & SLAB_RED_ZONE && new_size < ks)
> + memset_no_sanitize_memory((void *)p + new_size,
> + SLUB_RED_ACTIVE, ks - new_size);
> + }
> +
> + p = kasan_krealloc((void *)p, new_size, flags);
> + return (void *)p;
> +
> +alloc_new:
> + ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
> + if (ret && p) {
> + /* Disable KASAN checks as the object's redzone is accessed. */
> + kasan_disable_current();
> + if (orig_size)
> + memcpy(ret, kasan_reset_tag(p), orig_size);
> + kasan_enable_current();
> + }
> +
> + return ret;
> +}
> +
> +/**
> + * krealloc - reallocate memory. The contents will remain unchanged.
> + * @p: object to reallocate memory for.
> + * @new_size: how many bytes of memory are required.
> + * @flags: the type of memory to allocate.
> + *
> + * If @p is %NULL, krealloc() behaves exactly like kmalloc(). If @new_size
> + * is 0 and @p is not a %NULL pointer, the object pointed to is freed.
> + *
> + * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
> + * initial memory allocation, every subsequent call to this API for the same
> + * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
> + * __GFP_ZERO is not fully honored by this API.
> + *
> + * When slub_debug_orig_size() is off, since krealloc() only knows about the
I think you want to remove ' since ' here.
> + * bucket size of an allocation (but not the exact size it was allocated with)
> + * and hence implements the following semantics for shrinking and growing
> + * buffers with __GFP_ZERO.
> + *
> + * new bucket
> + * 0 size size
> + * |--------|----------------|
> + * | keep | zero |
> + *
> + * Otherwize, the original allocation size 'orig_size' could be used to
Typo in 'otherwise'.
> + * precisely clear the requested size, and the new size will also be stored as
> + * the new 'orig_size'.
> + *
> + * In any case, the contents of the object pointed to are preserved up to the
> + * lesser of the new and old sizes.
> + *
> + * Return: pointer to the allocated memory or %NULL in case of error
> + */
> +void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
> +{
> + void *ret;
> +
> + if (unlikely(!new_size)) {
> + kfree(p);
> + return ZERO_SIZE_PTR;
> + }
> +
> + ret = __do_krealloc(p, new_size, flags);
> + if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret))
> + kfree(p);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL(krealloc_noprof);
> +
> struct detached_freelist {
> struct slab *slab;
> void *tail;
> --
> 2.34.1
>
* Re: [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing
2024-09-09 1:29 ` [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing Feng Tang
@ 2024-09-10 10:09 ` Danilo Krummrich
2024-09-10 13:29 ` Vlastimil Babka
1 sibling, 0 replies; 21+ messages in thread
From: Danilo Krummrich @ 2024-09-10 10:09 UTC (permalink / raw)
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow, linux-mm,
kasan-dev, linux-kernel
On Mon, Sep 09, 2024 at 09:29:58AM +0800, Feng Tang wrote:
> Danilo Krummrich raised issue about krealloc+GFP_ZERO [1], and Vlastimil
> suggested to add some test case which can sanity test the kmalloc-redzone
> and zeroing by utilizing the kmalloc's 'orig_size' debug feature.
>
> It covers the grow and shrink case of krealloc() re-using current kmalloc
> object, and the case of re-allocating a new bigger object.
>
> User can add "slub_debug" kernel cmdline parameter to test it.
>
> [1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
Reviewed-by: Danilo Krummrich <dakr@kernel.org>
> ---
> lib/slub_kunit.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 46 insertions(+)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 6e3a1e5a7142..03e0089149ad 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -186,6 +186,51 @@ static void test_leak_destroy(struct kunit *test)
> KUNIT_EXPECT_EQ(test, 1, slab_errors);
> }
>
> +static void test_krealloc_redzone_zeroing(struct kunit *test)
> +{
> + char *p;
> + int i;
> +
> + KUNIT_TEST_REQUIRES(test, __slub_debug_enabled());
> +
> + /* Allocate a 64B kmalloc object */
> + p = kzalloc(48, GFP_KERNEL);
> + if (unlikely(is_kfence_address(p))) {
> + kfree(p);
> + return;
> + }
> + memset(p, 0xff, 48);
> +
> + kasan_disable_current();
> + OPTIMIZER_HIDE_VAR(p);
> +
> + /* Test shrink */
> + p = krealloc(p, 40, GFP_KERNEL | __GFP_ZERO);
> + for (i = 40; i < 64; i++)
> + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
> +
> + /* Test grow within the same 64B kmalloc object */
> + p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO);
> + for (i = 40; i < 56; i++)
> + KUNIT_EXPECT_EQ(test, p[i], 0);
> + for (i = 56; i < 64; i++)
> + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
> +
> + /* Test grow with allocating a bigger 128B object */
> + p = krealloc(p, 112, GFP_KERNEL | __GFP_ZERO);
> + if (unlikely(is_kfence_address(p)))
> + goto exit;
> +
> + for (i = 56; i < 112; i++)
> + KUNIT_EXPECT_EQ(test, p[i], 0);
> + for (i = 112; i < 128; i++)
> + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
> +
> +exit:
> + kfree(p);
> + kasan_enable_current();
> +}
> +
> static int test_init(struct kunit *test)
> {
> slab_errors = 0;
> @@ -196,6 +241,7 @@ static int test_init(struct kunit *test)
> }
>
> static struct kunit_case test_cases[] = {
> + KUNIT_CASE(test_krealloc_redzone_zeroing),
> KUNIT_CASE(test_clobber_zone),
>
> #ifndef CONFIG_KASAN
> --
> 2.34.1
>
* Re: [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc()
2024-09-09 1:29 ` [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc() Feng Tang
2024-09-10 10:06 ` Danilo Krummrich
@ 2024-09-10 13:15 ` Vlastimil Babka
2024-09-10 14:18 ` Feng Tang
1 sibling, 1 reply; 21+ messages in thread
From: Vlastimil Babka @ 2024-09-10 13:15 UTC (permalink / raw)
To: Feng Tang, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel
On 9/9/24 03:29, Feng Tang wrote:
> For current krealloc(), one problem is its caller doesn't know what's
> the actual request size, say the object is 64 bytes kmalloc one, but
It's more accurate to say the caller doesn't pass the old size (it might
actually know it).
> the original caller may only requested 48 bytes. And when krealloc()
> shrinks or grows in the same object, or allocate a new bigger object,
> it lacks this 'original size' information to do accurate data preserving
> or zeroing (when __GFP_ZERO is set).
Let's describe the problem specifically by adding:
Thus with slub debug redzone and object tracking enabled, parts of the
object after krealloc() might contain redzone data instead of zeroes, which
is violating the __GFP_ZERO guarantees.
> And when some slub debug option is enabled, kmalloc caches do have this
> 'orig_size' feature. So utilize it to do more accurate data handling,
> as well as enforce the kmalloc-redzone sanity check.
>
> The krealloc() related code is moved from slab_common.c to slub.c for
> more efficient function calling.
Agreed with Danilo that having the move as separate preparatory patch would
make review easier.
Thanks.
* Re: [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case
2024-09-09 1:29 ` [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case Feng Tang
@ 2024-09-10 13:17 ` Vlastimil Babka
2024-09-10 14:14 ` Feng Tang
0 siblings, 1 reply; 21+ messages in thread
From: Vlastimil Babka @ 2024-09-10 13:17 UTC (permalink / raw)
To: Feng Tang, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel
On 9/9/24 03:29, Feng Tang wrote:
> KFENCE_TEST_REQUIRES macro is convenient for judging if a prerequisite of a
> test case exists. Lift it into kunit/test.h so that all kunit test cases
> can benefit from it.
>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
I think you should have Cc'd kunit and kfence maintainers on this one.
But if that's necessary depends on the review for patch 5...
> ---
> include/kunit/test.h | 6 ++++++
> mm/kfence/kfence_test.c | 9 ++-------
> 2 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/include/kunit/test.h b/include/kunit/test.h
> index 5ac237c949a0..8a8027e10b89 100644
> --- a/include/kunit/test.h
> +++ b/include/kunit/test.h
> @@ -643,6 +643,12 @@ void __printf(2, 3) kunit_log_append(struct string_stream *log, const char *fmt,
> WRITE_ONCE(test->last_seen.line, __LINE__); \
> } while (0)
>
> +#define KUNIT_TEST_REQUIRES(test, cond) do { \
> + if (!(cond)) \
> + kunit_skip((test), "Test requires: " #cond); \
> +} while (0)
> +
> +
> /**
> * KUNIT_SUCCEED() - A no-op expectation. Only exists for code clarity.
> * @test: The test context object.
> diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
> index 00fd17285285..5dbb22c8c44f 100644
> --- a/mm/kfence/kfence_test.c
> +++ b/mm/kfence/kfence_test.c
> @@ -32,11 +32,6 @@
> #define arch_kfence_test_address(addr) (addr)
> #endif
>
> -#define KFENCE_TEST_REQUIRES(test, cond) do { \
> - if (!(cond)) \
> - kunit_skip((test), "Test requires: " #cond); \
> -} while (0)
> -
> /* Report as observed from console. */
> static struct {
> spinlock_t lock;
> @@ -561,7 +556,7 @@ static void test_init_on_free(struct kunit *test)
> };
> int i;
>
> - KFENCE_TEST_REQUIRES(test, IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON));
> + KUNIT_TEST_REQUIRES(test, IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON));
> /* Assume it hasn't been disabled on command line. */
>
> setup_test_cache(test, size, 0, NULL);
> @@ -609,7 +604,7 @@ static void test_gfpzero(struct kunit *test)
> int i;
>
> /* Skip if we think it'd take too long. */
> - KFENCE_TEST_REQUIRES(test, kfence_sample_interval <= 100);
> + KUNIT_TEST_REQUIRES(test, kfence_sample_interval <= 100);
>
> setup_test_cache(test, size, 0, NULL);
> buf1 = test_alloc(test, size, GFP_KERNEL, ALLOCATE_ANY);
* Re: [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing
2024-09-09 1:29 ` [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing Feng Tang
2024-09-10 10:09 ` Danilo Krummrich
@ 2024-09-10 13:29 ` Vlastimil Babka
2024-09-10 14:08 ` Feng Tang
1 sibling, 1 reply; 21+ messages in thread
From: Vlastimil Babka @ 2024-09-10 13:29 UTC (permalink / raw)
To: Feng Tang, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich
Cc: linux-mm, kasan-dev, linux-kernel
On 9/9/24 03:29, Feng Tang wrote:
> Danilo Krummrich raised issue about krealloc+GFP_ZERO [1], and Vlastimil
> suggested to add some test case which can sanity test the kmalloc-redzone
> and zeroing by utilizing the kmalloc's 'orig_size' debug feature.
>
> It covers the grow and shrink case of krealloc() re-using current kmalloc
> object, and the case of re-allocating a new bigger object.
>
> User can add "slub_debug" kernel cmdline parameter to test it.
>
> [1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> lib/slub_kunit.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 46 insertions(+)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 6e3a1e5a7142..03e0089149ad 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -186,6 +186,51 @@ static void test_leak_destroy(struct kunit *test)
> KUNIT_EXPECT_EQ(test, 1, slab_errors);
> }
>
> +static void test_krealloc_redzone_zeroing(struct kunit *test)
> +{
> + char *p;
> + int i;
> +
> + KUNIT_TEST_REQUIRES(test, __slub_debug_enabled());
AFAICS this is insufficient, because the static key may be enabled due to
debugging enabled for different caches than kmalloc, or it might not include
both red zone and object tracking.
But it should be possible to instead create a fake kmalloc cache of size 64
and use __kmalloc_cache_noprof() like test_kmalloc_redzone_access()?
> +
> + /* Allocate a 64B kmalloc object */
> + p = kzalloc(48, GFP_KERNEL);
> + if (unlikely(is_kfence_address(p))) {
> + kfree(p);
> + return;
> + }
> + memset(p, 0xff, 48);
> +
> + kasan_disable_current();
> + OPTIMIZER_HIDE_VAR(p);
> +
> + /* Test shrink */
> + p = krealloc(p, 40, GFP_KERNEL | __GFP_ZERO);
> + for (i = 40; i < 64; i++)
> + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
> +
> + /* Test grow within the same 64B kmalloc object */
> + p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO);
> + for (i = 40; i < 56; i++)
> + KUNIT_EXPECT_EQ(test, p[i], 0);
> + for (i = 56; i < 64; i++)
> + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
> +
> + /* Test grow with allocating a bigger 128B object */
> + p = krealloc(p, 112, GFP_KERNEL | __GFP_ZERO);
The only downside is that krealloc() here might use kmalloc-128 cache that's
not doing red zoning and object tracking....
> + if (unlikely(is_kfence_address(p)))
> + goto exit;
> +
> + for (i = 56; i < 112; i++)
> + KUNIT_EXPECT_EQ(test, p[i], 0);
... but this test is still valid and necessary
> + for (i = 112; i < 128; i++)
> + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
... we might skip this test as the red zoning is not done by __do_krealloc()
anyway in the alloc_new case.
> +
> +exit:
> + kfree(p);
Ideally we'd also validate the fake kmalloc cache we created and expect zero
slab_errors.
Hopefully this approach works and I'm not missing something...
> + kasan_enable_current();
> +}
> +
> static int test_init(struct kunit *test)
> {
> slab_errors = 0;
> @@ -196,6 +241,7 @@ static int test_init(struct kunit *test)
> }
>
> static struct kunit_case test_cases[] = {
> + KUNIT_CASE(test_krealloc_redzone_zeroing),
> KUNIT_CASE(test_clobber_zone),
>
> #ifndef CONFIG_KASAN
* Re: [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc()
2024-09-10 10:06 ` Danilo Krummrich
@ 2024-09-10 13:39 ` Feng Tang
0 siblings, 0 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-10 13:39 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow, linux-mm,
kasan-dev, linux-kernel
Hi Danilo,
Thanks for the review!
On Tue, Sep 10, 2024 at 12:06:05PM +0200, Danilo Krummrich wrote:
> On Mon, Sep 09, 2024 at 09:29:56AM +0800, Feng Tang wrote:
> > For current krealloc(), one problem is its caller doesn't know what's
> > the actual request size, say the object is 64 bytes kmalloc one, but
> > the original caller may only requested 48 bytes. And when krealloc()
> > shrinks or grows in the same object, or allocate a new bigger object,
> > it lacks this 'original size' information to do accurate data preserving
> > or zeroing (when __GFP_ZERO is set).
> >
> > And when some slub debug option is enabled, kmalloc caches do have this
> > 'orig_size' feature. So utilize it to do more accurate data handling,
> > as well as enforce the kmalloc-redzone sanity check.
> >
> > The krealloc() related code is moved from slab_common.c to slub.c for
> > more efficient function calling.
>
> I think it would be good to do this in a separate commit, for a better diff and
> history.
Agreed. will do.
> >
> > Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> > Signed-off-by: Feng Tang <feng.tang@intel.com>
> > ---
> > mm/slab_common.c | 84 -------------------------------------
> > mm/slub.c | 106 +++++++++++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 106 insertions(+), 84 deletions(-)
[...]
> > +
> > +/**
> > + * krealloc - reallocate memory. The contents will remain unchanged.
> > + * @p: object to reallocate memory for.
> > + * @new_size: how many bytes of memory are required.
> > + * @flags: the type of memory to allocate.
> > + *
> > + * If @p is %NULL, krealloc() behaves exactly like kmalloc(). If @new_size
> > + * is 0 and @p is not a %NULL pointer, the object pointed to is freed.
> > + *
> > + * If __GFP_ZERO logic is requested, callers must ensure that, starting with the
> > + * initial memory allocation, every subsequent call to this API for the same
> > + * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
> > + * __GFP_ZERO is not fully honored by this API.
> > + *
> > + * When slub_debug_orig_size() is off, since krealloc() only knows about the
>
> I think you want to remove ' since ' here.
Yes! :)
> > + * bucket size of an allocation (but not the exact size it was allocated with)
> > + * and hence implements the following semantics for shrinking and growing
> > + * buffers with __GFP_ZERO.
> > + *
> > + * new bucket
> > + * 0 size size
> > + * |--------|----------------|
> > + * | keep | zero |
> > + *
> > + * Otherwize, the original allocation size 'orig_size' could be used to
>
> Typo in 'otherwise'.
Will fix.
Thanks,
Feng
* Re: [PATCH 5/5] mm/slub, kunit: Add testcase for krealloc redzone and zeroing
2024-09-10 13:29 ` Vlastimil Babka
@ 2024-09-10 14:08 ` Feng Tang
0 siblings, 0 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-10 14:08 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo, Andrey Konovalov,
Marco Elver, Shuah Khan, David Gow, Danilo Krummrich, linux-mm,
kasan-dev, linux-kernel
On Tue, Sep 10, 2024 at 03:29:21PM +0200, Vlastimil Babka wrote:
> On 9/9/24 03:29, Feng Tang wrote:
> > Danilo Krummrich raised issue about krealloc+GFP_ZERO [1], and Vlastimil
> > suggested to add some test case which can sanity test the kmalloc-redzone
> > and zeroing by utilizing the kmalloc's 'orig_size' debug feature.
> >
> > It covers the grow and shrink case of krealloc() re-using current kmalloc
> > object, and the case of re-allocating a new bigger object.
> >
> > User can add "slub_debug" kernel cmdline parameter to test it.
> >
> > [1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
> >
> > Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> > Signed-off-by: Feng Tang <feng.tang@intel.com>
> > ---
> > lib/slub_kunit.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 46 insertions(+)
> >
> > diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> > index 6e3a1e5a7142..03e0089149ad 100644
> > --- a/lib/slub_kunit.c
> > +++ b/lib/slub_kunit.c
> > @@ -186,6 +186,51 @@ static void test_leak_destroy(struct kunit *test)
> > KUNIT_EXPECT_EQ(test, 1, slab_errors);
> > }
> >
> > +static void test_krealloc_redzone_zeroing(struct kunit *test)
> > +{
> > + char *p;
> > + int i;
> > +
> > + KUNIT_TEST_REQUIRES(test, __slub_debug_enabled());
>
> AFAICS this is insufficient, because the static key may be enabled due to
> debugging enabled for different caches than kmalloc, or it might not include
> both red zone and object tracking.
You are right, that concerned me too. In the first version, I made it
depend on CONFIG_SLUB_DEBUG_ON=y, but most users' and distributions'
kernels won't enable it, and users would have to rebuild the kernel to
test. So I finally switched to this check.
If there were a way to tell whether 'slub_debug' is enabled with the
needed options, that would solve this issue.
>
> But it should be possible to instead create a fake kmalloc cache of size 64
> and use __kmalloc_cache_noprof() like test_kmalloc_redzone_access()?
Yep, I thought about that, and the problem was krealloc() allocating a
new 128B object.
> > +
> > + /* Allocate a 64B kmalloc object */
> > + p = kzalloc(48, GFP_KERNEL);
> > + if (unlikely(is_kfence_address(p))) {
> > + kfree(p);
> > + return;
> > + }
> > + memset(p, 0xff, 48);
> > +
> > + kasan_disable_current();
> > + OPTIMIZER_HIDE_VAR(p);
> > +
> > + /* Test shrink */
> > + p = krealloc(p, 40, GFP_KERNEL | __GFP_ZERO);
> > + for (i = 40; i < 64; i++)
> > + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
> > +
> > + /* Test grow within the same 64B kmalloc object */
> > + p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO);
> > + for (i = 40; i < 56; i++)
> > + KUNIT_EXPECT_EQ(test, p[i], 0);
> > + for (i = 56; i < 64; i++)
> > + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
> > +
> > + /* Test grow with allocating a bigger 128B object */
> > + p = krealloc(p, 112, GFP_KERNEL | __GFP_ZERO);
>
> The only downside is that krealloc() here might use kmalloc-128 cache that's
> not doing red zoning and object tracking....
Yes.
> > + if (unlikely(is_kfence_address(p)))
> > + goto exit;
> > +
> > + for (i = 56; i < 112; i++)
> > + KUNIT_EXPECT_EQ(test, p[i], 0);
>
> ... but this test is still valid and necessary
>
> > + for (i = 112; i < 128; i++)
> > + KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
>
> ... we might skip this test as the red zoning is not done by __do_krealloc()
> anyway in the alloc_new case.
>
> > +
> > +exit:
> > + kfree(p);
>
> Ideally we'd also validate the fake kmalloc cache we created and expect zero
> slab_errors.
>
> Hopefully this approach works and I'm not missing something...
Yep, this should work. The redzone was already verified by the earlier
checks, so it doesn't need to be checked again here. Will run some tests
on this.
Thanks,
Feng
* Re: [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case
2024-09-10 13:17 ` Vlastimil Babka
@ 2024-09-10 14:14 ` Feng Tang
2024-09-10 14:19 ` Alexander Potapenko
2024-09-10 16:04 ` Marco Elver
0 siblings, 2 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-10 14:14 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo, Andrey Konovalov,
Marco Elver, Shuah Khan, David Gow, Danilo Krummrich, linux-mm,
kasan-dev, linux-kernel
On Tue, Sep 10, 2024 at 03:17:10PM +0200, Vlastimil Babka wrote:
> On 9/9/24 03:29, Feng Tang wrote:
> > KFENCE_TEST_REQUIRES macro is convenient for judging if a prerequisite of a
> > test case exists. Lift it into kunit/test.h so that all kunit test cases
> > can benefit from it.
> >
> > Signed-off-by: Feng Tang <feng.tang@intel.com>
>
> I think you should have Cc'd kunit and kfence maintainers on this one.
> But if that's necessary depends on the review for patch 5...
I added Marco Elver, Shuah Khan, David Gow and kasan-dev@googlegroups.com
for kfence and kunit review. That list may be incomplete; I will add more
in the next version. Thanks for the reminder!
- Feng
>
> > ---
> > include/kunit/test.h | 6 ++++++
> > mm/kfence/kfence_test.c | 9 ++-------
> > 2 files changed, 8 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/kunit/test.h b/include/kunit/test.h
> > index 5ac237c949a0..8a8027e10b89 100644
> > --- a/include/kunit/test.h
> > +++ b/include/kunit/test.h
> > @@ -643,6 +643,12 @@ void __printf(2, 3) kunit_log_append(struct string_stream *log, const char *fmt,
> > WRITE_ONCE(test->last_seen.line, __LINE__); \
> > } while (0)
> >
> > +#define KUNIT_TEST_REQUIRES(test, cond) do { \
> > + if (!(cond)) \
> > + kunit_skip((test), "Test requires: " #cond); \
> > +} while (0)
> > +
> > +
> > /**
> > * KUNIT_SUCCEED() - A no-op expectation. Only exists for code clarity.
> > * @test: The test context object.
> > diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
> > index 00fd17285285..5dbb22c8c44f 100644
> > --- a/mm/kfence/kfence_test.c
> > +++ b/mm/kfence/kfence_test.c
> > @@ -32,11 +32,6 @@
> > #define arch_kfence_test_address(addr) (addr)
> > #endif
> >
> > -#define KFENCE_TEST_REQUIRES(test, cond) do { \
> > - if (!(cond)) \
> > - kunit_skip((test), "Test requires: " #cond); \
> > -} while (0)
> > -
> > /* Report as observed from console. */
> > static struct {
> > spinlock_t lock;
> > @@ -561,7 +556,7 @@ static void test_init_on_free(struct kunit *test)
> > };
> > int i;
> >
> > - KFENCE_TEST_REQUIRES(test, IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON));
> > + KUNIT_TEST_REQUIRES(test, IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON));
> > /* Assume it hasn't been disabled on command line. */
> >
> > setup_test_cache(test, size, 0, NULL);
> > @@ -609,7 +604,7 @@ static void test_gfpzero(struct kunit *test)
> > int i;
> >
> > /* Skip if we think it'd take too long. */
> > - KFENCE_TEST_REQUIRES(test, kfence_sample_interval <= 100);
> > + KUNIT_TEST_REQUIRES(test, kfence_sample_interval <= 100);
> >
> > setup_test_cache(test, size, 0, NULL);
> > buf1 = test_alloc(test, size, GFP_KERNEL, ALLOCATE_ANY);
>
* Re: [PATCH 3/5] mm/slub: Improve redzone check and zeroing for krealloc()
2024-09-10 13:15 ` Vlastimil Babka
@ 2024-09-10 14:18 ` Feng Tang
0 siblings, 0 replies; 21+ messages in thread
From: Feng Tang @ 2024-09-10 14:18 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo, Andrey Konovalov,
Marco Elver, Shuah Khan, David Gow, Danilo Krummrich, linux-mm,
kasan-dev, linux-kernel
On Tue, Sep 10, 2024 at 03:15:36PM +0200, Vlastimil Babka wrote:
> On 9/9/24 03:29, Feng Tang wrote:
> > For current krealloc(), one problem is its caller doesn't know what's
> > the actual request size, say the object is 64 bytes kmalloc one, but
>
> It's more accurate to say the caller doesn't pass the old size (it might
> actually know it).
Right!
>
> > the original caller may only requested 48 bytes. And when krealloc()
> > shrinks or grows in the same object, or allocate a new bigger object,
> > it lacks this 'original size' information to do accurate data preserving
> > or zeroing (when __GFP_ZERO is set).
>
> Let's describe the problem specifically by adding:
>
> Thus with slub debug redzone and object tracking enabled, parts of the
> object after krealloc() might contain redzone data instead of zeroes, which
> is violating the __GFP_ZERO guarantees.
Will add it. Thanks for the enhancement!
> > And when some slub debug option is enabled, kmalloc caches do have this
> > 'orig_size' feature. So utilize it to do more accurate data handling,
> > as well as enforce the kmalloc-redzone sanity check.
> >
> > The krealloc() related code is moved from slab_common.c to slub.c for
> > more efficient function calling.
>
> Agreed with Danilo that having the move as separate preparatory patch would
> make review easier.
Yes and will do.
Thanks,
Feng
> Thanks.
>
* Re: [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case
2024-09-10 14:14 ` Feng Tang
@ 2024-09-10 14:19 ` Alexander Potapenko
2024-09-10 16:04 ` Marco Elver
1 sibling, 0 replies; 21+ messages in thread
From: Alexander Potapenko @ 2024-09-10 14:19 UTC (permalink / raw)
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Shuah Khan, David Gow,
Danilo Krummrich, linux-mm, kasan-dev, linux-kernel
On Tue, Sep 10, 2024 at 4:14 PM Feng Tang <feng.tang@intel.com> wrote:
>
> On Tue, Sep 10, 2024 at 03:17:10PM +0200, Vlastimil Babka wrote:
> > On 9/9/24 03:29, Feng Tang wrote:
> > > KFENCE_TEST_REQUIRES macro is convenient for judging if a prerequisite of a
> > > test case exists. Lift it into kunit/test.h so that all kunit test cases
> > > can benefit from it.
> > >
> > > Signed-off-by: Feng Tang <feng.tang@intel.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
(assuming KUNIT maintainers are fine with the change)
* Re: [PATCH 4/5] kunit: kfence: Make KFENCE_TEST_REQUIRES macro available for all kunit case
2024-09-10 14:14 ` Feng Tang
2024-09-10 14:19 ` Alexander Potapenko
@ 2024-09-10 16:04 ` Marco Elver
1 sibling, 0 replies; 21+ messages in thread
From: Marco Elver @ 2024-09-10 16:04 UTC (permalink / raw)
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Shuah Khan, David Gow, Danilo Krummrich,
linux-mm, kasan-dev, linux-kernel
On Tue, 10 Sept 2024 at 16:14, Feng Tang <feng.tang@intel.com> wrote:
>
> On Tue, Sep 10, 2024 at 03:17:10PM +0200, Vlastimil Babka wrote:
> > On 9/9/24 03:29, Feng Tang wrote:
> > > KFENCE_TEST_REQUIRES macro is convenient for judging if a prerequisite of a
> > > test case exists. Lift it into kunit/test.h so that all kunit test cases
> > > can benefit from it.
> > >
> > > Signed-off-by: Feng Tang <feng.tang@intel.com>
> >
> > I think you should have Cc'd kunit and kfence maintainers on this one.
> > But if that's necessary depends on the review for patch 5...
>
> I added Marco Elver, Shuah Khan, David Gow and kasan-dev@googlegroups.com
> for kence and kunit review. That should be incomplete, will add more in
> next verion. Thanks for the reminder!
Reviewed-by: Marco Elver <elver@google.com>
Glad to see you found this macro generally useful. But do await KUnit
maintainer Ack as well.
Thanks,
-- Marco