* [PATCH v3 1/3] mm/slub: Consider kfence case for get_orig_size()
2024-10-16 15:41 [PATCH v3 0/3] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
@ 2024-10-16 15:41 ` Feng Tang
2024-11-14 13:38 ` Hyeonggon Yoo
2024-10-16 15:41 ` [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc() Feng Tang
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Feng Tang @ 2024-10-16 15:41 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Alexander Potapenko,
Dmitry Vyukov, Danilo Krummrich, Narasimhan.V
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
When 'orig_size' of a kmalloc object is enabled by a debug option, it
should contain either the actual requested size or the cache's
'object_size'.
But this does not hold if the object is a kfence-allocated one: the
data at the 'orig_size' offset of the metadata could be zero or some
other value. This is not a big issue for the current 'orig_size' usage,
as init_object() and check_object() are skipped for kfence addresses
during alloc/free. But it could cause trouble for other uses in the
future.

Use the existing kfence helper kfence_ksize(), which returns the real
original requested size.
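For readers without the tree at hand, the resulting helper looks roughly
like this (a reconstructed sketch of get_orig_size() in mm/slub.c; the
metadata-offset arithmetic after the two track structs is from memory
and may differ slightly by kernel version):

static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
{
	void *p = kasan_reset_tag(object);

	/* kfence-allocated objects carry no slub 'orig_size' metadata */
	if (is_kfence_address(object))
		return kfence_ksize(object);

	if (!slub_debug_orig_size(s))
		return s->object_size;

	/* 'orig_size' is stored after the free pointer and track data */
	p += get_info_end(s);
	p += sizeof(struct track) * 2;

	return *(unsigned int *)p;
}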
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/slub.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index af9a80071fe0..1d348899f7a3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -768,6 +768,9 @@ static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
{
void *p = kasan_reset_tag(object);
+ if (is_kfence_address(object))
+ return kfence_ksize(object);
+
if (!slub_debug_orig_size(s))
return s->object_size;
--
2.27.0
* Re: [PATCH v3 1/3] mm/slub: Consider kfence case for get_orig_size()
2024-10-16 15:41 ` [PATCH v3 1/3] mm/slub: Consider kfence case for get_orig_size() Feng Tang
@ 2024-11-14 13:38 ` Hyeonggon Yoo
0 siblings, 0 replies; 8+ messages in thread
From: Hyeonggon Yoo @ 2024-11-14 13:38 UTC (permalink / raw)
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Andrey Konovalov,
Marco Elver, Alexander Potapenko, Dmitry Vyukov,
Danilo Krummrich, Narasimhan.V, linux-mm, kasan-dev,
linux-kernel
On Thu, Oct 17, 2024 at 12:42 AM Feng Tang <feng.tang@intel.com> wrote:
>
> When 'orig_size' of a kmalloc object is enabled by a debug option, it
> should contain either the actual requested size or the cache's
> 'object_size'.
>
> But this does not hold if the object is a kfence-allocated one: the
> data at the 'orig_size' offset of the metadata could be zero or some
> other value. This is not a big issue for the current 'orig_size' usage,
> as init_object() and check_object() are skipped for kfence addresses
> during alloc/free. But it could cause trouble for other uses in the
> future.
>
> Use the existing kfence helper kfence_ksize(), which returns the real
> original requested size.
>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> mm/slub.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index af9a80071fe0..1d348899f7a3 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -768,6 +768,9 @@ static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
> {
> void *p = kasan_reset_tag(object);
>
> + if (is_kfence_address(object))
> + return kfence_ksize(object);
> +
> if (!slub_debug_orig_size(s))
> return s->object_size;
>
> --
> 2.27.0
>
* [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc()
2024-10-16 15:41 [PATCH v3 0/3] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
2024-10-16 15:41 ` [PATCH v3 1/3] mm/slub: Consider kfence case for get_orig_size() Feng Tang
@ 2024-10-16 15:41 ` Feng Tang
2024-11-14 13:34 ` Hyeonggon Yoo
2024-10-16 15:41 ` [PATCH v3 3/3] mm/slub, kunit: Add testcase for krealloc redzone and zeroing Feng Tang
2024-10-18 9:51 ` [PATCH v3 0/3] mm/slub: Improve data handling of krealloc() when orig_size is enabled Vlastimil Babka
3 siblings, 1 reply; 8+ messages in thread
From: Feng Tang @ 2024-10-16 15:41 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Alexander Potapenko,
Dmitry Vyukov, Danilo Krummrich, Narasimhan.V
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
One problem with the current krealloc() is that its caller doesn't pass
the old request size; say the object is a 64-byte kmalloc one, but the
caller may have requested only 48 bytes. Then when krealloc() shrinks or
grows within the same object, or allocates a new bigger object, it lacks
this 'original size' information to do accurate data preserving or
zeroing (when __GFP_ZERO is set).

Thus with slub debug redzone and object tracking enabled, parts of the
object after krealloc() might contain redzone data instead of zeroes,
which violates the __GFP_ZERO guarantee. The good thing is that in this
case kmalloc caches do have the 'orig_size' feature, so solve the
problem by utilizing 'orig_size' to do accurate data zeroing and
preserving.
[Thanks to syzbot and V, Narasimhan for discovering kfence and big
kmalloc related issues in earlier patch versions]
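As a concrete illustration of the hazard described above (a caller-side
sketch; the 48/64-byte figures come from the changelog, and the stale
redzone only shows up with slub_debug redzone and tracking enabled):

	char *p = kmalloc(48, GFP_KERNEL);	/* served from the 64-byte bucket */

	/* ... use p[0..47] ... */

	p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO);
	/*
	 * Before this patch: krealloc() only knows the 64-byte bucket size,
	 * so p[48..55] may still hold redzone bytes instead of the zeroes
	 * that __GFP_ZERO promises.
	 * After this patch: zeroing starts at the recorded orig_size (48),
	 * so p[48..55] are cleared as expected.
	 */
	kfree(p);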
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
mm/slub.c | 84 +++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 60 insertions(+), 24 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 1d348899f7a3..958f7af79fad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4718,34 +4718,66 @@ static __always_inline __realloc_size(2) void *
__do_krealloc(const void *p, size_t new_size, gfp_t flags)
{
void *ret;
- size_t ks;
-
- /* Check for double-free before calling ksize. */
- if (likely(!ZERO_OR_NULL_PTR(p))) {
- if (!kasan_check_byte(p))
- return NULL;
- ks = ksize(p);
- } else
- ks = 0;
-
- /* If the object still fits, repoison it precisely. */
- if (ks >= new_size) {
- /* Zero out spare memory. */
- if (want_init_on_alloc(flags)) {
- kasan_disable_current();
+ size_t ks = 0;
+ int orig_size = 0;
+ struct kmem_cache *s = NULL;
+
+ /* Check for double-free. */
+ if (unlikely(ZERO_OR_NULL_PTR(p)))
+ goto alloc_new;
+
+ if (!kasan_check_byte(p))
+ return NULL;
+
+ if (is_kfence_address(p)) {
+ ks = orig_size = kfence_ksize(p);
+ } else {
+ struct folio *folio;
+
+ folio = virt_to_folio(p);
+ if (unlikely(!folio_test_slab(folio))) {
+ /* Big kmalloc object */
+ WARN_ON(folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE);
+ WARN_ON(p != folio_address(folio));
+ ks = folio_size(folio);
+ } else {
+ s = folio_slab(folio)->slab_cache;
+ orig_size = get_orig_size(s, (void *)p);
+ ks = s->object_size;
+ }
+ }
+
+ /* If the old object doesn't fit, allocate a bigger one */
+ if (new_size > ks)
+ goto alloc_new;
+
+ /* Zero out spare memory. */
+ if (want_init_on_alloc(flags)) {
+ kasan_disable_current();
+ if (orig_size && orig_size < new_size)
+ memset((void *)p + orig_size, 0, new_size - orig_size);
+ else
memset((void *)p + new_size, 0, ks - new_size);
- kasan_enable_current();
- }
+ kasan_enable_current();
+ }
- p = kasan_krealloc((void *)p, new_size, flags);
- return (void *)p;
+ /* Setup kmalloc redzone when needed */
+ if (s && slub_debug_orig_size(s)) {
+ set_orig_size(s, (void *)p, new_size);
+ if (s->flags & SLAB_RED_ZONE && new_size < ks)
+ memset_no_sanitize_memory((void *)p + new_size,
+ SLUB_RED_ACTIVE, ks - new_size);
}
+ p = kasan_krealloc((void *)p, new_size, flags);
+ return (void *)p;
+
+alloc_new:
ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
if (ret && p) {
/* Disable KASAN checks as the object's redzone is accessed. */
kasan_disable_current();
- memcpy(ret, kasan_reset_tag(p), ks);
+ memcpy(ret, kasan_reset_tag(p), orig_size ?: ks);
kasan_enable_current();
}
@@ -4766,16 +4798,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
* memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
* __GFP_ZERO is not fully honored by this API.
*
- * This is the case, since krealloc() only knows about the bucket size of an
- * allocation (but not the exact size it was allocated with) and hence
- * implements the following semantics for shrinking and growing buffers with
- * __GFP_ZERO.
+ * When slub_debug_orig_size() is off, krealloc() only knows about the bucket
+ * size of an allocation (but not the exact size it was allocated with) and
+ * hence implements the following semantics for shrinking and growing buffers
+ * with __GFP_ZERO.
*
* new bucket
* 0 size size
* |--------|----------------|
* | keep | zero |
*
+ * Otherwise, the original allocation size 'orig_size' could be used to
+ * precisely clear the requested size, and the new size will also be stored
+ * as the new 'orig_size'.
+ *
* In any case, the contents of the object pointed to are preserved up to the
* lesser of the new and old sizes.
*
--
2.27.0
* Re: [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc()
2024-10-16 15:41 ` [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc() Feng Tang
@ 2024-11-14 13:34 ` Hyeonggon Yoo
2024-11-15 13:29 ` Vlastimil Babka
0 siblings, 1 reply; 8+ messages in thread
From: Hyeonggon Yoo @ 2024-11-14 13:34 UTC (permalink / raw)
To: Feng Tang
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Andrey Konovalov,
Marco Elver, Alexander Potapenko, Dmitry Vyukov,
Danilo Krummrich, Narasimhan.V, linux-mm, kasan-dev,
linux-kernel
On Thu, Oct 17, 2024 at 12:42 AM Feng Tang <feng.tang@intel.com> wrote:
>
> One problem with the current krealloc() is that its caller doesn't pass
> the old request size; say the object is a 64-byte kmalloc one, but the
> caller may have requested only 48 bytes. Then when krealloc() shrinks or
> grows within the same object, or allocates a new bigger object, it lacks
> this 'original size' information to do accurate data preserving or
> zeroing (when __GFP_ZERO is set).
>
> Thus with slub debug redzone and object tracking enabled, parts of the
> object after krealloc() might contain redzone data instead of zeroes,
> which violates the __GFP_ZERO guarantee. The good thing is that in this
> case kmalloc caches do have the 'orig_size' feature, so solve the
> problem by utilizing 'orig_size' to do accurate data zeroing and
> preserving.
>
> [Thanks to syzbot and V, Narasimhan for discovering kfence and big
> kmalloc related issues in earlier patch versions]
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> mm/slub.c | 84 +++++++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 60 insertions(+), 24 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 1d348899f7a3..958f7af79fad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4718,34 +4718,66 @@ static __always_inline __realloc_size(2) void *
> __do_krealloc(const void *p, size_t new_size, gfp_t flags)
> {
> void *ret;
> - size_t ks;
> -
> - /* Check for double-free before calling ksize. */
> - if (likely(!ZERO_OR_NULL_PTR(p))) {
> - if (!kasan_check_byte(p))
> - return NULL;
> - ks = ksize(p);
> - } else
> - ks = 0;
> -
> - /* If the object still fits, repoison it precisely. */
> - if (ks >= new_size) {
> - /* Zero out spare memory. */
> - if (want_init_on_alloc(flags)) {
> - kasan_disable_current();
> + size_t ks = 0;
> + int orig_size = 0;
> + struct kmem_cache *s = NULL;
> +
> + /* Check for double-free. */
> + if (unlikely(ZERO_OR_NULL_PTR(p)))
> + goto alloc_new;
nit: I think kasan_check_byte() is the function that checks for double-free?
Otherwise looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> + if (!kasan_check_byte(p))
> + return NULL;
> +
> + if (is_kfence_address(p)) {
> + ks = orig_size = kfence_ksize(p);
> + } else {
> + struct folio *folio;
> +
> + folio = virt_to_folio(p);
> + if (unlikely(!folio_test_slab(folio))) {
> + /* Big kmalloc object */
> + WARN_ON(folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE);
> + WARN_ON(p != folio_address(folio));
> + ks = folio_size(folio);
> + } else {
> + s = folio_slab(folio)->slab_cache;
> + orig_size = get_orig_size(s, (void *)p);
> + ks = s->object_size;
> + }
> + }
> +
> + /* If the old object doesn't fit, allocate a bigger one */
> + if (new_size > ks)
> + goto alloc_new;
> +
> + /* Zero out spare memory. */
> + if (want_init_on_alloc(flags)) {
> + kasan_disable_current();
> + if (orig_size && orig_size < new_size)
> + memset((void *)p + orig_size, 0, new_size - orig_size);
> + else
> memset((void *)p + new_size, 0, ks - new_size);
> - kasan_enable_current();
> - }
> + kasan_enable_current();
> + }
>
> - p = kasan_krealloc((void *)p, new_size, flags);
> - return (void *)p;
> + /* Setup kmalloc redzone when needed */
> + if (s && slub_debug_orig_size(s)) {
> + set_orig_size(s, (void *)p, new_size);
> + if (s->flags & SLAB_RED_ZONE && new_size < ks)
> + memset_no_sanitize_memory((void *)p + new_size,
> + SLUB_RED_ACTIVE, ks - new_size);
> }
> + p = kasan_krealloc((void *)p, new_size, flags);
> + return (void *)p;
> +
> +alloc_new:
> ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
> if (ret && p) {
> /* Disable KASAN checks as the object's redzone is accessed. */
> kasan_disable_current();
> - memcpy(ret, kasan_reset_tag(p), ks);
> + memcpy(ret, kasan_reset_tag(p), orig_size ?: ks);
> kasan_enable_current();
> }
>
> @@ -4766,16 +4798,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
> * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
> * __GFP_ZERO is not fully honored by this API.
> *
> - * This is the case, since krealloc() only knows about the bucket size of an
> - * allocation (but not the exact size it was allocated with) and hence
> - * implements the following semantics for shrinking and growing buffers with
> - * __GFP_ZERO.
> + * When slub_debug_orig_size() is off, krealloc() only knows about the bucket
> + * size of an allocation (but not the exact size it was allocated with) and
> + * hence implements the following semantics for shrinking and growing buffers
> + * with __GFP_ZERO.
> *
> * new bucket
> * 0 size size
> * |--------|----------------|
> * | keep | zero |
> *
> + * Otherwise, the original allocation size 'orig_size' could be used to
> + * precisely clear the requested size, and the new size will also be stored
> + * as the new 'orig_size'.
> + *
> * In any case, the contents of the object pointed to are preserved up to the
> * lesser of the new and old sizes.
> *
> --
> 2.27.0
>
* Re: [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc()
2024-11-14 13:34 ` Hyeonggon Yoo
@ 2024-11-15 13:29 ` Vlastimil Babka
0 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka @ 2024-11-15 13:29 UTC (permalink / raw)
To: Hyeonggon Yoo, Feng Tang
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Roman Gushchin, Andrey Konovalov, Marco Elver,
Alexander Potapenko, Dmitry Vyukov, Danilo Krummrich,
Narasimhan.V, linux-mm, kasan-dev, linux-kernel
On 11/14/24 14:34, Hyeonggon Yoo wrote:
> On Thu, Oct 17, 2024 at 12:42 AM Feng Tang <feng.tang@intel.com> wrote:
>>
>> One problem with the current krealloc() is that its caller doesn't pass
>> the old request size; say the object is a 64-byte kmalloc one, but the
>> caller may have requested only 48 bytes. Then when krealloc() shrinks or
>> grows within the same object, or allocates a new bigger object, it lacks
>> this 'original size' information to do accurate data preserving or
>> zeroing (when __GFP_ZERO is set).
>>
>> Thus with slub debug redzone and object tracking enabled, parts of the
>> object after krealloc() might contain redzone data instead of zeroes,
>> which violates the __GFP_ZERO guarantee. The good thing is that in this
>> case kmalloc caches do have the 'orig_size' feature, so solve the
>> problem by utilizing 'orig_size' to do accurate data zeroing and
>> preserving.
>>
>> [Thanks to syzbot and V, Narasimhan for discovering kfence and big
>> kmalloc related issues in earlier patch versions]
>>
>> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
>> Signed-off-by: Feng Tang <feng.tang@intel.com>
>> ---
>> mm/slub.c | 84 +++++++++++++++++++++++++++++++++++++++----------------
>> 1 file changed, 60 insertions(+), 24 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 1d348899f7a3..958f7af79fad 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -4718,34 +4718,66 @@ static __always_inline __realloc_size(2) void *
>> __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>> {
>> void *ret;
>> - size_t ks;
>> -
>> - /* Check for double-free before calling ksize. */
>> - if (likely(!ZERO_OR_NULL_PTR(p))) {
>> - if (!kasan_check_byte(p))
>> - return NULL;
>> - ks = ksize(p);
>> - } else
>> - ks = 0;
>> -
>> - /* If the object still fits, repoison it precisely. */
>> - if (ks >= new_size) {
>> - /* Zero out spare memory. */
>> - if (want_init_on_alloc(flags)) {
>> - kasan_disable_current();
>> + size_t ks = 0;
>> + int orig_size = 0;
>> + struct kmem_cache *s = NULL;
>> +
>> + /* Check for double-free. */
>> + if (unlikely(ZERO_OR_NULL_PTR(p)))
>> + goto alloc_new;
>
> nit: I think kasan_check_byte() is the function that checks for double-free?
Hm yeah, moved the comment.
> Otherwise looks good to me,
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Thanks!
* [PATCH v3 3/3] mm/slub, kunit: Add testcase for krealloc redzone and zeroing
2024-10-16 15:41 [PATCH v3 0/3] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
2024-10-16 15:41 ` [PATCH v3 1/3] mm/slub: Consider kfence case for get_orig_size() Feng Tang
2024-10-16 15:41 ` [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc() Feng Tang
@ 2024-10-16 15:41 ` Feng Tang
2024-10-18 9:51 ` [PATCH v3 0/3] mm/slub: Improve data handling of krealloc() when orig_size is enabled Vlastimil Babka
3 siblings, 0 replies; 8+ messages in thread
From: Feng Tang @ 2024-10-16 15:41 UTC (permalink / raw)
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Alexander Potapenko,
Dmitry Vyukov, Danilo Krummrich, Narasimhan.V
Cc: linux-mm, kasan-dev, linux-kernel, Feng Tang
Danilo Krummrich raised an issue about krealloc()+GFP_ZERO [1], and
Vlastimil suggested adding a test case to sanity-check the kmalloc
redzone and zeroing by utilizing kmalloc's 'orig_size' debug feature.

It covers the grow and shrink cases of krealloc() reusing the current
kmalloc object, and the case of reallocating a new bigger object.

[1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
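For orientation, these are the object layouts the test below asserts on
its 64-byte test cache (a sketch derived from the changelog and the test
body, written as a comment):

	/*
	 * after p = krealloc(p, 40, GFP_KERNEL | __GFP_ZERO):
	 *   bytes [0, 40)  : caller data, preserved
	 *   bytes [40, 64) : SLUB_RED_ACTIVE redzone
	 *
	 * after p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO):
	 *   bytes [0, 40)  : caller data, preserved
	 *   bytes [40, 56) : zeroed, honoring __GFP_ZERO
	 *   bytes [56, 64) : SLUB_RED_ACTIVE redzone
	 */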
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
lib/slub_kunit.c | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index 80e39f003344..3cd1cc667988 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -192,6 +192,47 @@ static void test_leak_destroy(struct kunit *test)
KUNIT_EXPECT_EQ(test, 2, slab_errors);
}
+static void test_krealloc_redzone_zeroing(struct kunit *test)
+{
+ u8 *p;
+ int i;
+ struct kmem_cache *s = test_kmem_cache_create("TestSlub_krealloc", 64,
+ SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
+
+ p = __kmalloc_cache_noprof(s, GFP_KERNEL, 48);
+ memset(p, 0xff, 48);
+
+ kasan_disable_current();
+ OPTIMIZER_HIDE_VAR(p);
+
+ /* Test shrink */
+ p = krealloc(p, 40, GFP_KERNEL | __GFP_ZERO);
+ for (i = 40; i < 64; i++)
+ KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
+
+ /* Test grow within the same 64B kmalloc object */
+ p = krealloc(p, 56, GFP_KERNEL | __GFP_ZERO);
+ for (i = 40; i < 56; i++)
+ KUNIT_EXPECT_EQ(test, p[i], 0);
+ for (i = 56; i < 64; i++)
+ KUNIT_EXPECT_EQ(test, p[i], SLUB_RED_ACTIVE);
+
+ validate_slab_cache(s);
+ KUNIT_EXPECT_EQ(test, 0, slab_errors);
+
+ memset(p, 0xff, 56);
+ /* Test grow with allocating a bigger 128B object */
+ p = krealloc(p, 112, GFP_KERNEL | __GFP_ZERO);
+ for (i = 0; i < 56; i++)
+ KUNIT_EXPECT_EQ(test, p[i], 0xff);
+ for (i = 56; i < 112; i++)
+ KUNIT_EXPECT_EQ(test, p[i], 0);
+
+ kfree(p);
+ kasan_enable_current();
+ kmem_cache_destroy(s);
+}
+
static int test_init(struct kunit *test)
{
slab_errors = 0;
@@ -214,6 +255,7 @@ static struct kunit_case test_cases[] = {
KUNIT_CASE(test_kmalloc_redzone_access),
KUNIT_CASE(test_kfree_rcu),
KUNIT_CASE(test_leak_destroy),
+ KUNIT_CASE(test_krealloc_redzone_zeroing),
{}
};
--
2.27.0
* Re: [PATCH v3 0/3] mm/slub: Improve data handling of krealloc() when orig_size is enabled
2024-10-16 15:41 [PATCH v3 0/3] mm/slub: Improve data handling of krealloc() when orig_size is enabled Feng Tang
` (2 preceding siblings ...)
2024-10-16 15:41 ` [PATCH v3 3/3] mm/slub, kunit: Add testcase for krealloc redzone and zeroing Feng Tang
@ 2024-10-18 9:51 ` Vlastimil Babka
3 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka @ 2024-10-18 9:51 UTC (permalink / raw)
To: Feng Tang, Andrew Morton, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo,
Andrey Konovalov, Marco Elver, Alexander Potapenko,
Dmitry Vyukov, Danilo Krummrich, Narasimhan.V
Cc: linux-mm, kasan-dev, linux-kernel
On 10/16/24 17:41, Feng Tang wrote:
> Danilo Krummrich's patch [1] raised one problem with krealloc(): its
> caller doesn't pass the old request size; say the object is a 64-byte
> kmalloc one, but the caller originally requested only 48 bytes. Then
> when krealloc() shrinks or grows within the same object, or allocates
> a new bigger object, it lacks this 'original size' information to do
> accurate data preserving or zeroing (when __GFP_ZERO is set).
>
> Thus with slub debug redzone and object tracking enabled, parts of the
> object after krealloc() might contain redzone data instead of zeroes,
> which violates the __GFP_ZERO guarantee. The good thing is that in this
> case kmalloc caches do have the 'orig_size' feature, which can be used
> to improve the situation here.
>
> To make 'orig_size' accurate, we adjust some kasan/slub metadata
> handling. Also add a slub kunit test case for krealloc().
>
> Many thanks to syzbot and V, Narasimhan for detecting issues in the
> v2 patches.
>
> This is again against the linux-slab tree's 'for-6.13/fixes' branch
Thanks, added there.
Vlastimil
> [1]. https://lore.kernel.org/lkml/20240812223707.32049-1-dakr@kernel.org/
>
> Thanks,
> Feng
>
> Changelog:
>
> Since v2:
> * Fix a NULL pointer issue related to big kmalloc objects which have
> no associated slab (V, Narasimhan, syzbot)
> * Fix an issue in the handling of kfence-allocated objects (syzbot,
> Marco Elver)
> * Drop the 0001 and 0003 patches which have been merged to the slab tree
>
> Since v1:
> * Drop the patch changing generic kunit code from this patchset,
> and will send it separately.
> * Separate the krealloc moving form slab_common.c to slub.c to a
> new patch for better review (Danilo/Vlastimil)
> * Improve commit log and comments (Vlastimil/Danilo)
> * Rework the kunit test case to remove its dependency over
> slub_debug (which is incomplete in v1) (Vlastimil)
> * Add ack and review tag from developers.
>
>
>
> Feng Tang (3):
> mm/slub: Consider kfence case for get_orig_size()
> mm/slub: Improve redzone check and zeroing for krealloc()
> mm/slub, kunit: Add testcase for krealloc redzone and zeroing
>
> lib/slub_kunit.c | 42 +++++++++++++++++++++++
> mm/slub.c | 87 +++++++++++++++++++++++++++++++++++-------------
> 2 files changed, 105 insertions(+), 24 deletions(-)
>