linux-mm.kvack.org archive mirror
* [PATCH] slub: fix data loss and overflow in krealloc()
@ 2026-04-16 13:25 Marco Elver
  2026-04-17  4:42 ` Harry Yoo (Oracle)
  2026-04-17  9:11 ` Vlastimil Babka (SUSE)
  0 siblings, 2 replies; 5+ messages in thread
From: Marco Elver @ 2026-04-16 13:25 UTC (permalink / raw)
  To: elver, Vlastimil Babka, Andrew Morton
  Cc: Harry Yoo, Hao Li, Christoph Lameter, David Rientjes,
	Roman Gushchin, linux-mm, linux-kernel, kasan-dev, stable

Commit 2cd8231796b5 ("mm/slub: allow to set node and align in
k[v]realloc") introduced the ability to force a reallocation if the
original object does not satisfy new alignment or NUMA node, even when
the object is being shrunk.

This introduced two bugs in the reallocation fallback path:

1. Data loss during NUMA migration: The jump to 'alloc_new' happens
   before 'ks' and 'orig_size' are initialized. As a result, the
   memcpy() in the 'alloc_new' block would copy 0 bytes into the new
   allocation.

2. Buffer overflow during shrinking: When shrinking an object while
   forcing a new alignment, 'new_size' is smaller than the old size.
   However, the memcpy() used the old size ('orig_size ?: ks'), leading
   to an out-of-bounds write.

The same overflow bug exists in the kvrealloc() fallback path, where the
old bucket size ksize(p) is copied into the new buffer without being
bounded by the new size.

A simple reproducer:

	// e.g. add to lkdtm as KREALLOC_SHRINK_OVERFLOW
	while (1) {
		void *p = kmalloc(128, GFP_KERNEL);
		p = krealloc_node_align(p, 64, 256, GFP_KERNEL, NUMA_NO_NODE);
		kfree(p);
	}

demonstrates the issue:

  ==================================================================
  BUG: KFENCE: out-of-bounds write in memcpy_orig+0x68/0x130

  Out-of-bounds write at 0xffff8883ad757038 (120B right of kfence-#47):
   memcpy_orig+0x68/0x130
   krealloc_node_align_noprof+0x1c8/0x340
   lkdtm_KREALLOC_SHRINK_OVERFLOW+0x8c/0xc0 [lkdtm]
   lkdtm_do_action+0x3a/0x60 [lkdtm]
   ...

  kfence-#47: 0xffff8883ad756fc0-0xffff8883ad756fff, size=64, cache=kmalloc-64

  allocated by task 316 on cpu 7 at 97.680481s (0.021813s ago):
   krealloc_node_align_noprof+0x19c/0x340
   lkdtm_KREALLOC_SHRINK_OVERFLOW+0x8c/0xc0 [lkdtm]
   lkdtm_do_action+0x3a/0x60 [lkdtm]
   ...
  ==================================================================

Fix it by moving the old size calculation to the top of __do_krealloc()
and bounding all copy lengths by the new allocation size.

Fixes: 2cd8231796b5 ("mm/slub: allow to set node and align in k[v]realloc")
Cc: <stable@vger.kernel.org>
Reported-by: https://sashiko.dev/#/patchset/20260415143735.2974230-1-elver%40google.com
Signed-off-by: Marco Elver <elver@google.com>
---
 mm/slub.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 92362eeb13e5..161079ac5ba1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -6645,16 +6645,6 @@ __do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags,
 	if (!kasan_check_byte(p))
 		return NULL;
 
-	/*
-	 * If reallocation is not necessary (e. g. the new size is less
-	 * than the current allocated size), the current allocation will be
-	 * preserved unless __GFP_THISNODE is set. In the latter case a new
-	 * allocation on the requested node will be attempted.
-	 */
-	if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
-		     nid != page_to_nid(virt_to_page(p)))
-		goto alloc_new;
-
 	if (is_kfence_address(p)) {
 		ks = orig_size = kfence_ksize(p);
 	} else {
@@ -6673,6 +6663,16 @@ __do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags,
 		}
 	}
 
+	/*
+	 * If reallocation is not necessary (e. g. the new size is less
+	 * than the current allocated size), the current allocation will be
+	 * preserved unless __GFP_THISNODE is set. In the latter case a new
+	 * allocation on the requested node will be attempted.
+	 */
+	if (unlikely(flags & __GFP_THISNODE) && nid != NUMA_NO_NODE &&
+		     nid != page_to_nid(virt_to_page(p)))
+		goto alloc_new;
+
 	/* If the old object doesn't fit, allocate a bigger one */
 	if (new_size > ks)
 		goto alloc_new;
@@ -6707,7 +6707,7 @@ __do_krealloc(const void *p, size_t new_size, unsigned long align, gfp_t flags,
 	if (ret && p) {
 		/* Disable KASAN checks as the object's redzone is accessed. */
 		kasan_disable_current();
-		memcpy(ret, kasan_reset_tag(p), orig_size ?: ks);
+		memcpy(ret, kasan_reset_tag(p), min(new_size, (size_t)(orig_size ?: ks)));
 		kasan_enable_current();
 	}
 
@@ -6941,7 +6941,7 @@ void *kvrealloc_node_align_noprof(const void *p, size_t size, unsigned long alig
 		if (p) {
 			/* We already know that `p` is not a vmalloc address. */
 			kasan_disable_current();
-			memcpy(n, kasan_reset_tag(p), ksize(p));
+			memcpy(n, kasan_reset_tag(p), min(size, ksize(p)));
 			kasan_enable_current();
 
 			kfree(p);
-- 
2.54.0.rc1.513.gad8abe7a5a-goog


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] slub: fix data loss and overflow in krealloc()
  2026-04-16 13:25 [PATCH] slub: fix data loss and overflow in krealloc() Marco Elver
@ 2026-04-17  4:42 ` Harry Yoo (Oracle)
  2026-04-17  9:05   ` Marco Elver
  2026-04-17  9:11 ` Vlastimil Babka (SUSE)
  1 sibling, 1 reply; 5+ messages in thread
From: Harry Yoo (Oracle) @ 2026-04-17  4:42 UTC (permalink / raw)
  To: Marco Elver
  Cc: Vlastimil Babka, Andrew Morton, Hao Li, Christoph Lameter,
	David Rientjes, Roman Gushchin, linux-mm, linux-kernel,
	kasan-dev, stable, Vitaly Wool, Uladzislau Rezki,
	Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett, Alice Ryhl,
	rust-for-linux

[+Cc relevant folks]

On Thu, Apr 16, 2026 at 03:25:07PM +0200, Marco Elver wrote:
> Commit 2cd8231796b5 ("mm/slub: allow to set node and align in
> k[v]realloc") introduced the ability to force a reallocation if the
> original object does not satisfy new alignment or NUMA node, even when
> the object is being shrunk.
> 
> This introduced two bugs in the reallocation fallback path:
> 
> 1. Data loss during NUMA migration: The jump to 'alloc_new' happens
>    before 'ks' and 'orig_size' are initialized. As a result, the
>    memcpy() in the 'alloc_new' block would copy 0 bytes into the new
>    allocation.

Ouch.

> 2. Buffer overflow during shrinking: When shrinking an object while
>    forcing a new alignment, 'new_size' is smaller than the old size.
>    However, the memcpy() used the old size ('orig_size ?: ks'), leading
>    to an out-of-bounds write.

Right, before the commit we didn't reallocate when the new size was smaller.

> The same overflow bug exists in the kvrealloc() fallback path, where the
> old bucket size ksize(p) is copied into the new buffer without being
> bounded by the new size.
> 
> [...]

Looks good to me, but I think we still have a similar issue in
vrealloc_node_align_noprof()? (it does 'goto need_realloc;' on NUMA
mismatch even when the new size is smaller)

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH] slub: fix data loss and overflow in krealloc()
  2026-04-17  4:42 ` Harry Yoo (Oracle)
@ 2026-04-17  9:05   ` Marco Elver
  2026-04-17  9:21     ` Harry Yoo (Oracle)
  0 siblings, 1 reply; 5+ messages in thread
From: Marco Elver @ 2026-04-17  9:05 UTC (permalink / raw)
  To: Harry Yoo (Oracle)
  Cc: Vlastimil Babka, Andrew Morton, Hao Li, Christoph Lameter,
	David Rientjes, Roman Gushchin, linux-mm, linux-kernel,
	kasan-dev, stable, Vitaly Wool, Uladzislau Rezki,
	Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett, Alice Ryhl,
	rust-for-linux

On Fri, 17 Apr 2026 at 06:42, Harry Yoo (Oracle) <harry@kernel.org> wrote:
>
> [+Cc relevant folks]
>
> > [...]
>
> Looks good to me, but I think we still have a similar issue in
> vrealloc_node_align_noprof()? (goto need_realloc; due to NUMA mismatch
> but the new size is smaller)

Good find.
That's a separate patch, though, since it's in the vmalloc subsystem
(it's also not confidence-inspiring that vrealloc_node_align_noprof
has a bunch of TODOs sprinkled all over...).
Since you found that, do you want to claim it?

Also, by the looks of it, slub and vmalloc patches go through different
trees these days per MAINTAINERS.

Thanks,
-- Marco



* Re: [PATCH] slub: fix data loss and overflow in krealloc()
  2026-04-16 13:25 [PATCH] slub: fix data loss and overflow in krealloc() Marco Elver
  2026-04-17  4:42 ` Harry Yoo (Oracle)
@ 2026-04-17  9:11 ` Vlastimil Babka (SUSE)
  1 sibling, 0 replies; 5+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-04-17  9:11 UTC (permalink / raw)
  To: Marco Elver, Andrew Morton
  Cc: Harry Yoo, Hao Li, Christoph Lameter, David Rientjes,
	Roman Gushchin, linux-mm, linux-kernel, kasan-dev, stable

On 4/16/26 15:25, Marco Elver wrote:
> Commit 2cd8231796b5 ("mm/slub: allow to set node and align in
> k[v]realloc") introduced the ability to force a reallocation if the
> original object does not satisfy new alignment or NUMA node, even when
> the object is being shrunk.
> 
> [...]

Ouch, thanks. Added to slab/for-next-fixes.
Indeed, the vrealloc fix would be a separate patch with a different
Fixes: commit, handled in the mm tree.

> [...]




* Re: [PATCH] slub: fix data loss and overflow in krealloc()
  2026-04-17  9:05   ` Marco Elver
@ 2026-04-17  9:21     ` Harry Yoo (Oracle)
  0 siblings, 0 replies; 5+ messages in thread
From: Harry Yoo (Oracle) @ 2026-04-17  9:21 UTC (permalink / raw)
  To: Marco Elver
  Cc: Vlastimil Babka, Andrew Morton, Hao Li, Christoph Lameter,
	David Rientjes, Roman Gushchin, linux-mm, linux-kernel,
	kasan-dev, stable, Vitaly Wool, Uladzislau Rezki,
	Danilo Krummrich, Lorenzo Stoakes, Liam R. Howlett, Alice Ryhl,
	rust-for-linux

On Fri, Apr 17, 2026 at 11:05:09AM +0200, Marco Elver wrote:
> On Fri, 17 Apr 2026 at 06:42, Harry Yoo (Oracle) <harry@kernel.org> wrote:
> > [+Cc relevant folks]
> >
> > > [...]
> >
> > Looks good to me, but I think we still have a similar issue in
> > vrealloc_node_align_noprof()? (goto need_realloc; due to NUMA mismatch
> > but the new size is smaller)
> 
> Good find.
>
> That's a separate patch, though, since it's in the vmalloc subsystem

You're right. For this patch:

Looks good to me,
Reviewed-by: Harry Yoo (Oracle) <harry@kernel.org>

> (it's also not confidence-inspiring that vrealloc_node_align_noprof
> has a bunch of TODOs sprinkled all over...).

;)

Looks like one of the TODOs (shrinking) is going to be tackled, though:
https://lore.kernel.org/linux-mm/20260404-vmalloc-shrink-v10-0-335759165dfa@zohomail.in

> Since you found that, do you want to claim it?

I have a lot on my plate right now (including re-reviewing the typed
kmalloc caches patch), so it'd be nice if somebody else could claim it :)

> Also by the looks of it slub and vmalloc patches go through different
> trees these days per MAINTAINERS.

Right. slab patches go through the slab tree.

-- 
Cheers,
Harry / Hyeonggon



end of thread, other threads:[~2026-04-17  9:21 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-16 13:25 [PATCH] slub: fix data loss and overflow in krealloc() Marco Elver
2026-04-17  4:42 ` Harry Yoo (Oracle)
2026-04-17  9:05   ` Marco Elver
2026-04-17  9:21     ` Harry Yoo (Oracle)
2026-04-17  9:11 ` Vlastimil Babka (SUSE)
