We won't make it until next week. Maybe you guys can compile the newest
rc5 kernel with that patch? We are using https://prebuiltkernels.com/
and did not compile 6.18 ourselves. We can do that next week. This week
is full of emergencies lol

If you can provide me with two debs like the prebuilt kernels, I could
deploy them and leave them running for testing for 1-2 days.

--

tel. 790 202 300

*Tytus Rogalewski*

Dolina Krzemowa 6A

83-010 Jagatowo

NIP: 9570976234


On Tue, 11 Nov 2025 at 19:29, Harry Yoo wrote:
> On Tue, Nov 11, 2025 at 05:48:35PM +0100, Tytus Rogalewski wrote:
> > Do you guys still need that debug then?
> > I think this is happening only when a qemu VM is running.
> >
> > I can get results within 1-2 days.
>
> Hi Tytus!
>
> Really appreciate you reporting the bug and testing it.
>
> Now that I know what went wrong, I realize that the `slab_debug=U`
> parameter will hide the bug, since we disable the "sheaves" feature
> for debug caches.
>
> Instead of testing with the `slab_debug=U` parameter, could you please
> apply this patch on top of Linux v6.18-rc5, build & install it,
> and verify that the memory leak is indeed resolved on your machine?
>
> > --
> >
> > tel. 790 202 300
> >
> > *Tytus Rogalewski*
> >
> > Dolina Krzemowa 6A
> >
> > 83-010 Jagatowo
> >
> > NIP: 9570976234
> >
> >
> > On Tue, 11 Nov 2025 at 16:37, Liam R. Howlett wrote:
> >
> > > * Harry Yoo [251111 07:55]:
> > > > The commit 989b09b73978 ("slab: skip percpu sheaves for remote object
> > > > freeing") introduced the remote_objects array in free_to_pcs_bulk() to
> > > > skip sheaves when objects from a remote node are freed.
> > > >
> > > > However, the array is flushed only when:
> > > > 1) the array becomes full (++remote_nr >= PCS_BATCH_MAX), or
> > > > 2) slab_free_hook() returns false and size becomes zero.
> > > >
> > > > When neither of the conditions is met, objects in the array are leaked.
> > > > This resulted in a memory leak [1], where 82 GiB of memory was allocated
> > > > for the maple_node cache.
> > > >
> > > > Flush the array after successfully freeing objects to sheaves
> > > > in the do_free: path.
> > > >
> > > > While at it, move the snippet "if (!size) goto flush_remote;" outside
> > > > the while loop for readability. Let's say all objects in the array are
> > > > from a remote node: then we acquire s->cpu_sheaves->lock and try to
> > > > free an object even when size is zero. This doesn't appear to be
> > > > harmful, but isn't really readable.
> > > >
> > > > Reported-by: Tytus Rogalewski
> > > > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220765
> > > > Closes: https://lore.kernel.org/linux-mm/20251107094809.12e9d705b7bf4815783eb184@linux-foundation.org
> > > > Closes: https://lore.kernel.org/all/aRGDTwbt2EIz2CYn@hyeyoo
> > > > Fixes: 989b09b73978 ("slab: skip percpu sheaves for remote object freeing")
> > > > Signed-off-by: Harry Yoo
> > >
> > > Thanks Harry.
> > >
> > > Acked-by: Liam R. Howlett
> > >
> > > > ---
> > > >  mm/slub.c | 8 ++++++--
> > > >  1 file changed, 6 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/mm/slub.c b/mm/slub.c
> > > > index f1a5373eee7b..a787687a0d59 100644
> > > > --- a/mm/slub.c
> > > > +++ b/mm/slub.c
> > > > @@ -6332,8 +6332,6 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
> > > >
> > > >  		if (unlikely(!slab_free_hook(s, p[i], init, false))) {
> > > >  			p[i] = p[--size];
> > > > -			if (!size)
> > > > -				goto flush_remote;
> > > >  			continue;
> > > >  		}
> > > >
> > > > @@ -6348,6 +6346,9 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
> > > >  		i++;
> > > >  	}
> > > >
> > > > +	if (!size)
> > > > +		goto flush_remote;
> > > > +
> > > >  next_batch:
> > > >  	if (!local_trylock(&s->cpu_sheaves->lock))
> > > >  		goto fallback;
> > > > @@ -6402,6 +6403,9 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
> > > >  		goto next_batch;
> > > >  	}
> > > >
> > > > +	if (remote_nr)
> > > > +		goto flush_remote;
> > > > +
> > > >  	return;
> > > >
> > > >  no_empty:
> > > > --
> > > > 2.43.0
> > >
>
> --
> Cheers,
> Harry / Hyeonggon
>
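
For readers following the thread: the bug above is a batch-and-flush
pattern gone wrong. Deferred (remote) objects are parked in a fixed-size
array that, before the fix, was flushed only when the array filled up or
when size dropped to zero, so a partially filled batch was silently
dropped on the normal return path. Below is a minimal user-space C
sketch of the same shape; all names in it (batch_free, flush_batch,
deferred, BATCH_MAX, is_remote) are hypothetical illustrations, not the
kernel's actual free_to_pcs_bulk() code.

/*
 * Hypothetical user-space sketch of the batch-and-flush leak pattern.
 * Deferred objects go into a fixed-size array that the buggy version
 * flushed only when full, so a partially filled batch leaked.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define BATCH_MAX 4

static void *deferred[BATCH_MAX];
static size_t deferred_nr;

/* Stand-in for the slow path that really releases deferred objects. */
static void flush_batch(void)
{
	while (deferred_nr)
		free(deferred[--deferred_nr]);
}

/* Stand-in for "does this object belong to a remote NUMA node?". */
static bool is_remote(size_t i)
{
	return (i & 1) != 0;
}

static void batch_free(void **p, size_t size)
{
	for (size_t i = 0; i < size; i++) {
		if (is_remote(i)) {
			deferred[deferred_nr++] = p[i];
			/* The buggy version flushed *only* here: */
			if (deferred_nr >= BATCH_MAX)
				flush_batch();
			continue;
		}
		free(p[i]);	/* fast path for "local" objects */
	}
	/*
	 * The fix: flush any leftovers before returning. Without this
	 * call, every partially filled batch is leaked.
	 */
	flush_batch();
}

int main(void)
{
	void *objs[3];

	for (size_t i = 0; i < 3; i++)
		objs[i] = malloc(32);

	batch_free(objs, 3);	/* leaks objs[1] without the final flush */
	return 0;
}

Removing the final flush_batch() call and running this under a leak
checker such as valgrind should report the leaked object: the same
failure mode, in miniature, as the 82 GiB maple_node leak the patch
fixes.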