From: Vlastimil Babka <vbabka@suse.cz>
To: Harry Yoo <harry.yoo@oracle.com>, Tytus Rogalewski <tytanick@gmail.com>
Cc: Liam.Howlett@oracle.com, aliceryhl@google.com,
andrewjballance@gmail.com, maple-tree@lists.infradead.org,
linux-mm@kvack.org
Subject: Re: Memory leak in 6.18
Date: Mon, 10 Nov 2025 23:23:30 +0100
Message-ID: <15905893-cd05-479d-8a96-9b9857e0cdc6@suse.cz>
In-Reply-To: <025d73b5-8947-43b3-a85e-c112667c030f@suse.cz>
On 11/10/25 17:47, Vlastimil Babka wrote:
> On 11/10/25 07:16, Harry Yoo wrote:
>> On Mon, Nov 10, 2025 at 03:04:02PM +0900, Harry Yoo wrote:
>>> On Sun, Nov 09, 2025 at 11:36:26PM +0100, Tytus Rogalewski wrote:
>>> > Hi guys,
>>> >
>>> > I've been using the 6.18 kernel and noticed what looks like a memory leak.
>>> > Currently maple_node occupies 86 GB even though the server is not doing much.
>>> > I do not see this issue on the 6.17 kernel at all.
>>
>> Cc'ing linux-mm@kvack.org properly as I modified the address by mistake.
>>
>>> Hi Tytus, thanks for the report!
>>>
>>> Could you please boot your machine with kernel boot
>>> parameter slab_debug=U [1] and run
>>>
>>> $ cat /sys/kernel/debug/slab/maple_node/alloc_traces
>>>
>>> and
>>>
>>> $ cat /sys/kernel/debug/slab/maple_node/free_traces
>
> Agreed. In addition to the above, it would also help to enable
> CONFIG_SLUB_STATS and provide the output of:
>
> grep . /sys/kernel/slab/maple_node/*
>
> Thanks.
Please also include the output of:
numactl -H
>>> ?
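For anyone following along, a minimal sketch of the requested debugging session, assuming a GRUB-based boot setup (adjust the boot-parameter step for your bootloader; the sysfs paths are the ones named above):

```shell
# Add slab_debug=U to the kernel command line, e.g. by editing
# GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate and reboot:
sudo update-grub && sudo reboot

# After reboot, these debugfs files list the call stacks that allocated
# and freed maple_node objects, with per-stack counts; a stack whose
# allocation count keeps growing without matching frees is the suspect:
sudo cat /sys/kernel/debug/slab/maple_node/alloc_traces
sudo cat /sys/kernel/debug/slab/maple_node/free_traces
```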
>>> > Total 1000 GB memory
>>> > ASRockRack GENOA2D24G-2L
>>> > 2x AMD EPYC 9654 96-Core Processor
>>> > Running Proxmox 9
>>> >
>>> > Active / Total Objects (% used) : 472110239 / 472257124 (100.0%)
>>> > Active / Total Slabs (% used) : 7385489 / 7385489 (100.0%)
>>> > Active / Total Caches (% used) : 164 / 231 (71.0%)
>>> > Active / Total Size (% used) : 95486861.00K / 95528053.72K (100.0%)
>>> > Minimum / Average / Maximum Object : 0.01K / 0.20K / 8.06K
>>> >
>>> > OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
>>> > 345907216 345897683 99% 0.25K 5404801 64 86476816K maple_node
>>> > 120873408 120841337 99% 0.06K 1888647 64 7554588K dmaengine-unmap-2
>>> > 224256 223324 99% 0.01K 438 512 1752K kmalloc-8
>>> > 224040 224040 100% 0.13K 3734 60 29872K kernfs_node_cache
>>> > 196608 196608 100% 0.01K 384 512 1536K kmalloc-cg-8
>>> > 196160 166455 84% 0.50K 3065 64 98080K kmalloc-512
>>>
>>> Not sure if this is because of sheaves or maple tree changes.
>>> Let's see what's in the alloc & free traces.
>>>
>>> Alternatively, it would be great if you could build the kernel,
>>> perform a git bisection [2], and tell us the first bad commit.
>>>
>>> Thanks!
>>>
>>> [1] https://docs.kernel.org/next/admin-guide/mm/slab.html
>>> [2] https://git-scm.com/docs/git-bisect
>>
>
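For reference, a typical bisection between the two releases could look like the sketch below (in a clone of the mainline kernel tree; build configuration and install/boot steps depend on your distribution):

```shell
# v6.17 is known good, v6.18 is known bad.
git bisect start
git bisect bad v6.18
git bisect good v6.17

# For each commit git checks out: build, install, and boot it, then
# watch maple_node in slabtop. Report the result with one of:
#   git bisect good   # memory usage stays normal
#   git bisect bad    # the leak reappears
# Repeat until git prints "first bad commit", then clean up:
git bisect reset
```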