From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Muchun Song <muchun.song@linux.dev>
Cc: Vlastimil Babka <vbabka@suse.cz>,
chenridong <chenridong@huawei.com>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Ridong Chen <chenridong@huaweicloud.com>,
akpm@linux-foundation.org, david@fromorbit.com,
roman.gushchin@linux.dev, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, wangweiyang2@huawei.com
Subject: Re: [PATCH v2] mm: shrinker: avoid memleak in alloc_shrinker_info
Date: Thu, 17 Oct 2024 10:41:53 +0800 [thread overview]
Message-ID: <94ed7edb-604e-42ff-924e-631980ca3c5e@bytedance.com> (raw)
In-Reply-To: <55B22931-34E1-4DAF-B392-A48EC2A9EE1A@linux.dev>
On 2024/10/16 22:22, Muchun Song wrote:
>
>
>> On Oct 16, 2024, at 20:13, Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 10/14/24 11:20, Muchun Song wrote:
>>>
>>>
>>>>> On Oct 14, 2024, at 17:04, chenridong <chenridong@huawei.com> wrote:
>>>>
>>>>
>>>>
>>>> On 2024/10/14 16:43, Muchun Song wrote:
>>>>>> On Oct 14, 2024, at 16:13, Anshuman Khandual <anshuman.khandual@arm.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 10/14/24 08:53, Chen Ridong wrote:
>>>>>>> From: Chen Ridong <chenridong@huawei.com>
>>>>>>>
>>>>>>> A memory leak was found, as shown below:
>>>>>>>
>>>>>>> unreferenced object 0xffff8881010d2a80 (size 32):
>>>>>>> comm "mkdir", pid 1559, jiffies 4294932666
>>>>>>> hex dump (first 32 bytes):
>>>>>>> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
>>>>>>> 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 @...............
>>>>>>> backtrace (crc 2e7ef6fa):
>>>>>>> [<ffffffff81372754>] __kmalloc_node_noprof+0x394/0x470
>>>>>>> [<ffffffff813024ab>] alloc_shrinker_info+0x7b/0x1a0
>>>>>>> [<ffffffff813b526a>] mem_cgroup_css_online+0x11a/0x3b0
>>>>>>> [<ffffffff81198dd9>] online_css+0x29/0xa0
>>>>>>> [<ffffffff811a243d>] cgroup_apply_control_enable+0x20d/0x360
>>>>>>> [<ffffffff811a5728>] cgroup_mkdir+0x168/0x5f0
>>>>>>> [<ffffffff8148543e>] kernfs_iop_mkdir+0x5e/0x90
>>>>>>> [<ffffffff813dbb24>] vfs_mkdir+0x144/0x220
>>>>>>> [<ffffffff813e1c97>] do_mkdirat+0x87/0x130
>>>>>>> [<ffffffff813e1de9>] __x64_sys_mkdir+0x49/0x70
>>>>>>> [<ffffffff81f8c928>] do_syscall_64+0x68/0x140
>>>>>>> [<ffffffff8200012f>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>>>>>>
>>>>>>> In alloc_shrinker_info(), when shrinker_unit_alloc() returns an
>>>>>>> error, the previously allocated info is not freed. Fix it.
>>>>>>>
>>>>>>> Fixes: 307bececcd12 ("mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred}")
>>>>>>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>>>>>>> ---
>>>>>>> mm/shrinker.c | 1 +
>>>>>>> 1 file changed, 1 insertion(+)
>>>>>>>
>>>>>>> diff --git a/mm/shrinker.c b/mm/shrinker.c
>>>>>>> index dc5d2a6fcfc4..92270413190d 100644
>>>>>>> --- a/mm/shrinker.c
>>>>>>> +++ b/mm/shrinker.c
>>>>>>> @@ -97,6 +97,7 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
>>>>>>>
>>>>>>> err:
>>>>>>> mutex_unlock(&shrinker_mutex);
>>>>>>> + kvfree(info);
>>>>>>> free_shrinker_info(memcg);
>>>>>>> return -ENOMEM;
>>>>>>> }
>>>>>>
>>>>>> There are two scenarios in which "goto err" gets called:
>>>>>>
>>>>>> - When the shrinker_info allocation fails, no kvfree() is required,
>>>>>>   but after this change kvfree() would be called even though that
>>>>>>   allocation had failed, which does not sound right
>>>>> Yes. In this case, @info is NULL and kvfree() can handle NULL.
>>>>> It looks strange, but the final behaviour is correct.
>>>>>>
>>>>>> - When shrinker_unit_alloc() fails, kvfree() is actually required
>>>>>>
>>>>>> I guess kvfree() should be called right after shrinker_unit_alloc()
>>>>>> fails, before jumping to the err label.
>>>>> We could do it like this, which avoids the ambiguity (in case someone
>>>>> forgets that kvfree() can handle NULL). Something like:
>>>>> --- a/mm/shrinker.c
>>>>> +++ b/mm/shrinker.c
>>>>> @@ -88,13 +88,14 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
>>>>> goto err;
>>>>> info->map_nr_max = shrinker_nr_max;
>>>>> if (shrinker_unit_alloc(info, NULL, nid))
>>>>> - goto err;
>>>>> + goto free;
>>>>> rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
>>>>> }
>>>>> mutex_unlock(&shrinker_mutex);
>>>>> return ret;
>>>>> -
>>>>> +free:
>>>>> + kvfree(info);
>>>>> err:
>>>>> mutex_unlock(&shrinker_mutex);
>>>>> free_shrinker_info(memcg);
>>>>> Thanks.
>>>>>>
>>>>>> But just curious: shouldn't both kvzalloc_node() and kvfree() be
>>>>>> moved outside the mutex-locked section to avoid possible lockdep issues?
>>>> How about:
>>>>
>>>> diff --git a/mm/shrinker.c b/mm/shrinker.c
>>>> index dc5d2a6fcfc4..7baee7f00497 100644
>>>> --- a/mm/shrinker.c
>>>> +++ b/mm/shrinker.c
>>>> @@ -87,9 +87,9 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
>>>> if (!info)
>>>> goto err;
>>>> info->map_nr_max = shrinker_nr_max;
>>>> + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
>>>> if (shrinker_unit_alloc(info, NULL, nid))
>>>> goto err;
>>>> - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
>>>> }
>>>> mutex_unlock(&shrinker_mutex);
>>>
>>> No. We should make sure that @info is fully initialized before others
>>> can see it. That's why rcu_assign_pointer() is used here.
>>
>> If the info is immediately visible, is the failure cleanup
>> free_shrinker_info() safe? It uses kvfree(info) and not kvfree_rcu(), and
>> shrinker_unit_free() is also doing kfree().
>
> Qi told me yesterday that @info will not be visible immediately,
> so the non-RCU-based kvfree() is safe. Even if this fix could
Yes, alloc_shrinker_info() is only called from mem_cgroup_css_online(). At
that point the memcg is not online yet, so it is not visible to
shrink_slab(). free_shrinker_info() is also called from
mem_cgroup_css_free(), where the memcg has already been taken offline.
shrinker_unit_free() is also called from expand_one_shrinker_info(), but
there the shrinker_info 'new' is likewise not yet visible. So the
non-RCU-based kvfree() is safe.
> work properly, it feels a bit strange to me to use
> rcu_assign_pointer() to publish @info before it is fully initialized.
Agree.
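
For reference, with your free: label applied, the function would look
roughly like this (a sketch pieced together from the hunks quoted above;
the declarations and the array_size calculation at the top, including the
shrinker_unit_size() helper name, are assumptions rather than copies from
the tree, only the error-handling structure is taken from the diffs):

/* Sketch of alloc_shrinker_info() with the free: label applied. */
int alloc_shrinker_info(struct mem_cgroup *memcg)
{
	struct shrinker_info *info;
	int nid, ret = 0;
	int array_size;

	mutex_lock(&shrinker_mutex);
	array_size = shrinker_unit_size(shrinker_nr_max);	/* assumed helper */
	for_each_node(nid) {
		info = kvzalloc_node(sizeof(*info) + array_size, GFP_KERNEL, nid);
		if (!info)
			goto err;	/* nothing was allocated for this node */
		info->map_nr_max = shrinker_nr_max;
		if (shrinker_unit_alloc(info, NULL, nid))
			goto free;	/* @info allocated but never published */
		/* publish only a fully initialized @info */
		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
	}
	mutex_unlock(&shrinker_mutex);
	return ret;

free:
	kvfree(info);	/* plugs the leak reported by kmemleak */
err:
	mutex_unlock(&shrinker_mutex);
	/* the memcg is not online yet, so non-RCU freeing is fine here */
	free_shrinker_info(memcg);
	return -ENOMEM;
}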
>
> Muchun,
> Thanks.
>
>>
>>>>
>>>> I think this is concise.
>>>>
>>>> Best regards,
>>>> Ridong
>>>
>>>
>>