From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Vlastimil Babka <vbabka@suse.cz>,
42.hyeyoo@gmail.com, akpm@linux-foundation.org,
roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
rientjes@google.com, penberg@kernel.org, cl@linux.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Zhao Gongyi <zhaogongyi@bytedance.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Thomas Gleixner <tglx@linutronix.de>, RCU <rcu@vger.kernel.org>,
"Paul E . McKenney" <paulmck@kernel.org>
Subject: Re: [PATCH] mm: slub: annotate kmem_cache_node->list_lock as raw_spinlock
Date: Tue, 11 Apr 2023 22:08:01 +0800
Message-ID: <ccaf5e8e-3457-a2cf-b6eb-794cbf1b46f5@bytedance.com>
In-Reply-To: <c6ea3b17-a89c-6f66-5c86-967f1da601b4@suse.cz>
On 2023/4/11 21:40, Vlastimil Babka wrote:
> On 4/11/23 15:08, Qi Zheng wrote:
>> The list_lock can be taken inside a raw_spinlock critical
>> section, and then lockdep will complain about an invalid
>> wait context like below:
>>
>> =============================
>> [ BUG: Invalid wait context ]
>> 6.3.0-rc6-next-20230411 #7 Not tainted
>> -----------------------------
>> swapper/0/1 is trying to lock:
>> ffff888100055418 (&n->list_lock){....}-{3:3}, at: ___slab_alloc+0x73d/0x1330
>> other info that might help us debug this:
>> context-{5:5}
>> 2 locks held by swapper/0/1:
>> #0: ffffffff824e8160 (rcu_tasks.cbs_gbl_lock){....}-{2:2}, at: cblist_init_generic+0x22/0x2d0
>> #1: ffff888136bede50 (&ACCESS_PRIVATE(rtpcp, lock)){....}-{2:2}, at: cblist_init_generic+0x232/0x2d0
>> stack backtrace:
>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.3.0-rc6-next-20230411 #7
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
>> Call Trace:
>> <TASK>
>> dump_stack_lvl+0x77/0xc0
>> __lock_acquire+0xa65/0x2950
>> ? arch_stack_walk+0x65/0xf0
>> ? arch_stack_walk+0x65/0xf0
>> ? unwind_next_frame+0x602/0x8d0
>> lock_acquire+0xe0/0x300
>> ? ___slab_alloc+0x73d/0x1330
>> ? find_usage_forwards+0x39/0x50
>> ? check_irq_usage+0x162/0xa70
>> ? __bfs+0x10c/0x2c0
>> _raw_spin_lock_irqsave+0x4f/0x90
>> ? ___slab_alloc+0x73d/0x1330
>> ___slab_alloc+0x73d/0x1330
>> ? fill_pool+0x16b/0x2a0
>> ? look_up_lock_class+0x5d/0x160
>> ? register_lock_class+0x48/0x500
>> ? __lock_acquire+0xabc/0x2950
>> ? fill_pool+0x16b/0x2a0
>> kmem_cache_alloc+0x358/0x3b0
>> ? __lock_acquire+0xabc/0x2950
>> fill_pool+0x16b/0x2a0
>> ? __debug_object_init+0x292/0x560
>> ? lock_acquire+0xe0/0x300
>> ? cblist_init_generic+0x232/0x2d0
>> __debug_object_init+0x2c/0x560
>> cblist_init_generic+0x147/0x2d0
>> rcu_init_tasks_generic+0x15/0x190
>> kernel_init_freeable+0x6e/0x3e0
>> ? rest_init+0x1e0/0x1e0
>> kernel_init+0x1b/0x1d0
>> ? rest_init+0x1e0/0x1e0
>> ret_from_fork+0x1f/0x30
>> </TASK>
>>
>> fill_pool() can only be called on a !PREEMPT_RT kernel, or
>> from preemptible context on a PREEMPT_RT kernel, so the above
>> warning is not a real issue. Still, it's better to annotate
>> kmem_cache_node->list_lock as raw_spinlock to get rid of such
>> issues.
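
(To spell out the nesting lockdep is complaining about, the chain
in the splat above is roughly:

  cblist_init_generic()
    raw_spin_lock_irqsave(&ACCESS_PRIVATE(rtpcp, lock), ...)  /* raw_spinlock held */
    __debug_object_init()
      fill_pool()
        kmem_cache_alloc(..., GFP_ATOMIC)
          ___slab_alloc()
            spin_lock_irqsave(&n->list_lock, ...)  /* spinlock_t under a raw_spinlock */

i.e. a spinlock_t is acquired while a raw_spinlock is held, which
lockdep reports as an invalid wait context.)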
>
> + CC some RT and RCU people
Thanks.
>
> AFAIK raw_spinlock is not just an annotation, but on RT it changes the
> implementation from preemptible mutex to actual spin lock, so it would be
Yeah.
> rather unfortunate to do that for a spurious warning. Can it be somehow
> fixed in a better way?
It's indeed unfortunate that the warning in the commit message is
only spurious. But functions like kmem_cache_alloc(GFP_ATOMIC) may
also be called inside a raw_spinlock critical section or in hardirq
context, and that is a real problem on a PREEMPT_RT kernel, where
spinlock_t is a sleeping lock. So I still think it is reasonable to
convert kmem_cache_node->list_lock to the raw_spinlock type.
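
The pattern I'm worried about is e.g. (only a sketch; some_raw_lock
and cachep are made-up names, not a specific in-tree caller):

  raw_spin_lock_irqsave(&some_raw_lock, flags);
  /*
   * The SLUB slow path may take n->list_lock here. As a spinlock_t
   * it is an rt_mutex-based sleeping lock on PREEMPT_RT, and
   * sleeping inside a raw_spinlock critical section (or a hardirq
   * handler) is not allowed.
   */
  obj = kmem_cache_alloc(cachep, GFP_ATOMIC);
  raw_spin_unlock_irqrestore(&some_raw_lock, flags);

With list_lock as a raw_spinlock_t, this allocation path would stay
valid in such contexts.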
In addition, there are many fix patches for this kind of warning in
the git log, so I also think there should be a more general and
better solution. :)
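
For reference, the conversion itself would essentially be (a sketch,
not the full patch):

  struct kmem_cache_node {
  -	spinlock_t list_lock;
  +	raw_spinlock_t list_lock;
  	...
  };

plus switching the corresponding lock/unlock calls in the SLUB slow
paths, e.g.:

  -	spin_lock_irqsave(&n->list_lock, flags);
  +	raw_spin_lock_irqsave(&n->list_lock, flags);
  ...
  -	spin_unlock_irqrestore(&n->list_lock, flags);
  +	raw_spin_unlock_irqrestore(&n->list_lock, flags);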
--
Thanks,
Qi