From: Xiubo Li <xiubli@redhat.com>
To: Hillf Danton <hdanton@sina.com>
Cc: tj@kernel.org, hannes@cmpxchg.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: cgroup: deadlock between cpu_hotplug_lock and freezer_mutex
Date: Wed, 15 Feb 2023 18:36:02 +0800
Message-ID: <90147a2b-982e-ae57-9b7c-062bee0fab07@redhat.com>
In-Reply-To: <20230215072501.3764-1-hdanton@sina.com>
Hi Hillf,
On 15/02/2023 15:25, Hillf Danton wrote:
> On Wed, 15 Feb 2023 10:07:23 +0800 Xiubo Li <xiubli@redhat.com> wrote:
>> Hi
>>
>> Recently, when running some test cases for ceph, we hit the following
>> deadlock issue in the cgroup code. Has this been fixed? I checked the
>> latest code and no commit seems to address it.
>>
>> This call trace can also be found at
>> https://tracker.ceph.com/issues/58564#note-4, which is easier to
>> read.
>>
>> ======================================================
>> WARNING: possible circular locking dependency detected
>> 6.1.0-rc5-ceph-gc90f64b588ff #1 Tainted: G S
>> ------------------------------------------------------
>> runc/90769 is trying to acquire lock:
>> ffffffff82664cb0 (cpu_hotplug_lock){++++}-{0:0}, at:
>> static_key_slow_inc+0xe/0x20
>> but task is already holding lock:
>> ffffffff8276e468 (freezer_mutex){+.+.}-{3:3}, at: freezer_write+0x89/0x530
>>
>> which lock already depends on the new lock.
>>
>> the existing dependency chain (in reverse order) is:
>>
>> -> #2 (freezer_mutex){+.+.}-{3:3}:
>> __mutex_lock+0x9c/0xf20
>> freezer_attach+0x2c/0xf0
>> cgroup_migrate_execute+0x3f3/0x4c0
>> cgroup_attach_task+0x22e/0x3e0
>> __cgroup1_procs_write.constprop.12+0xfb/0x140
>> cgroup_file_write+0x91/0x230
>> kernfs_fop_write_iter+0x137/0x1d0
>> vfs_write+0x344/0x4d0
>> ksys_write+0x5c/0xd0
>> do_syscall_64+0x34/0x80
>> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>> -> #1 (cgroup_threadgroup_rwsem){++++}-{0:0}:
>> percpu_down_write+0x45/0x2c0
>> cgroup_procs_write_start+0x84/0x270
>> __cgroup1_procs_write.constprop.12+0x57/0x140
>> cgroup_file_write+0x91/0x230
>> kernfs_fop_write_iter+0x137/0x1d0
>> vfs_write+0x344/0x4d0
>> ksys_write+0x5c/0xd0
>> do_syscall_64+0x34/0x80
>> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>> -> #0 (cpu_hotplug_lock){++++}-{0:0}:
>> __lock_acquire+0x103f/0x1de0
>> lock_acquire+0xd4/0x2f0
>> cpus_read_lock+0x3c/0xd0
>> static_key_slow_inc+0xe/0x20
>> freezer_apply_state+0x98/0xb0
>> freezer_write+0x307/0x530
>> cgroup_file_write+0x91/0x230
>> kernfs_fop_write_iter+0x137/0x1d0
>> vfs_write+0x344/0x4d0
>> ksys_write+0x5c/0xd0
>> do_syscall_64+0x34/0x80
>> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>> other info that might help us debug this:
>>
>> Chain exists of:
>>   cpu_hotplug_lock --> cgroup_threadgroup_rwsem --> freezer_mutex
>>
>> Possible unsafe locking scenario:
>>
>>        CPU0                    CPU1
>>        ----                    ----
>>   lock(freezer_mutex);
>>                                lock(cgroup_threadgroup_rwsem);
>>                                lock(freezer_mutex);
>>   lock(cpu_hotplug_lock);
>>
>>  *** DEADLOCK ***
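
To make the inversion concrete, here is a minimal sketch of the two orderings
(simplified pseudo-paths derived from the traces above, not the actual kernel
code):

	/* Path A: freezer_write() */
	mutex_lock(&freezer_mutex);		/* lock #2 */
	static_branch_inc(&freezer_active);	/* takes cpu_hotplug_lock (#0) internally */

	/* Path B: cgroup attach */
	cpus_read_lock();			/* cpu_hotplug_lock (#0) */
	percpu_down_write(&cgroup_threadgroup_rwsem);	/* #1 */
	mutex_lock(&freezer_mutex);		/* #2, via freezer_attach() */

Path A acquires #2 before #0, while path B establishes the #0 --> #1 --> #2
chain, hence the circular dependency.
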
> Thanks for your report.
>
> Change the locking order if it is impossible to update freezer_active in an atomic manner.
>
> Only for thoughts.
Sure, I will test this.
Thanks
>
> Hillf
> +++ linux-6.1.3/kernel/cgroup/legacy_freezer.c
> @@ -350,7 +350,7 @@ static void freezer_apply_state(struct f
>
> if (freeze) {
> if (!(freezer->state & CGROUP_FREEZING))
> - static_branch_inc(&freezer_active);
> + static_branch_inc_cpuslocked(&freezer_active);
> freezer->state |= state;
> freeze_cgroup(freezer);
> } else {
> @@ -361,7 +361,7 @@ static void freezer_apply_state(struct f
> if (!(freezer->state & CGROUP_FREEZING)) {
> freezer->state &= ~CGROUP_FROZEN;
> if (was_freezing)
> - static_branch_dec(&freezer_active);
> + static_branch_dec_cpuslocked(&freezer_active);
> unfreeze_cgroup(freezer);
> }
> }
> @@ -379,6 +379,7 @@ static void freezer_change_state(struct
> {
> struct cgroup_subsys_state *pos;
>
> + cpus_read_lock();
> /*
> * Update all its descendants in pre-order traversal. Each
> * descendant will try to inherit its parent's FREEZING state as
> @@ -407,6 +408,7 @@ static void freezer_change_state(struct
> }
> rcu_read_unlock();
> mutex_unlock(&freezer_mutex);
> + cpus_read_unlock();
> }
>
> static ssize_t freezer_write(struct kernfs_open_file *of,
>
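
For reference, static_branch_inc_cpuslocked() and
static_branch_dec_cpuslocked() are the jump-label variants for callers that
already hold cpu_hotplug_lock via cpus_read_lock(); the plain
static_branch_inc()/_dec() take cpus_read_lock() internally, which is what
created the inversion. With the patch, the ordering in freezer_change_state()
becomes (a simplified sketch of the resulting lock nesting, not the full
function):

	cpus_read_lock();				/* cpu_hotplug_lock (#0) first */
	mutex_lock(&freezer_mutex);			/* then freezer_mutex (#2) */
	...
	static_branch_inc_cpuslocked(&freezer_active);	/* no nested cpus_read_lock() */
	...
	mutex_unlock(&freezer_mutex);
	cpus_read_unlock();

This matches the existing cpu_hotplug_lock --> cgroup_threadgroup_rwsem -->
freezer_mutex chain, so lockdep should no longer report an inversion.
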
--
Best Regards,
Xiubo Li (李秀波)
Email: xiubli@redhat.com/xiubli@ibm.com
Slack: @Xiubo Li