From: Yu Kuai <yukuai1@huaweicloud.com>
To: tj@kernel.org, ming.lei@redhat.com, nilay@linux.ibm.com,
hch@lst.de, josef@toxicpanda.com, axboe@kernel.dk,
akpm@linux-foundation.org, vgoyal@redhat.com
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
yukuai3@huawei.com, yukuai1@huaweicloud.com, yi.zhang@huawei.com,
yangerkun@huawei.com, johnny.chenyi@huawei.com
Subject: [PATCH 04/10] blk-cgroup: don't nest queue_lock under blkcg->lock in blkcg_destroy_blkgs()
Date: Thu, 25 Sep 2025 16:15:19 +0800
Message-ID: <20250925081525.700639-5-yukuai1@huaweicloud.com>
In-Reply-To: <20250925081525.700639-1-yukuai1@huaweicloud.com>
From: Yu Kuai <yukuai3@huawei.com>
The correct lock order is q->queue_lock before blkcg->lock. To avoid
deadlock in blkcg_destroy_blkgs(), trylock is currently used for
q->queue_lock while blkcg->lock is already held, which is hacky.
Hence refactor blkcg_destroy_blkgs(): hold blkcg->lock only long enough
to grab a reference to the first blkg and release the lock, then take
q->queue_lock and blkcg->lock in the correct order to destroy that
blkg. This is a very cold path, so repeatedly grabbing and releasing
the locks is fine.
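In short, the destroy loop after this patch boils down to the following
(simplified sketch; see the diff below for the actual code):

	while ((blkg = blkcg_get_first_blkg(blkcg))) {
		/* temporary reference held, no locks held here */
		spin_lock_irq(&blkg->q->queue_lock);	/* outer lock first */
		spin_lock(&blkcg->lock);		/* then the inner lock */
		blkg_destroy(blkg);
		spin_unlock(&blkcg->lock);
		spin_unlock_irq(&blkg->q->queue_lock);
		blkg_put(blkg);		/* drop the temporary reference */
		cond_resched();		/* many blkgs can accumulate */
	}

The blkg_get()/blkg_put() pair keeps the blkg alive across the window
where no lock is held, so the subsequent locking in the correct order
always operates on a valid blkg.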
This also prepares for protecting blkgs with blkcg_mutex instead of
queue_lock.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/blk-cgroup.c | 45 ++++++++++++++++++++++++++-------------------
1 file changed, 26 insertions(+), 19 deletions(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 53a64bfe4a24..795efb5ccb5e 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1283,6 +1283,21 @@ struct list_head *blkcg_get_cgwb_list(struct cgroup_subsys_state *css)
* This finally frees the blkcg.
*/
+static struct blkcg_gq *blkcg_get_first_blkg(struct blkcg *blkcg)
+{
+ struct blkcg_gq *blkg = NULL;
+
+ spin_lock_irq(&blkcg->lock);
+ if (!hlist_empty(&blkcg->blkg_list)) {
+ blkg = hlist_entry(blkcg->blkg_list.first, struct blkcg_gq,
+ blkcg_node);
+ blkg_get(blkg);
+ }
+ spin_unlock_irq(&blkcg->lock);
+
+ return blkg;
+}
+
/**
* blkcg_destroy_blkgs - responsible for shooting down blkgs
* @blkcg: blkcg of interest
@@ -1296,32 +1311,24 @@ struct list_head *blkcg_get_cgwb_list(struct cgroup_subsys_state *css)
*/
static void blkcg_destroy_blkgs(struct blkcg *blkcg)
{
- might_sleep();
+ struct blkcg_gq *blkg;
- spin_lock_irq(&blkcg->lock);
+ might_sleep();
- while (!hlist_empty(&blkcg->blkg_list)) {
- struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
- struct blkcg_gq, blkcg_node);
+ while ((blkg = blkcg_get_first_blkg(blkcg))) {
struct request_queue *q = blkg->q;
- if (need_resched() || !spin_trylock(&q->queue_lock)) {
- /*
- * Given that the system can accumulate a huge number
- * of blkgs in pathological cases, check to see if we
- * need to rescheduling to avoid softlockup.
- */
- spin_unlock_irq(&blkcg->lock);
- cond_resched();
- spin_lock_irq(&blkcg->lock);
- continue;
- }
+ spin_lock_irq(&q->queue_lock);
+ spin_lock(&blkcg->lock);
blkg_destroy(blkg);
- spin_unlock(&q->queue_lock);
- }
- spin_unlock_irq(&blkcg->lock);
+ spin_unlock(&blkcg->lock);
+ spin_unlock_irq(&q->queue_lock);
+
+ blkg_put(blkg);
+ cond_resched();
+ }
}
/**
--
2.39.2