From: Waiman Long <longman@redhat.com>
To: Ming Lei <ming.lei@redhat.com>, Yosry Ahmed <yosryahmed@google.com>
Cc: Linux-MM <linux-mm@kvack.org>, Michal Hocko <mhocko@kernel.org>,
Shakeel Butt <shakeelb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Muchun Song <muchun.song@linux.dev>, Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, cgroups@vger.kernel.org,
Tejun Heo <tj@kernel.org>,
mkoutny@suse.com
Subject: Re: [PATCH] blk-cgroup: Flush stats before releasing blkcg_gq
Date: Wed, 24 May 2023 00:10:55 -0400
Message-ID: <cfef97ec-bb77-4ccb-a0b2-8f1eb66afeb6@redhat.com>
In-Reply-To: <ZG14VnHl20lt9jLc@ovpn-8-17.pek2.redhat.com>

On 5/23/23 22:37, Ming Lei wrote:
> Hi Yosry,
>
> On Tue, May 23, 2023 at 07:06:38PM -0700, Yosry Ahmed wrote:
>> Hi Ming,
>>
>> On Tue, May 23, 2023 at 6:21 PM Ming Lei <ming.lei@redhat.com> wrote:
>>> As noted by Michal, the blkg_iostat_set's on the lockless list
>>> hold references to their blkg's to protect against removal, and
>>> those blkg's in turn hold references to the blkcg. When a cgroup
>>> is being destroyed, cgroup_rstat_flush() is only called from
>>> css_release_work_fn(), which runs when the blkcg reference count
>>> reaches 0. This circular dependency prevents the blkcg and some
>>> blkgs from being freed after they have been made offline.
>> I am not at all familiar with blkcg, but does calling
>> cgroup_rstat_flush() in offline_css() fix the problem?
> Besides the offline case, this list also needs to be flushed after the
> associated disk is deleted.
>
>> or can items be
>> added to the lockless list(s) after the blkcg is offlined?
> Yeah.
>
> percpu_ref_*get(&blkg->refcnt) can still succeed after the percpu
> refcnt is killed in blkg_destroy(), which is called both when the css
> goes offline and when the disk is removed.
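For reference, a minimal sketch of that lifecycle using the generic
percpu_ref API (illustrative only, not the actual blkg code):

	/* Sketch: generic percpu_ref semantics, not the blk-cgroup code. */
	static void example_lifecycle(struct blkcg_gq *blkg)
	{
		percpu_ref_kill(&blkg->refcnt);		/* as done in blkg_destroy() */

		percpu_ref_get(&blkg->refcnt);		/* still succeeds after the kill */
		percpu_ref_put(&blkg->refcnt);

		if (percpu_ref_tryget(&blkg->refcnt))	/* fails only once the count hits 0 */
			percpu_ref_put(&blkg->refcnt);

		if (percpu_ref_tryget_live(&blkg->refcnt)) /* fails as soon as the ref is killed */
			percpu_ref_put(&blkg->refcnt);
	}
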
As suggested by Tejun, we can use percpu_ref_tryget(&blkg->refcnt) to
make sure that we only take a reference while the blkg is still online.
I think it is a bit safer to take a percpu refcnt to avoid a
use-after-free.
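The stat-update side would then look roughly like this. This is only a
sketch; the llist field names (lnode, lhead) are assumptions for
illustration, not verified against the current code:

	/* Sketch: queue a blkg's per-cpu iostat only if we can still get a ref. */
	static void queue_iostat(struct blkg_iostat_set *bis,
				 struct blkcg_gq *blkg,
				 struct llist_head *lhead)
	{
		if (!percpu_ref_tryget(&blkg->refcnt))
			return;			/* blkg is going away; don't queue */
		llist_add(&bis->lnode, lhead);	/* the queued entry now pins the blkg */
		/* the flusher drops this reference after consuming the entry */
	}
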
My other concern about your patch is that the per-cpu list iterations
will be done multiple times when a blkcg is destroyed if many blkgs are
attached to it. I still prefer to do the flush once in
blkcg_destroy_blkgs(). I am going to post an updated version tomorrow
after some more testing.
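Roughly the shape I have in mind, as a sketch (cgroup_rstat_flush()
takes a struct cgroup in the current API; the destruction loop itself
is elided):

	static void blkcg_destroy_blkgs(struct blkcg *blkcg)
	{
		/* Flush once up front so no lockless-list entry still
		 * pins a blkg while the blkgs are being destroyed. */
		cgroup_rstat_flush(blkcg->css.cgroup);

		/* ... then walk and destroy the blkg's as before ... */
	}
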
Cheers,
Longman