From: Yosry Ahmed
Date: Tue, 23 May 2023 19:43:22 -0700
Subject: Re: [PATCH] blk-cgroup: Flush stats before releasing blkcg_gq
To: Ming Lei
Cc: Linux-MM, Michal Hocko, Shakeel Butt, Johannes Weiner, Roman Gushchin,
    Muchun Song, Jens Axboe, linux-block@vger.kernel.org, Waiman Long,
    cgroups@vger.kernel.org, Tejun Heo, mkoutny@suse.com

On Tue, May 23, 2023 at 7:37 PM Ming Lei wrote:
>
> Hi Yosry,
>
> On Tue, May 23, 2023 at 07:06:38PM -0700, Yosry Ahmed wrote:
> > Hi Ming,
> >
> > On Tue, May 23, 2023 at 6:21 PM Ming Lei wrote:
> > >
> > > As noted by Michal, the blkg_iostat_set's in the lockless list
> > > hold reference to blkg's to protect against their removal. Those
> > > blkg's hold reference to blkcg. When a cgroup is being destroyed,
> > > cgroup_rstat_flush() is only called at css_release_work_fn(), which
> > > is called when the blkcg reference count reaches 0. This circular
> > > dependency will prevent blkcg and some blkgs from being freed after
> > > they are made offline.
> >
> > I am not at all familiar with blkcg, but does calling
> > cgroup_rstat_flush() in offline_css() fix the problem?
>
> Except for offline, this list needs to be flushed after the associated disk
> is deleted.
>
> > or can items be
> > added to the lockless list(s) after the blkcg is offlined?
>
> Yeah.
>
> percpu_ref_*get(&blkg->refcnt) still can succeed after the percpu refcnt
> is killed in blkg_destroy(), which is called both when the css is offlined
> and when the disk is removed.

I see.
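
[For readers following along: a minimal sketch of the percpu_ref semantics
Ming is relying on -- illustrative only, not code from the patch. After
percpu_ref_kill(), percpu_ref_tryget_live() starts failing, but a plain
percpu_ref_get() by a caller that already holds a reference (as a bio does
through bio->bi_blkg) keeps succeeding, which is why stats can still be
queued after the blkcg is offlined.]

    #include <linux/percpu-refcount.h>

    /* Illustrative only: "ref" stands in for blkg->refcnt. */
    static void ref_after_kill_demo(struct percpu_ref *ref)
    {
            percpu_ref_get(ref);        /* simulate an existing holder (bio->bi_blkg) */

            percpu_ref_kill(ref);       /* drop the initial ref, mark the ref DEAD */

            WARN_ON(percpu_ref_tryget_live(ref)); /* new "live" gets now fail... */

            percpu_ref_get(ref);        /* ...but a plain get still succeeds */
            percpu_ref_put(ref);

            percpu_ref_put(ref);        /* final put invokes the release callback */
    }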

> > >
> > > It is less a problem if the cgroup to be destroyed also has other
> > > controllers like memory that will call cgroup_rstat_flush(), which will
> > > clean up the reference count. If block is the only controller that uses
> > > rstat, these offline blkcg and blkgs may never be freed, leaking more
> > > and more memory over time.
> > >
> > > To prevent this potential memory leak:
> > >
> > > - a new cgroup_rstat_css_cpu_flush() function is added to flush stats
> > >   for a given css and cpu. This new function will be called in
> > >   __blkg_release().
> > >
> > > - don't grab bio->bi_blkg when adding the stats into blkcg's per-cpu
> > >   stat list; this kind of handling was the most fragile part of the
> > >   original patch
> > >
> > > Based on Waiman's patch:
> > >
> > > https://lore.kernel.org/linux-block/20221215033132.230023-3-longman@redhat.com/
> > >
> > > Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
> > > Cc: Waiman Long
> > > Cc: cgroups@vger.kernel.org
> > > Cc: Tejun Heo
> > > Cc: mkoutny@suse.com
> > > Signed-off-by: Ming Lei
> > > ---
> > >  block/blk-cgroup.c     | 15 +++++++++++++--
> > >  include/linux/cgroup.h |  1 +
> > >  kernel/cgroup/rstat.c  | 18 ++++++++++++++++++
> > >  3 files changed, 32 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> > > index 0ce64dd73cfe..5437b6af3955 100644
> > > --- a/block/blk-cgroup.c
> > > +++ b/block/blk-cgroup.c
> > > @@ -163,10 +163,23 @@ static void blkg_free(struct blkcg_gq *blkg)
> > >  static void __blkg_release(struct rcu_head *rcu)
> > >  {
> > >         struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
> > > +       struct blkcg *blkcg = blkg->blkcg;
> > > +       int cpu;
> > >
> > >  #ifdef CONFIG_BLK_CGROUP_PUNT_BIO
> > >         WARN_ON(!bio_list_empty(&blkg->async_bios));
> > >  #endif
> > > +       /*
> > > +        * Flush all the non-empty percpu lockless lists before releasing
> > > +        * us. Meantime no new bio can refer to this blkg any more given
> > > +        * the refcnt is killed.
> > > +        */
> > > +       for_each_possible_cpu(cpu) {
> > > +               struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
> > > +
> > > +               if (!llist_empty(lhead))
> > > +                       cgroup_rstat_css_cpu_flush(&blkcg->css, cpu);
> > > +       }
> > >
> > >         /* release the blkcg and parent blkg refs this blkg has been holding */
> > >         css_put(&blkg->blkcg->css);
> > > @@ -991,7 +1004,6 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
> > >                 if (parent && parent->parent)
> > >                         blkcg_iostat_update(parent, &blkg->iostat.cur,
> > >                                             &blkg->iostat.last);
> > > -               percpu_ref_put(&blkg->refcnt);
> > >         }
> > >
> > >  out:
> > > @@ -2075,7 +2087,6 @@ void blk_cgroup_bio_start(struct bio *bio)
> > >
> > >                 llist_add(&bis->lnode, lhead);
> > >                 WRITE_ONCE(bis->lqueued, true);
> > > -               percpu_ref_get(&bis->blkg->refcnt);
> > >         }
> > >
> > >         u64_stats_update_end_irqrestore(&bis->sync, flags);
> > > diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
> > > index 885f5395fcd0..97d4764d8e6a 100644
> > > --- a/include/linux/cgroup.h
> > > +++ b/include/linux/cgroup.h
> > > @@ -695,6 +695,7 @@ void cgroup_rstat_flush(struct cgroup *cgrp);
> > >  void cgroup_rstat_flush_atomic(struct cgroup *cgrp);
> > >  void cgroup_rstat_flush_hold(struct cgroup *cgrp);
> > >  void cgroup_rstat_flush_release(void);
> > > +void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu);
> > >
> > >  /*
> > >   * Basic resource stats.
> > > diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> > > index 9c4c55228567..96e7a4e6da72 100644
> > > --- a/kernel/cgroup/rstat.c
> > > +++ b/kernel/cgroup/rstat.c
> > > @@ -281,6 +281,24 @@ void cgroup_rstat_flush_release(void)
> > >         spin_unlock_irq(&cgroup_rstat_lock);
> > >  }
> > >
> > > +/**
> > > + * cgroup_rstat_css_cpu_flush - flush stats for the given css and cpu
> > > + * @css: target css to be flushed
> > > + * @cpu: the cpu that holds the stats to be flushed
> > > + *
> > > + * A lightweight rstat flush operation for a given css and cpu.
> > > + * Only the cpu_lock is being held for mutual exclusion; the
> > > + * cgroup_rstat_lock isn't used.
> >
> > (Adding linux-mm and memcg maintainers)
> > +Linux-MM +Michal Hocko +Shakeel Butt +Johannes Weiner +Roman Gushchin
> > +Muchun Song
> >
> > I don't think flushing the stats without holding cgroup_rstat_lock is
> > safe for memcg stats flushing. mem_cgroup_css_rstat_flush() modifies
> > some non-percpu data (e.g. memcg->vmstats->state,
> > memcg->vmstats->state_pending).
> >
> > Perhaps make this a separate callback from css_rstat_flush() (e.g.
> > css_rstat_flush_cpu() or something), so that it's clear which
> > subsystems support this? In this case, only blkcg would implement the
> > callback.
>
> Also I guess cgroup_rstat_flush() can be used here too.

If we don't really care about flushing other subsystems, then yeah, that
would be a simpler approach.

> BTW, cgroup_rstat_flush() is annotated as might_sleep(), however it won't
> actually sleep, so can this might_sleep() be removed?

It will actually sleep if may_sleep=true, and I have a change in Andrew's
MM tree that removes the *_atomic() variant and makes it always sleepable.
In cgroup_rstat_flush_locked(), if need_resched() or
spin_needbreak(&cgroup_rstat_lock) is true, we release the lock,
reschedule, and then re-acquire it.
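
[For reference, the yield Yosry describes looks roughly like this in
cgroup_rstat_flush_locked() -- a condensed sketch of kernel/cgroup/rstat.c
from around this time, not a verbatim copy:]

    static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
    {
            int cpu;

            lockdep_assert_held(&cgroup_rstat_lock);

            for_each_possible_cpu(cpu) {
                    /* ... flush this cpu's updated-cgroup list ... */

                    /* play nice and yield if necessary */
                    if (may_sleep && (need_resched() ||
                                      spin_needbreak(&cgroup_rstat_lock))) {
                            spin_unlock_irq(&cgroup_rstat_lock);
                            if (!cond_resched())
                                    cpu_relax();
                            spin_lock_irq(&cgroup_rstat_lock);
                    }
            }
    }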
> > >
> > > + */
> > > +void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu)
> > > +{
> > > +       raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
> > > +
> > > +       raw_spin_lock_irq(cpu_lock);
> > > +       css->ss->css_rstat_flush(css, cpu);
> >
> > I think we need to check that css_rstat_flush() (or a new callback) is
> > implemented before calling it here.
>
> Good catch!
>
>
> Thanks,
> Ming
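
[For illustration, the guarded call Yosry suggests could look like the
sketch below. This is not the final patch: whether css_rstat_flush() is
kept or a dedicated callback is added was still open at this point, and
the closing unlock, which the quoted excerpt truncates, is assumed here.]

    void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu)
    {
            raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);

            raw_spin_lock_irq(cpu_lock);
            /* not every subsystem implements css_rstat_flush() */
            if (css->ss->css_rstat_flush)
                    css->ss->css_rstat_flush(css, cpu);
            raw_spin_unlock_irq(cpu_lock);
    }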