From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20230524011935.719659-1-ming.lei@redhat.com>
In-Reply-To: <20230524011935.719659-1-ming.lei@redhat.com>
From: Yosry Ahmed <yosryahmed@google.com>
Date: Tue, 23 May 2023 19:06:38 -0700
Subject: Re: [PATCH] blk-cgroup: Flush stats before releasing blkcg_gq
To: Ming Lei <ming.lei@redhat.com>, Linux-MM, Michal Hocko, Shakeel Butt,
	Johannes Weiner, Roman Gushchin, Muchun Song
Cc: Jens Axboe, linux-block@vger.kernel.org, Waiman Long,
	cgroups@vger.kernel.org, Tejun Heo, mkoutny@suse.com
Content-Type: text/plain; charset="UTF-8"
Hi Ming,

On Tue, May 23, 2023 at 6:21 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> As noted by Michal, the blkg_iostat_set's in the lockless list
> hold reference to blkg's to protect against their removal. Those
> blkg's hold reference to blkcg. When a cgroup is being destroyed,
> cgroup_rstat_flush() is only called at css_release_work_fn() which
> is called when the blkcg reference count reaches 0.
> This circular dependency will prevent blkcg and some blkgs from being
> freed after they are made offline.

I am not at all familiar with blkcg, but does calling
cgroup_rstat_flush() in offline_css() fix the problem? Or can items be
added to the lockless list(s) after the blkcg is offlined?

> It is less a problem if the cgroup to be destroyed also has other
> controllers like memory that will call cgroup_rstat_flush() which will
> clean up the reference count. If block is the only controller that uses
> rstat, these offline blkcg and blkgs may never be freed, leaking more
> and more memory over time.
>
> To prevent this potential memory leak:
>
> - a new cgroup_rstat_css_cpu_flush() function is added to flush stats
>   for a given css and cpu. This new function will be called in
>   __blkg_release().
>
> - don't grab bio->bi_blkg when adding the stats into blkcg's per-cpu
>   stat list; this kind of handling is the most fragile part of the
>   original patch
>
> Based on Waiman's patch:
>
> https://lore.kernel.org/linux-block/20221215033132.230023-3-longman@redhat.com/
>
> Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
> Cc: Waiman Long
> Cc: cgroups@vger.kernel.org
> Cc: Tejun Heo
> Cc: mkoutny@suse.com
> Signed-off-by: Ming Lei
> ---
>  block/blk-cgroup.c     | 15 +++++++++++++--
>  include/linux/cgroup.h |  1 +
>  kernel/cgroup/rstat.c  | 18 ++++++++++++++++++
>  3 files changed, 32 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 0ce64dd73cfe..5437b6af3955 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -163,10 +163,23 @@ static void blkg_free(struct blkcg_gq *blkg)
>  static void __blkg_release(struct rcu_head *rcu)
>  {
>  	struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
> +	struct blkcg *blkcg = blkg->blkcg;
> +	int cpu;
>
>  #ifdef CONFIG_BLK_CGROUP_PUNT_BIO
>  	WARN_ON(!bio_list_empty(&blkg->async_bios));
>  #endif
> +	/*
> +	 * Flush all the non-empty percpu lockless lists before releasing
> +	 * us. Meantime no new bio can refer to this blkg any more given
> +	 * the refcnt is killed.
> +	 */
> +	for_each_possible_cpu(cpu) {
> +		struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
> +
> +		if (!llist_empty(lhead))
> +			cgroup_rstat_css_cpu_flush(&blkcg->css, cpu);
> +	}
>
>  	/* release the blkcg and parent blkg refs this blkg has been holding */
>  	css_put(&blkg->blkcg->css);
> @@ -991,7 +1004,6 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
>  		if (parent && parent->parent)
>  			blkcg_iostat_update(parent, &blkg->iostat.cur,
>  					    &blkg->iostat.last);
> -		percpu_ref_put(&blkg->refcnt);
>  	}
>
>  out:
> @@ -2075,7 +2087,6 @@ void blk_cgroup_bio_start(struct bio *bio)
>
>  		llist_add(&bis->lnode, lhead);
>  		WRITE_ONCE(bis->lqueued, true);
> -		percpu_ref_get(&bis->blkg->refcnt);
>  	}
>
>  	u64_stats_update_end_irqrestore(&bis->sync, flags);
> diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
> index 885f5395fcd0..97d4764d8e6a 100644
> --- a/include/linux/cgroup.h
> +++ b/include/linux/cgroup.h
> @@ -695,6 +695,7 @@ void cgroup_rstat_flush(struct cgroup *cgrp);
>  void cgroup_rstat_flush_atomic(struct cgroup *cgrp);
>  void cgroup_rstat_flush_hold(struct cgroup *cgrp);
>  void cgroup_rstat_flush_release(void);
> +void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu);
>
>  /*
>   * Basic resource stats.
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index 9c4c55228567..96e7a4e6da72 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -281,6 +281,24 @@ void cgroup_rstat_flush_release(void)
>  	spin_unlock_irq(&cgroup_rstat_lock);
>  }
>
> +/**
> + * cgroup_rstat_css_cpu_flush - flush stats for the given css and cpu
> + * @css: target css to be flushed
> + * @cpu: the cpu that holds the stats to be flushed
> + *
> + * A lightweight rstat flush operation for a given css and cpu.
> + * Only the cpu_lock is being held for mutual exclusion; the
> + * cgroup_rstat_lock isn't used.

(Adding linux-mm and memcg maintainers)

+Linux-MM +Michal Hocko +Shakeel Butt +Johannes Weiner +Roman Gushchin
+Muchun Song

I don't think flushing the stats without holding cgroup_rstat_lock is
safe for memcg stats flushing. mem_cgroup_css_rstat_flush() modifies
some non-percpu data (e.g. memcg->vmstats->state and
memcg->vmstats->state_pending).

Perhaps make this a separate callback from css_rstat_flush() (e.g.
css_rstat_flush_cpu() or something), so that it's clear which
subsystems support it? In this case, only blkcg would implement this
callback.

> + */
> +void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu)
> +{
> +	raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
> +
> +	raw_spin_lock_irq(cpu_lock);
> +	css->ss->css_rstat_flush(css, cpu);

I think we need to check that css_rstat_flush() (or a new callback) is
implemented before calling it here.

> +	raw_spin_unlock_irq(cpu_lock);
> +}
> +
>  int cgroup_rstat_init(struct cgroup *cgrp)
>  {
>  	int cpu;
> --
> 2.40.1
>