Date: Fri, 28 Feb 2025 19:20:26 +0000
From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: inwardvessel
Cc: tj@kernel.org, shakeel.butt@linux.dev, mhocko@kernel.org,
	hannes@cmpxchg.org, akpm@linux-foundation.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH 3/4 v2] cgroup: separate rstat locks for subsystems
References: <20250227215543.49928-1-inwardvessel@gmail.com>
	<20250227215543.49928-4-inwardvessel@gmail.com>
In-Reply-To: <20250227215543.49928-4-inwardvessel@gmail.com>

On Thu, Feb 27, 2025 at 01:55:42PM -0800, inwardvessel wrote:
> From: JP Kobryn
>
> Let the existing locks be dedicated to the base stats and rename them as
> such. Also add new rstat locks for each enabled subsystem. When handling
> cgroup subsystem states, distinguish between formal subsystems (memory,
> io, etc) and the base stats subsystem state (represented by
> cgroup::self) to decide on which locks to take. This change is made to
> prevent contention between subsystems when updating/flushing stats.
>
> Signed-off-by: JP Kobryn
> ---
>  kernel/cgroup/rstat.c | 93 +++++++++++++++++++++++++++++++++----------
>  1 file changed, 72 insertions(+), 21 deletions(-)
>
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index 88908ef9212d..b3eaefc1fd07 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -9,8 +9,12 @@
>
>  #include 
>
> -static DEFINE_SPINLOCK(cgroup_rstat_lock);
> -static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> +static DEFINE_SPINLOCK(cgroup_rstat_base_lock);
> +static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_base_cpu_lock);
> +
> +static spinlock_t cgroup_rstat_subsys_lock[CGROUP_SUBSYS_COUNT];
> +static DEFINE_PER_CPU(raw_spinlock_t,
> +		cgroup_rstat_subsys_cpu_lock[CGROUP_SUBSYS_COUNT]);
>
>  static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
>
> @@ -20,8 +24,13 @@ static struct cgroup_rstat_cpu *cgroup_rstat_cpu(
>  	return per_cpu_ptr(css->rstat_cpu, cpu);
>  }
>
> +static inline bool is_base_css(struct cgroup_subsys_state *css)
> +{
> +	return css->ss == NULL;
> +}
> +
>  /*
> - * Helper functions for rstat per CPU lock (cgroup_rstat_cpu_lock).
> + * Helper functions for rstat per CPU locks.
>   *
>   * This makes it easier to diagnose locking issues and contention in
>   * production environments. The parameter @fast_path determine the
> @@ -36,12 +45,12 @@ unsigned long _cgroup_rstat_cpu_lock(raw_spinlock_t *cpu_lock, int cpu,
>  	bool contended;
>
>  	/*
> -	 * The _irqsave() is needed because cgroup_rstat_lock is
> -	 * spinlock_t which is a sleeping lock on PREEMPT_RT. Acquiring
> -	 * this lock with the _irq() suffix only disables interrupts on
> -	 * a non-PREEMPT_RT kernel. The raw_spinlock_t below disables
> -	 * interrupts on both configurations. The _irqsave() ensures
> -	 * that interrupts are always disabled and later restored.
> +	 * The _irqsave() is needed because the locks used for flushing are
> +	 * spinlock_t which is a sleeping lock on PREEMPT_RT. Acquiring this lock
> +	 * with the _irq() suffix only disables interrupts on a non-PREEMPT_RT
> +	 * kernel. The raw_spinlock_t below disables interrupts on both
> +	 * configurations. The _irqsave() ensures that interrupts are always
> +	 * disabled and later restored.
>  	 */
>  	contended = !raw_spin_trylock_irqsave(cpu_lock, flags);
>  	if (contended) {
> @@ -87,7 +96,7 @@ __bpf_kfunc void cgroup_rstat_updated(
>  		struct cgroup_subsys_state *css, int cpu)
>  {
>  	struct cgroup *cgrp = css->cgroup;
> -	raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
> +	raw_spinlock_t *cpu_lock;
>  	unsigned long flags;
>
>  	/*
> @@ -101,6 +110,12 @@ __bpf_kfunc void cgroup_rstat_updated(
>  	if (data_race(cgroup_rstat_cpu(css, cpu)->updated_next))
>  		return;
>
> +	if (is_base_css(css))
> +		cpu_lock = per_cpu_ptr(&cgroup_rstat_base_cpu_lock, cpu);
> +	else
> +		cpu_lock = per_cpu_ptr(cgroup_rstat_subsys_cpu_lock, cpu) +
> +			css->ss->id;
> +

Maybe wrap this in a macro or function since it's used more than once.
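Something like the below, perhaps (completely untested, helper name is just
for illustration):

/* sketch only, not in the patch: pick the per-CPU rstat lock backing @css */
static raw_spinlock_t *ss_rstat_cpu_lock(struct cgroup_subsys_state *css,
					 int cpu)
{
	if (is_base_css(css))
		return per_cpu_ptr(&cgroup_rstat_base_cpu_lock, cpu);

	return per_cpu_ptr(cgroup_rstat_subsys_cpu_lock, cpu) + css->ss->id;
}

Then both cgroup_rstat_updated() and cgroup_rstat_updated_list() can just do
cpu_lock = ss_rstat_cpu_lock(css, cpu) (with root instead of css in the
latter).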
>  	flags = _cgroup_rstat_cpu_lock(cpu_lock, cpu, cgrp, true);
>
>  	/* put @css and all ancestors on the corresponding updated lists */
> @@ -208,11 +223,17 @@ static struct cgroup_subsys_state *cgroup_rstat_updated_list(
>  		struct cgroup_subsys_state *root, int cpu)
>  {
>  	struct cgroup *cgrp = root->cgroup;
> -	raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
>  	struct cgroup_rstat_cpu *rstatc = cgroup_rstat_cpu(root, cpu);
>  	struct cgroup_subsys_state *head = NULL, *parent, *child;
> +	raw_spinlock_t *cpu_lock;
>  	unsigned long flags;
>
> +	if (is_base_css(root))
> +		cpu_lock = per_cpu_ptr(&cgroup_rstat_base_cpu_lock, cpu);
> +	else
> +		cpu_lock = per_cpu_ptr(cgroup_rstat_subsys_cpu_lock, cpu) +
> +			root->ss->id;
> +
>  	flags = _cgroup_rstat_cpu_lock(cpu_lock, cpu, cgrp, false);
>
>  	/* Return NULL if this subtree is not on-list */
> @@ -315,7 +336,7 @@ static void cgroup_rstat_flush_locked(struct cgroup_subsys_state *css,
>  	struct cgroup *cgrp = css->cgroup;
>  	int cpu;
>
> -	lockdep_assert_held(&cgroup_rstat_lock);
> +	lockdep_assert_held(&lock);
>
>  	for_each_possible_cpu(cpu) {
>  		struct cgroup_subsys_state *pos;
> @@ -356,12 +377,18 @@ static void cgroup_rstat_flush_locked(struct cgroup_subsys_state *css,
>  __bpf_kfunc void cgroup_rstat_flush(struct cgroup_subsys_state *css)
>  {
>  	struct cgroup *cgrp = css->cgroup;
> +	spinlock_t *lock;
> +
> +	if (is_base_css(css))
> +		lock = &cgroup_rstat_base_lock;
> +	else
> +		lock = &cgroup_rstat_subsys_lock[css->ss->id];

Same here. Also, instead of passing locks around, can we just pass the css
to __cgroup_rstat_lock/unlock() and have them find the correct lock and use
it directly? (See the sketch at the end of this mail.)

>
>  	might_sleep();
>
> -	__cgroup_rstat_lock(&cgroup_rstat_lock, cgrp, -1);
> -	cgroup_rstat_flush_locked(css, &cgroup_rstat_lock);
> -	__cgroup_rstat_unlock(&cgroup_rstat_lock, cgrp, -1);
> +	__cgroup_rstat_lock(lock, cgrp, -1);
> +	cgroup_rstat_flush_locked(css, lock);
> +	__cgroup_rstat_unlock(lock, cgrp, -1);
>  }
>
>  /**
> @@ -376,10 +403,16 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup_subsys_state *css)
>  void cgroup_rstat_flush_hold(struct cgroup_subsys_state *css)
>  {
>  	struct cgroup *cgrp = css->cgroup;
> +	spinlock_t *lock;
> +
> +	if (is_base_css(css))
> +		lock = &cgroup_rstat_base_lock;
> +	else
> +		lock = &cgroup_rstat_subsys_lock[css->ss->id];
>
>  	might_sleep();
> -	__cgroup_rstat_lock(&cgroup_rstat_lock, cgrp, -1);
> -	cgroup_rstat_flush_locked(css, &cgroup_rstat_lock);
> +	__cgroup_rstat_lock(lock, cgrp, -1);
> +	cgroup_rstat_flush_locked(css, lock);
>  }
>
>  /**
> @@ -389,7 +422,14 @@ void cgroup_rstat_flush_hold(struct cgroup_subsys_state *css)
>  void cgroup_rstat_flush_release(struct cgroup_subsys_state *css)
>  {
>  	struct cgroup *cgrp = css->cgroup;
> -	__cgroup_rstat_unlock(&cgroup_rstat_lock, cgrp, -1);
> +	spinlock_t *lock;
> +
> +	if (is_base_css(css))
> +		lock = &cgroup_rstat_base_lock;
> +	else
> +		lock = &cgroup_rstat_subsys_lock[css->ss->id];
> +
> +	__cgroup_rstat_unlock(lock, cgrp, -1);
>  }
>
>  int cgroup_rstat_init(struct cgroup_subsys_state *css)
> @@ -435,10 +475,21 @@ void cgroup_rstat_exit(struct cgroup_subsys_state *css)
>
>  void __init cgroup_rstat_boot(void)
>  {
> -	int cpu;
> +	struct cgroup_subsys *ss;
> +	int cpu, ssid;
>
> -	for_each_possible_cpu(cpu)
> -		raw_spin_lock_init(per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu));
> +	for_each_subsys(ss, ssid) {
> +		spin_lock_init(&cgroup_rstat_subsys_lock[ssid]);
> +	}
> +
> +	for_each_possible_cpu(cpu) {
> +		raw_spin_lock_init(per_cpu_ptr(&cgroup_rstat_base_cpu_lock, cpu));
> +
> +		for_each_subsys(ss, ssid) {
> +			raw_spin_lock_init(
> +				per_cpu_ptr(cgroup_rstat_subsys_cpu_lock, cpu) + ssid);
> +		}
> +	}
>  }
>
>  /*
> --
> 2.43.5
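For the flush-side locks I mean something along these lines (again only a
sketch based on the callers quoted above; the helper name and the changed
signatures are made up):

/* sketch only, not in the patch: pick the global rstat lock backing @css */
static spinlock_t *ss_rstat_lock(struct cgroup_subsys_state *css)
{
	if (is_base_css(css))
		return &cgroup_rstat_base_lock;

	return &cgroup_rstat_subsys_lock[css->ss->id];
}

with __cgroup_rstat_lock/unlock() taking the css (plus cpu_in_loop) and
calling ss_rstat_lock(css) internally, so cgroup_rstat_flush(),
cgroup_rstat_flush_hold() and cgroup_rstat_flush_release() don't need to
open-code the same if/else at every call site.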