Date: Wed, 19 Mar 2025 17:16:02 +0000
From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Greg Thelen
Cc: Tejun Heo, Johannes Weiner, Michal Koutný, Andrew Morton,
    Eric Dumazet, cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Eric Dumazet
Subject: Re: [PATCH] cgroup/rstat: avoid disabling irqs for O(num_cpu)
In-Reply-To: <20250319071330.898763-1-gthelen@google.com>
References: <20250319071330.898763-1-gthelen@google.com>

On Wed, Mar 19, 2025 at 12:13:30AM -0700, Greg Thelen wrote:
> From: Eric Dumazet
>
> cgroup_rstat_flush_locked() grabs the irq-safe cgroup_rstat_lock while
> iterating all possible cpus. It only drops the lock if there is
> scheduler or spin lock contention. If neither, then interrupts can be
> disabled for a long time. On large machines this can disable interrupts
> for long enough to drop network packets. On 400+ CPU machines I've seen
> interrupts disabled for over 40 msec.
>
> Prevent rstat from disabling interrupts while processing all possible
> cpus. Instead drop and reacquire cgroup_rstat_lock for each cpu. This
> approach was previously discussed in
> https://lore.kernel.org/lkml/ZBz%2FV5a7%2F6PZeM7S@slm.duckdns.org/,
> though that was in the context of a non-irq rstat spin lock.
>
> Benchmark this change with:
> 1) a single stat_reader process with 400 threads, each reading a test
>    memcg's memory.stat repeatedly for 10 seconds.
> 2) 400 memory hog processes running in the test memcg and repeatedly
>    charging memory until oom killed. Then they repeat charging and oom
>    killing.
>
> v6.14-rc6 with CONFIG_IRQSOFF_TRACER, with stat_reader and hogs, finds
> interrupts are disabled by rstat for 45341 usec:
>  # => started at: _raw_spin_lock_irq
>  # => ended at:   cgroup_rstat_flush
>  #
>  #
>  #                  _------=> CPU#
>  #                 / _-----=> irqs-off/BH-disabled
>  #                | / _----=> need-resched
>  #                || / _---=> hardirq/softirq
>  #                ||| / _--=> preempt-depth
>  #                |||| / _-=> migrate-disable
>  #                ||||| /     delay
>  #  cmd     pid   |||||| time  |   caller
>  #     \   /      ||||||  \    |    /
>  stat_rea-96532   52d....    0us*: _raw_spin_lock_irq
>  stat_rea-96532   52d.... 45342us : cgroup_rstat_flush
>  stat_rea-96532   52d.... 45342us : tracer_hardirqs_on <-cgroup_rstat_flush
>  stat_rea-96532   52d.... 45343us : <stack trace>
>   => memcg1_stat_format
>   => memory_stat_format
>   => memory_stat_show
>   => seq_read_iter
>   => vfs_read
>   => ksys_read
>   => do_syscall_64
>   => entry_SYSCALL_64_after_hwframe
>
> With this patch the CONFIG_IRQSOFF_TRACER doesn't find rstat to be the
> longest holder. The longest irqs-off holder has irqs disabled for
> 4142 usec, a huge reduction from the previous 45341 usec rstat finding.
>
> Running the stat_reader memory.stat reader for 10 seconds:
> - without memory hogs: 9.84M accesses => 12.7M accesses
> - with memory hogs:    9.46M accesses => 11.1M accesses
> The throughput of memory.stat access improves.
>
> The mode of memory.stat access latency after grouping by power-of-2
> buckets:
> - without memory hogs: 64 usec => 16 usec
> - with memory hogs:    64 usec =>  8 usec
> The memory.stat latency improves.
>
> Signed-off-by: Eric Dumazet
> Signed-off-by: Greg Thelen
> Tested-by: Greg Thelen
> ---
>  kernel/cgroup/rstat.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index aac91466279f..976c24b3671a 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -323,13 +323,11 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
>  			rcu_read_unlock();
>  		}
>  
> -		/* play nice and yield if necessary */
> -		if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
> -			__cgroup_rstat_unlock(cgrp, cpu);
> -			if (!cond_resched())
> -				cpu_relax();
> -			__cgroup_rstat_lock(cgrp, cpu);
> -		}
> +		/* play nice and avoid disabling interrupts for a long time */
> +		__cgroup_rstat_unlock(cgrp, cpu);
> +		if (!cond_resched())
> +			cpu_relax();
> +		__cgroup_rstat_lock(cgrp, cpu);
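To make the locking pattern concrete: instead of holding the IRQ-disabling
lock across the whole O(num_cpu) walk, the patch takes and releases it once
per CPU, so waiters get a window between iterations. A minimal userspace
sketch of that pattern (illustrative only: a pthread mutex stands in for
cgroup_rstat_lock, and flush_one_cpu() is a made-up placeholder for the
real per-CPU flush work):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_CPUS 8

static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical stand-in for one CPU's worth of flushing. */
static void flush_one_cpu(int cpu)
{
        printf("flushing cpu %d\n", cpu);
}

static void flush_all(void)
{
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                /*
                 * Hold the lock for a single CPU's work only, bounding
                 * each critical section rather than spanning all CPUs.
                 */
                pthread_mutex_lock(&flush_lock);
                flush_one_cpu(cpu);
                pthread_mutex_unlock(&flush_lock);
                sched_yield();  /* analogue of cond_resched()/cpu_relax() */
        }
}

int main(void)
{
        flush_all();
        return 0;
}

(Build with cc -pthread; in the kernel the unlock additionally re-enables
interrupts, which is the whole point of the change.)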
This patch looks good as-is, so feel free to add:

Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>

That being said, I think we can do further cleanups here now. We should
probably inline cgroup_rstat_flush_locked() into cgroup_rstat_flush(), and
move the lock hold and release into the loop. cgroup_rstat_flush_hold() can
simply call cgroup_rstat_flush() then hold the lock.

Something like this on top:
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 976c24b3671a7..4f4b8d22555d7 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -299,17 +299,29 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
 	spin_unlock_irq(&cgroup_rstat_lock);
 }
 
-/* see cgroup_rstat_flush() */
-static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
-	__releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
+/**
+ * cgroup_rstat_flush - flush stats in @cgrp's subtree
+ * @cgrp: target cgroup
+ *
+ * Collect all per-cpu stats in @cgrp's subtree into the global counters
+ * and propagate them upwards. After this function returns, all cgroups in
+ * the subtree have up-to-date ->stat.
+ *
+ * This also gets all cgroups in the subtree including @cgrp off the
+ * ->updated_children lists.
+ *
+ * This function may block.
+ */
+__bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
 {
 	int cpu;
 
-	lockdep_assert_held(&cgroup_rstat_lock);
-
+	might_sleep();
 	for_each_possible_cpu(cpu) {
 		struct cgroup *pos = cgroup_rstat_updated_list(cgrp, cpu);
 
+		/* Reacquire for every CPU to avoid disabling IRQs too long */
+		__cgroup_rstat_lock(cgrp, cpu);
 		for (; pos; pos = pos->rstat_flush_next) {
 			struct cgroup_subsys_state *css;
 
@@ -322,37 +334,12 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
 				css->ss->css_rstat_flush(css, cpu);
 			rcu_read_unlock();
 		}
-
-		/* play nice and avoid disabling interrupts for a long time */
 		__cgroup_rstat_unlock(cgrp, cpu);
 		if (!cond_resched())
 			cpu_relax();
-		__cgroup_rstat_lock(cgrp, cpu);
 	}
 }
 
-/**
- * cgroup_rstat_flush - flush stats in @cgrp's subtree
- * @cgrp: target cgroup
- *
- * Collect all per-cpu stats in @cgrp's subtree into the global counters
- * and propagate them upwards. After this function returns, all cgroups in
- * the subtree have up-to-date ->stat.
- *
- * This also gets all cgroups in the subtree including @cgrp off the
- * ->updated_children lists.
- *
- * This function may block.
- */
-__bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
-{
-	might_sleep();
-
-	__cgroup_rstat_lock(cgrp, -1);
-	cgroup_rstat_flush_locked(cgrp);
-	__cgroup_rstat_unlock(cgrp, -1);
-}
-
 /**
  * cgroup_rstat_flush_hold - flush stats in @cgrp's subtree and hold
  * @cgrp: target cgroup
@@ -365,9 +352,8 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
 void cgroup_rstat_flush_hold(struct cgroup *cgrp)
 	__acquires(&cgroup_rstat_lock)
 {
-	might_sleep();
+	cgroup_rstat_flush(cgrp);
 	__cgroup_rstat_lock(cgrp, -1);
-	cgroup_rstat_flush_locked(cgrp);
 }
 
 /**
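For reference, with the above applied the two entry points would read
roughly as follows (a sketch assembled from the hunks above, not a compiled
tree; the unchanged per-css flush body is elided rather than reproduced):

__bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
{
        int cpu;

        might_sleep();
        for_each_possible_cpu(cpu) {
                struct cgroup *pos = cgroup_rstat_updated_list(cgrp, cpu);

                /* Reacquire for every CPU to avoid disabling IRQs too long */
                __cgroup_rstat_lock(cgrp, cpu);
                for (; pos; pos = pos->rstat_flush_next) {
                        /* ... per-css flushing, unchanged ... */
                }
                __cgroup_rstat_unlock(cgrp, cpu);
                if (!cond_resched())
                        cpu_relax();
        }
}

void cgroup_rstat_flush_hold(struct cgroup *cgrp)
        __acquires(&cgroup_rstat_lock)
{
        /* Flush outside the lock, then take it for the caller. */
        cgroup_rstat_flush(cgrp);
        __cgroup_rstat_lock(cgrp, -1);
}

That keeps all flushing under the per-CPU lock/unlock discipline and leaves
only the caller-visible lock acquisition in cgroup_rstat_flush_hold().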