Date: Wed, 19 Mar 2025 17:18:05 +0000
From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Mateusz Guzik
Cc: Greg Thelen, Tejun Heo, Johannes Weiner, Michal Koutný,
	Andrew Morton, Eric Dumazet, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] cgroup/rstat: avoid disabling irqs for O(num_cpu)
References: <20250319071330.898763-1-gthelen@google.com>

On Wed, Mar 19, 2025 at 11:47:32AM +0100, Mateusz Guzik wrote:
> On Wed, Mar 19, 2025 at 12:13:30AM -0700, Greg Thelen wrote:
> > From: Eric Dumazet
> > 
> > cgroup_rstat_flush_locked() grabs the irq safe cgroup_rstat_lock while
> > iterating all possible cpus. It only drops the lock if there is
> > scheduler or spin lock contention. If neither, then interrupts can be
> > disabled for a long time. On large machines this can disable interrupts
> > for a long enough time to drop network packets. On 400+ CPU machines
> > I've seen interrupts disabled for over 40 msec.
> > 
> > Prevent rstat from disabling interrupts while processing all possible
> > cpus. Instead drop and reacquire cgroup_rstat_lock for each cpu. This
> > approach was previously discussed in
> > https://lore.kernel.org/lkml/ZBz%2FV5a7%2F6PZeM7S@slm.duckdns.org/,
> > though this was in the context of a non-irq rstat spin lock.
> > 
> > Benchmark this change with:
> > 1) a single stat_reader process with 400 threads, each reading a test
> >    memcg's memory.stat repeatedly for 10 seconds.
> > 2) 400 memory hog processes running in the test memcg and repeatedly
> >    charging memory until oom killed. Then they repeat charging and oom
> >    killing.
> > 
> > v6.14-rc6 with CONFIG_IRQSOFF_TRACER with stat_reader and hogs, finds
> > interrupts are disabled by rstat for 45341 usec:
> >   # => started at: _raw_spin_lock_irq
> >   # => ended at:   cgroup_rstat_flush
> >   #
> >   #
> >   #                    _------=> CPU#
> >   #                   / _-----=> irqs-off/BH-disabled
> >   #                  | / _----=> need-resched
> >   #                  || / _---=> hardirq/softirq
> >   #                  ||| / _--=> preempt-depth
> >   #                  |||| / _-=> migrate-disable
> >   #                  ||||| /     delay
> >   #  cmd     pid     |||||| time  |   caller
> >   #     \   /        ||||||  \    |    /
> >   stat_rea-96532    52d.... 0us*: _raw_spin_lock_irq
> >   stat_rea-96532    52d.... 45342us : cgroup_rstat_flush
> >   stat_rea-96532    52d.... 45342us : tracer_hardirqs_on <-cgroup_rstat_flush
> >   stat_rea-96532    52d.... 45343us :
> >  => memcg1_stat_format
> >  => memory_stat_format
> >  => memory_stat_show
> >  => seq_read_iter
> >  => vfs_read
> >  => ksys_read
> >  => do_syscall_64
> >  => entry_SYSCALL_64_after_hwframe
> > 
> > With this patch the CONFIG_IRQSOFF_TRACER doesn't find rstat to be the
> > longest holder. The longest irqs-off holder has irqs disabled for
> > 4142 usec, a huge reduction from the previous 45341 usec rstat finding.
> > 
> > Running stat_reader memory.stat reader for 10 seconds:
> > - without memory hogs: 9.84M accesses => 12.7M accesses
> > - with memory hogs:    9.46M accesses => 11.1M accesses
> > The throughput of memory.stat access improves.
> > 
> > The mode of memory.stat access latency after grouping into power-of-2
> > buckets:
> > - without memory hogs: 64 usec => 16 usec
> > - with memory hogs:    64 usec =>  8 usec
> > The memory.stat latency improves.
> > 
> > Signed-off-by: Eric Dumazet
> > Signed-off-by: Greg Thelen
> > Tested-by: Greg Thelen
> > ---
> >  kernel/cgroup/rstat.c | 12 +++++-------
> >  1 file changed, 5 insertions(+), 7 deletions(-)
> > 
> > diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> > index aac91466279f..976c24b3671a 100644
> > --- a/kernel/cgroup/rstat.c
> > +++ b/kernel/cgroup/rstat.c
> > @@ -323,13 +323,11 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
> >  			rcu_read_unlock();
> >  		}
> >  
> > -		/* play nice and yield if necessary */
> > -		if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
> > -			__cgroup_rstat_unlock(cgrp, cpu);
> > -			if (!cond_resched())
> > -				cpu_relax();
> > -			__cgroup_rstat_lock(cgrp, cpu);
> > -		}
> > +		/* play nice and avoid disabling interrupts for a long time */
> > +		__cgroup_rstat_unlock(cgrp, cpu);
> > +		if (!cond_resched())
> > +			cpu_relax();
> > +		__cgroup_rstat_lock(cgrp, cpu);
> >  	}
> >  }
> 
> Is not this going a little too far?
> 
> the lock + irq trip is quite expensive in its own right and now is
> going to be paid for each cpu, as in the total time spent executing
> cgroup_rstat_flush_locked is going to go up.
> 
> Would your problem go away toggling this every -- say -- 8 cpus?

I was concerned about this too, and about more lock bouncing, but the
testing suggests that this actually improves the overall latency of
cgroup_rstat_flush_locked() (at least on the tested HW). So I don't
think we need to do something like this unless a regression is
observed.

> 
> Just a suggestion.
>
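
As an illustration of the trade-off debated above, here is a minimal,
hedged sketch in userspace C of the batching idea Mateusz suggests:
yield the flush lock once every BATCH cpus instead of on every
iteration. NR_CPUS, BATCH, flush_one_cpu() and flush_all_cpus() are
hypothetical stand-ins chosen for illustration; this models only the
locking pattern, not the kernel's kernel/cgroup/rstat.c code.

/*
 * Illustrative userspace model only -- not kernel code.
 * flush_one_cpu(), BATCH and NR_CPUS are hypothetical stand-ins.
 * Build with: gcc -O2 -pthread sketch.c
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_CPUS	400
#define BATCH	8	/* cpus flushed per lock hold (Mateusz's "say, 8") */

static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;
static long per_cpu_stat[NR_CPUS];
static long total;

/* Stand-in for folding one cpu's pending stats into the global total. */
static void flush_one_cpu(int cpu)
{
	total += per_cpu_stat[cpu];
	per_cpu_stat[cpu] = 0;
}

static void flush_all_cpus(void)
{
	pthread_mutex_lock(&flush_lock);
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		flush_one_cpu(cpu);

		/*
		 * Yield the lock every BATCH cpus: with BATCH == 1 this is
		 * close to the per-cpu unlock/relock of the patch, and with
		 * BATCH == NR_CPUS it approaches the old behaviour of
		 * holding the lock for the whole iteration.
		 */
		if ((cpu + 1) % BATCH == 0 && cpu + 1 < NR_CPUS) {
			pthread_mutex_unlock(&flush_lock);
			sched_yield();	/* let other lock waiters run */
			pthread_mutex_lock(&flush_lock);
		}
	}
	pthread_mutex_unlock(&flush_lock);
}

int main(void)
{
	long expect = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		per_cpu_stat[cpu] = cpu;
		expect += cpu;
	}
	flush_all_cpus();
	printf("flushed total %ld (expected %ld)\n", total, expect);
	return 0;
}

Whether any BATCH > 1 pays off on real hardware is exactly what the
thread is weighing: it trades a longer lock/irqs-off window for fewer
lock round-trips, while the measurements quoted above suggest the
per-cpu variant already improves both irqs-off time and flush latency.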