Message-ID: <693177aa-16f0-432a-baa4-3b862dee48c9@kernel.org>
Date: Mon, 12 Aug 2024 16:41:26 +0200
Subject: Re: [PATCH V9 1/2] cgroup/rstat: Avoid flushing if there is an ongoing overlapping flush
To: tj@kernel.org, cgroups@vger.kernel.org, yosryahmed@google.com, shakeel.butt@linux.dev
Cc: hannes@cmpxchg.org, lizefan.x@bytedance.com, longman@redhat.com, kernel-team@cloudflare.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <172245504313.3147408.12138439169548255896.stgit@firesoul>
From: Jesper Dangaard Brouer
In-Reply-To: <172245504313.3147408.12138439169548255896.stgit@firesoul>

On 31/07/2024 21.47, Jesper Dangaard Brouer wrote:
> This patch reintroduces and generalizes the "stats_flush_ongoing" concept
> to avoid redundant flushes if there is an ongoing flush, addressing lock
> contention issues on the global cgroup rstat lock.
>
> At Cloudflare, we observed significant performance degradation due to
> lock contention on the rstat lock, primarily caused by kswapd. The
> specific mem_cgroup_flush_stats() call inlined in shrink_node, which
> takes the rstat lock, is particularly problematic.
>
> On our 12 NUMA node machines, each with a kswapd kthread per NUMA node, we
> noted severe lock contention on the rstat lock, causing 12 CPUs to waste
> cycles spinning every time kswapd runs. Fleet-wide stats (/proc/N/schedstat)
> for kthreads revealed that we are burning an average of 20,000 CPU cores
> fleet-wide on kswapd, primarily due to spinning on the rstat lock.
>
> Here's a brief overview of the issue:
> - __alloc_pages_slowpath calls wake_all_kswapds, causing all kswapdN threads
> to wake up simultaneously.
> - The kswapd thread invokes shrink_node (via balance_pgdat), triggering the
> cgroup rstat flush operation as part of its work.
> - balance_pgdat() has a NULL value in target_mem_cgroup, causing
> mem_cgroup_flush_stats() to flush with root_mem_cgroup.
>
> The kernel previously addressed this with a "stats_flush_ongoing" concept,
> which was removed in commit 7d7ef0a4686a ("mm: memcg: restore subtree stats
> flushing"). This patch reintroduces and generalizes the concept to apply to
> all users of cgroup rstat, not just memcg.
>
> If there is an ongoing rstat flush and the current cgroup is a descendant, a
> new flush is unnecessary. To ensure callers still receive updated stats,
> they wait for the ongoing flush to complete before returning, but with a
> timeout, as stats may already be inaccurate due to continuous updates.
>
> Lock yielding causes complications for ongoing flushers. Therefore, we limit
> which cgroup can become ongoing flusher to top-level, as lock yielding
> allows others to obtain the lock without being the ongoing flusher, leading
> to a situation where a cgroup that isn't a descendant obtains the lock via
> yielding. Thus, we prefer an ongoing flusher with many descendants. If and
> when the lock yielding is removed, such as when changing this to a mutex,
> we can simplify this code.
>
> This change significantly reduces lock contention, especially in
> environments with multiple NUMA nodes, thereby improving overall system
> performance.
>
> Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing").
> Signed-off-by: Jesper Dangaard Brouer
> ---
> V9:
> - Fix logic for 'already_contended'
> - Didn't address the refcnt suggestion, need more input from experts
> - I'll deploy this ASAP to my production experiment hosts and report back
>
> V8: https://lore.kernel.org/all/172139415725.3084888.13770938453137383953.stgit@firesoul
> V7: https://lore.kernel.org/all/172070450139.2992819.13210624094367257881.stgit@firesoul
> V6: https://lore.kernel.org/all/172052399087.2357901.4955042377343593447.stgit@firesoul/
> V5: https://lore.kernel.org/all/171956951930.1897969.8709279863947931285.stgit@firesoul/
> V4: https://lore.kernel.org/all/171952312320.1810550.13209360603489797077.stgit@firesoul/
> V3: https://lore.kernel.org/all/171943668946.1638606.1320095353103578332.stgit@firesoul/
> V2: https://lore.kernel.org/all/171923011608.1500238.3591002573732683639.stgit@firesoul/
> V1: https://lore.kernel.org/all/171898037079.1222367.13467317484793748519.stgit@firesoul/
> RFC: https://lore.kernel.org/all/171895533185.1084853.3033751561302228252.stgit@firesoul/
>
>
> include/linux/cgroup-defs.h |    2 +
> kernel/cgroup/rstat.c       |  113 ++++++++++++++++++++++++++++++++++++++-----
> 2 files changed, 103 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
> index b36690ca0d3f..a33b37514c29 100644
> --- a/include/linux/cgroup-defs.h
> +++ b/include/linux/cgroup-defs.h
> @@ -548,6 +548,8 @@ struct cgroup {
> #ifdef CONFIG_BPF_SYSCALL
> struct bpf_local_storage __rcu *bpf_cgrp_storage;
> #endif
> + /* completion queue for cgrp_rstat_ongoing_flusher */
> + struct completion flush_done;
>
> /* All ancestors including self */
> struct cgroup *ancestors[];
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index fb8b49437573..463f9807ec7e 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -2,6 +2,7 @@
> #include "cgroup-internal.h"
>
> #include
> +#include
>
> #include
> #include
> @@ -11,6 +12,7 @@
>
> static DEFINE_SPINLOCK(cgroup_rstat_lock);
> static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> +static struct cgroup *cgrp_rstat_ongoing_flusher = NULL;
>
> static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
>
> @@ -279,17 +281,32 @@ __bpf_hook_end();
> * value -1 is used when obtaining the main lock else this is the CPU
> * number processed last.
> */
> -static inline void __cgroup_rstat_lock(struct cgroup *cgrp, int cpu_in_loop)
> +static inline bool __cgroup_rstat_trylock(struct cgroup *cgrp, int cpu_in_loop)
> +{
> + bool locked;
> +
> + locked = spin_trylock_irq(&cgroup_rstat_lock);
> + if (!locked)
> + trace_cgroup_rstat_lock_contended(cgrp, cpu_in_loop, true);
> + else
> + trace_cgroup_rstat_locked(cgrp, cpu_in_loop, false);
> +
> + return locked;
> +}
> +
> +static inline void __cgroup_rstat_lock(struct cgroup *cgrp, int cpu_in_loop,
> + bool already_contended)
> __acquires(&cgroup_rstat_lock)
> {
> - bool contended;
> + bool locked = false;
>
> - contended = !spin_trylock_irq(&cgroup_rstat_lock);
> - if (contended) {
> - trace_cgroup_rstat_lock_contended(cgrp, cpu_in_loop, contended);
> + if (!already_contended) /* Skip trylock if already contended */
> + locked = __cgroup_rstat_trylock(cgrp, cpu_in_loop);
> +
> + if (!locked) {
> spin_lock_irq(&cgroup_rstat_lock);
> + trace_cgroup_rstat_locked(cgrp, cpu_in_loop, true);
> }
> - trace_cgroup_rstat_locked(cgrp, cpu_in_loop, contended);
> }
>
> static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
> @@ -299,6 +316,71 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
> spin_unlock_irq(&cgroup_rstat_lock);
> }
>
> +#define MAX_WAIT msecs_to_jiffies(100)
> +/**
> + * cgroup_rstat_trylock_flusher - Trylock that checks for on ongoing flusher
> + * @cgrp: target cgroup
> + *
> + * Function return value follow trylock semantics. Returning true when lock is
> + * obtained. Returning false when not locked and it detected flushing can be
> + * skipped as another ongoing flusher took care of the flush.
> + */
> +static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
> +{
> + struct cgroup *ongoing;
> + bool locked;
> +
> + /*
> + * Check if ongoing flusher is already taking care of this, if
> + * we are a descendant skip work, but wait for ongoing flusher
> + * to complete work.
> + */
> +retry:
> + ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
> + if (ongoing && cgroup_is_descendant(cgrp, ongoing)) {
> + wait_for_completion_interruptible_timeout(
> + &ongoing->flush_done, MAX_WAIT);
> + /* TODO: Add tracepoint here */
> + return false;
> + }
> +
> + locked = __cgroup_rstat_trylock(cgrp, -1);

State "A": Obtaining lock here. (Remember that userspace can attach a
tracepoint program to the locked event.)

> + if (!locked) {
> + /* Contended: Handle losing race for ongoing flusher */

State "B": Seeing lock was held by someone else.

> + if (!ongoing && READ_ONCE(cgrp_rstat_ongoing_flusher))
> + goto retry;
> +

This race handling code isn't good enough.

> + __cgroup_rstat_lock(cgrp, -1, true);
> + }
> + /*
> + * Obtained lock, record this cgrp as the ongoing flusher.
> + * Due to lock yielding, we might obtain lock while another
> + * ongoing flusher (that isn't a parent) owns ongoing_flusher.
> + */
> + if (!READ_ONCE(cgrp_rstat_ongoing_flusher)) {
> + /*
> + * Limit to top-level as lock yielding allows others to obtain
> + * lock without being ongoing_flusher. Leading to cgroup that
> + * isn't descendant to obtain lock via yielding. So, prefer
> + * ongoing_flusher with many descendants.
> + */
> + if (cgrp->level < 2) {
> + reinit_completion(&cgrp->flush_done);
> + WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);

The time between State "A" and writing cgrp_rstat_ongoing_flusher can
easily be affected by loading BPF tracepoint programs. E.g. when I load a
bpftrace prog on the cgroup_rstat_locked tracepoint in production, the
kswapd thundering-herd problem comes back, because the BPF prog delays
State "A" from updating ongoing_flusher, so State "B" observes
cgrp_rstat_ongoing_flusher as unset in the race check.

(Note: due to lock yielding, it is possible that the lock is held while no
ongoing_flusher is set, so we cannot simply goto retry whenever we observe
the lock held without an ongoing flusher.)

In [V2] we had a race-free implementation that used cmpxchg(). I will go
back to the drawing board with V2 as the inspiration.

[V2] https://lore.kernel.org/all/171923011608.1500238.3591002573732683639.stgit@firesoul/

> + }
> + }
> + return true;
> +}
> +
> +static void cgroup_rstat_unlock_flusher(struct cgroup *cgrp)
> +{
> + if (cgrp == READ_ONCE(cgrp_rstat_ongoing_flusher)) {
> + WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
> + complete_all(&cgrp->flush_done);
> + }
> + __cgroup_rstat_unlock(cgrp, -1);
> +}
> +
> /* see cgroup_rstat_flush() */
> static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
> __releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
> @@ -328,7 +410,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
> __cgroup_rstat_unlock(cgrp, cpu);
> if (!cond_resched())
> cpu_relax();
> - __cgroup_rstat_lock(cgrp, cpu);
> + __cgroup_rstat_lock(cgrp, cpu, false);
> }
> }
> }
> @@ -350,9 +432,11 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
> {
> might_sleep();
>
> - __cgroup_rstat_lock(cgrp, -1);
> + if (!cgroup_rstat_trylock_flusher(cgrp))
> + return;
> +
> cgroup_rstat_flush_locked(cgrp);
> - __cgroup_rstat_unlock(cgrp, -1);
> + cgroup_rstat_unlock_flusher(cgrp);
> }
>
> /**
> @@ -368,8 +452,11 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
> __acquires(&cgroup_rstat_lock)
> {
> might_sleep();
> - __cgroup_rstat_lock(cgrp, -1);
> - cgroup_rstat_flush_locked(cgrp);
> +
> + if (cgroup_rstat_trylock_flusher(cgrp))
> + cgroup_rstat_flush_locked(cgrp);
> + else
> + __cgroup_rstat_lock(cgrp, -1, false);
> }
>
> /**
> @@ -379,7 +466,7 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
> void cgroup_rstat_flush_release(struct cgroup *cgrp)
> __releases(&cgroup_rstat_lock)
> {
> - __cgroup_rstat_unlock(cgrp, -1);
> + cgroup_rstat_unlock_flusher(cgrp);
> }
>
> int cgroup_rstat_init(struct cgroup *cgrp)
> @@ -401,6 +488,8 @@ int cgroup_rstat_init(struct cgroup *cgrp)
> u64_stats_init(&rstatc->bsync);
> }
>
> + init_completion(&cgrp->flush_done);
> +
> return 0;
> }
>
>
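For reference, a rough sketch of the cmpxchg() direction (not the actual
V2 code, just the idea, reusing the names from this patch and untested):
claim ongoing_flusher atomically *before* touching the lock, so there is
never a window where the lock is held but ongoing_flusher is still unset,
and no retry logic is needed:

static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
{
        struct cgroup *ongoing;

        /* Claim ongoing_flusher *before* taking the lock, so a delay
         * (e.g. a BPF prog on the locked tracepoint) between locking and
         * publishing can no longer hide the ongoing flusher from others.
         */
        if (cgrp->level < 2) {
                reinit_completion(&cgrp->flush_done);
                if (cmpxchg(&cgrp_rstat_ongoing_flusher, NULL, cgrp) == NULL) {
                        __cgroup_rstat_lock(cgrp, -1, false);
                        return true;
                }
        }

        /* Someone else is the ongoing flusher: wait if it covers us */
        ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
        if (ongoing && cgroup_is_descendant(cgrp, ongoing)) {
                wait_for_completion_interruptible_timeout(&ongoing->flush_done,
                                                          MAX_WAIT);
                return false;
        }

        /* Not covered: flush under the lock without becoming ongoing_flusher */
        __cgroup_rstat_lock(cgrp, -1, false);
        return true;
}

The unlock side can stay as in this patch (cgroup_rstat_unlock_flusher()
only clears and completes when cgrp == ongoing_flusher), and lock-yield
holders simply never become ongoing_flusher.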