From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Jun 2024 16:47:34 +0200
MIME-Version: 1.0
Subject: Re: [PATCH v1] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes
To: tj@kernel.org, cgroups@vger.kernel.org, yosryahmed@google.com, shakeel.butt@linux.dev
Cc: hannes@cmpxchg.org, lizefan.x@bytedance.com, longman@redhat.com, kernel-team@cloudflare.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <171898037079.1222367.13467317484793748519.stgit@firesoul>
From: Jesper Dangaard Brouer <hawk@kernel.org>
In-Reply-To: <171898037079.1222367.13467317484793748519.stgit@firesoul>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hold off applying this patch, as the test kernel didn't boot with this
patch applied on top of TJ's cgroup tree (on commit ec9eeb89e60d86).
I don't know if this is related to this patch or not.

--Jesper

On 21/06/2024 16.32, Jesper Dangaard Brouer wrote:
> Avoid lock contention on the global cgroup rstat lock caused by kswapd
> starting on all NUMA nodes simultaneously. At Cloudflare, we observed
> massive issues due to kswapd and the specific mem_cgroup_flush_stats()
> call inlined in shrink_node, which takes the rstat lock.
>
> On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
> we noted severe lock contention on the rstat lock. This contention
> causes 12 CPUs to waste cycles spinning every time kswapd runs.
> Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
> burning an average of 20,000 CPU cores fleet-wide on kswapd, primarily
> due to spinning on the rstat lock.
>
> To help reviewers follow the code: when the Per-CPU-Pages (PCP) freelist
> is empty, __alloc_pages_slowpath calls wake_all_kswapds(), causing all
> kswapdN threads to wake up simultaneously.
> The kswapd thread invokes shrink_node (via balance_pgdat), triggering
> the cgroup rstat flush operation as part of its work. This results in
> kernel-self-induced rstat lock contention by waking up all kswapd
> threads simultaneously. Leveraging this detail: balance_pgdat() has a
> NULL value in target_mem_cgroup, which causes mem_cgroup_flush_stats()
> to flush with root_mem_cgroup.
>
> To resolve the kswapd issue, we generalized the "stats_flush_ongoing"
> concept to apply to all users of cgroup rstat, not just memcg. This
> concept was originally reverted in commit 7d7ef0a4686a ("mm: memcg:
> restore subtree stats flushing"). If there is an ongoing rstat flush,
> limited to the root cgroup, the flush is skipped. This is effective as
> kswapd operates on the root tree, sufficiently mitigating the thundering
> herd problem.
>
> This lowers contention on the global rstat lock, although limited to the
> root cgroup. Flushing cgroup subtrees can still lead to lock contention.
>
> Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
> Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
> ---
>  include/linux/cgroup.h |    5 +++++
>  kernel/cgroup/rstat.c  |   28 ++++++++++++++++++++++++++++
>  2 files changed, 33 insertions(+)
>
> diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
> index 2150ca60394b..ad41cca5c3b6 100644
> --- a/include/linux/cgroup.h
> +++ b/include/linux/cgroup.h
> @@ -499,6 +499,11 @@ static inline struct cgroup *cgroup_parent(struct cgroup *cgrp)
>  	return NULL;
>  }
>  
> +static inline bool cgroup_is_root(struct cgroup *cgrp)
> +{
> +	return cgroup_parent(cgrp) == NULL;
> +}
> +
>  /**
>   * cgroup_is_descendant - test ancestry
>   * @cgrp: the cgroup to be tested
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index fb8b49437573..5aba95e92d31 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -11,6 +11,7 @@
>  
>  static DEFINE_SPINLOCK(cgroup_rstat_lock);
>  static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> +static atomic_t root_rstat_flush_ongoing = ATOMIC_INIT(0);
>  
>  static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
>  
> @@ -350,8 +351,25 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
>  {
>  	might_sleep();
>  
> +	/*
> +	 * This avoids thundering herd problem on global rstat lock. When an
> +	 * ongoing flush of the entire tree is in progress, then skip flush.
> +	 */
> +	if (atomic_read(&root_rstat_flush_ongoing))
> +		return;
> +
> +	/* Grab right to be ongoing flusher, return if losing race */
> +	if (cgroup_is_root(cgrp) &&
> +	    atomic_xchg(&root_rstat_flush_ongoing, 1))
> +		return;
> +
>  	__cgroup_rstat_lock(cgrp, -1);
> +
>  	cgroup_rstat_flush_locked(cgrp);
> +
> +	if (cgroup_is_root(cgrp))
> +		atomic_set(&root_rstat_flush_ongoing, 0);
> +
>  	__cgroup_rstat_unlock(cgrp, -1);
>  }
>  
> @@ -362,13 +380,20 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
>   * Flush stats in @cgrp's subtree and prevent further flushes. Must be
>   * paired with cgroup_rstat_flush_release().
>   *
> + * Current invariant: not called with the root cgrp.
> + *
>   * This function may block.
>   */
>  void cgroup_rstat_flush_hold(struct cgroup *cgrp)
>  	__acquires(&cgroup_rstat_lock)
>  {
>  	might_sleep();
> +
>  	__cgroup_rstat_lock(cgrp, -1);
> +
> +	if (atomic_read(&root_rstat_flush_ongoing))
> +		return;
> +
>  	cgroup_rstat_flush_locked(cgrp);
>  }
>  
> @@ -379,6 +404,9 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
>  void cgroup_rstat_flush_release(struct cgroup *cgrp)
>  	__releases(&cgroup_rstat_lock)
>  {
> +	if (cgroup_is_root(cgrp))
> +		atomic_set(&root_rstat_flush_ongoing, 0);
> +
>  	__cgroup_rstat_unlock(cgrp, -1);
>  }
>
>
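
For anyone who wants to see the gate in isolation: below is a minimal
userspace sketch of the single-ongoing-flusher pattern the quoted patch
relies on. It is not kernel code and uses made-up names (root_flush,
fake_flush, flush_lock, kswapd_like, NTHREADS); it only models the idea
that one atomic flag, claimed with an exchange, lets a single thread do
the expensive flush while the other simultaneously woken threads skip it.

/* Illustrative only: userspace model of the "ongoing root flush" gate.
 * All identifiers here are invented for the example; the mutex merely
 * stands in for the global rstat lock. Build with: cc sketch.c -pthread */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int root_flush_ongoing = 0;  /* models root_rstat_flush_ongoing */
static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;

static void fake_flush(int id)
{
	/* Pretend to walk per-CPU trees and flush stats. */
	printf("thread %d: flushing\n", id);
	usleep(1000);
}

static void root_flush(int id)
{
	/* Skip if someone else is already flushing the whole tree. */
	if (atomic_load(&root_flush_ongoing))
		return;

	/* Race for the right to be the ongoing flusher; losers bail out. */
	if (atomic_exchange(&root_flush_ongoing, 1))
		return;

	pthread_mutex_lock(&flush_lock);
	fake_flush(id);
	atomic_store(&root_flush_ongoing, 0);  /* clear before dropping lock */
	pthread_mutex_unlock(&flush_lock);
}

static void *kswapd_like(void *arg)
{
	root_flush((int)(long)arg);
	return NULL;
}

int main(void)
{
	enum { NTHREADS = 12 };  /* one "kswapd" per NUMA node, as in the report */
	pthread_t t[NTHREADS];

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, kswapd_like, (void *)i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	return 0;
}

With 12 threads starting at once, typically only one "flushing" line is
printed; the rest bail out either at the initial read or when losing the
exchange, which is the behaviour the patch aims for with the kswapdN
threads contending on the global rstat lock.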