Date: Fri, 28 Jun 2024 11:39:39 +0200
Message-ID: <7ecdd625-37a0-49f1-92fc-eef9791fbe5b@kernel.org>
Subject: Re: [PATCH V4 2/2] cgroup/rstat: Avoid thundering herd problem by
 kswapd across NUMA nodes
From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Shakeel Butt
Cc: tj@kernel.org, cgroups@vger.kernel.org, yosryahmed@google.com,
 hannes@cmpxchg.org, lizefan.x@bytedance.com, longman@redhat.com,
 kernel-team@cloudflare.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <171952310959.1810550.17003659816794335660.stgit@firesoul>
 <171952312320.1810550.13209360603489797077.stgit@firesoul>
 <4n3qu75efpznkomxytm7irwfiq44hhi4hb5igjbd55ooxgmvwa@tbgmwvcqsy75>
In-Reply-To: <4n3qu75efpznkomxytm7irwfiq44hhi4hb5igjbd55ooxgmvwa@tbgmwvcqsy75>

On 28/06/2024 01.34, Shakeel Butt wrote:
> On Thu, Jun 27, 2024 at 11:18:56PM GMT, Jesper Dangaard Brouer wrote:
>> Avoid lock contention on the global cgroup rstat lock caused by kswapd
>> starting on all NUMA nodes simultaneously. At Cloudflare, we observed
>> massive issues due to kswapd and the specific mem_cgroup_flush_stats()
>> call inlined in shrink_node, which takes the rstat lock.
>>
>> On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
>> we noted severe lock contention on the rstat lock. This contention
>> causes 12 CPUs to waste cycles spinning every time kswapd runs.
>> Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
>> burning an average of 20,000 CPU cores fleet-wide on kswapd, primarily
>> due to spinning on the rstat lock.
>>
>> To help reviewer follow code: When the Per-CPU-Pages (PCP) freelist is
>> empty,
>
> Remove the "When the Per-CPU-Pages (PCP) freelist is empty" as there are
> a lot more conditions needed for waking up kswapds, which do not need to
> be explained here. Just "__alloc_pages_slowpath waking up kswapds given
> the allocation context" or similar text should suffice.
>

Agree.

>> __alloc_pages_slowpath calls wake_all_kswapds(), causing all
>> kswapdN threads to wake up simultaneously. The kswapd thread invokes
>> shrink_node (via balance_pgdat), triggering the cgroup rstat flush
>> operation as part of its work. This results in kernel self-induced rstat
>> lock contention by waking up all kswapd threads simultaneously.
>> Leveraging this detail: balance_pgdat() has a NULL value in
>> target_mem_cgroup, which causes mem_cgroup_flush_stats() to flush with
>> root_mem_cgroup.
>>
>> To avoid this kind of thundering herd problem, the kernel previously had
>> a "stats_flush_ongoing" concept, but this was removed as part of commit
>> 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing"). This patch
>> reintroduces and generalizes the concept to apply to all users of cgroup
>> rstat, not just memcg.
>>
>> If there is an ongoing rstat flush, and the current cgroup is a
>> descendant, then it is unnecessary to do the flush. For callers to still
>> see updated stats, wait for the ongoing flusher to complete before
>> returning, but add a timeout, as stats are already inaccurate given that
>> updaters keep running.
>>
>> Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
>> Signed-off-by: Jesper Dangaard Brouer
>> ---
>> V3: https://lore.kernel.org/all/171943668946.1638606.1320095353103578332.stgit@firesoul/
>> V2: https://lore.kernel.org/all/171923011608.1500238.3591002573732683639.stgit@firesoul/
>> V1: https://lore.kernel.org/all/171898037079.1222367.13467317484793748519.stgit@firesoul/
>> RFC: https://lore.kernel.org/all/171895533185.1084853.3033751561302228252.stgit@firesoul/
>>
>>  include/linux/cgroup-defs.h |    2 +
>>  kernel/cgroup/rstat.c       |   64 ++++++++++++++++++++++++++++++++++++-------
>>  2 files changed, 55 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
>> index b36690ca0d3f..a33b37514c29 100644
>> --- a/include/linux/cgroup-defs.h
>> +++ b/include/linux/cgroup-defs.h
>> @@ -548,6 +548,8 @@ struct cgroup {
>>  #ifdef CONFIG_BPF_SYSCALL
>>  	struct bpf_local_storage __rcu *bpf_cgrp_storage;
>>  #endif
>> +	/* completion queue for cgrp_rstat_ongoing_flusher */
>> +	struct completion flush_done;
>>
>>  	/* All ancestors including self */
>>  	struct cgroup *ancestors[];
>> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
>> index 2a42be3a9bb3..a98af43bdce7 100644
>> --- a/kernel/cgroup/rstat.c
>> +++ b/kernel/cgroup/rstat.c
>> @@ -2,6 +2,7 @@
>>  #include "cgroup-internal.h"
>>
>>  #include <linux/sched/cputime.h>
>> +#include <linux/completion.h>
>>
>>  #include <linux/bpf.h>
>>  #include <linux/btf.h>
>> @@ -11,6 +12,8 @@
>>
>>  static DEFINE_SPINLOCK(cgroup_rstat_lock);
>>  static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
>> +static struct cgroup *cgrp_rstat_ongoing_flusher = NULL;
>> +static DECLARE_COMPLETION(cgrp_rstat_flusher_done);
>
> cgrp_rstat_flusher_done is not needed anymore.
>

True, I already fixed this yesterday, when reading the patch email
myself.
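To recap for reviewers: with the global completion gone, the handshake
runs entirely over the per-cgroup cgrp->flush_done completion. Condensed,
it looks like this (illustrative sketch only, not the literal patch code):

	/* Flusher side, holding cgroup_rstat_lock: announce this cgrp
	 * as the ongoing flusher, flush, then wake all waiters at once.
	 */
	reinit_completion(&cgrp->flush_done);
	WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
	/* ... cgroup_rstat_flush_locked(cgrp) ... */
	WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
	complete_all(&cgrp->flush_done);

	/* Waiter side (a descendant cgroup), where cgrp_ongoing is the
	 * recorded ongoing flusher: bounded sleep, since stats stay
	 * approximate anyway while updaters keep running.
	 */
	wait_for_completion_interruptible_timeout(&cgrp_ongoing->flush_done,
						  MAX_WAIT);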
>>
>>  static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
>>
>> @@ -312,6 +315,45 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
>>  	spin_unlock_irq(&cgroup_rstat_lock);
>>  }
>>
>> +#define MAX_WAIT	msecs_to_jiffies(100)
>> +/* Trylock helper that also checks for an ongoing flusher */
>> +static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
>> +{
>> +	bool locked = __cgroup_rstat_trylock(cgrp, -1);
>> +
>> +	if (!locked) {
>> +		struct cgroup *cgrp_ongoing;
>> +
>> +		/* Lock is contended, let's check if an ongoing flusher is
>> +		 * already taking care of this, if we are a descendant.
>> +		 */
>> +		cgrp_ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
>> +		if (cgrp_ongoing && cgroup_is_descendant(cgrp, cgrp_ongoing)) {
>
> I wonder if READ_ONCE() and cgroup_is_descendant() need to happen
> within an RCU section. On a preemptible kernel, let's say we got
> preempted in between them, the flusher was unrelated and got freed
> before we get the CPU. In that case we are accessing freed memory.
>

I have to think about this some more.

>> +			wait_for_completion_interruptible_timeout(
>> +				&cgrp_ongoing->flush_done, MAX_WAIT);
>> +
>> +			return false;
>> +		}
>> +		__cgroup_rstat_lock(cgrp, -1, false);
>> +	}
>> +	/* Obtained lock, record this cgrp as the ongoing flusher */
>> +	if (!READ_ONCE(cgrp_rstat_ongoing_flusher)) {
>
> Can the above condition ever be false?
>

Yes, I think so, because I realized that cgroup_rstat_flush_locked() can
release/"yield" the lock. Thus, other CPUs/threads have a chance to call
cgroup_rstat_flush and try to become the "ongoing flusher".

With this realization, my __cgroup_rstat_trylock() "signal" for detecting
ongoing flushers is also not a good signal.

I think we should move the ongoing_flusher detection before attempting to
acquire the lock. If doing so, I'm considering adding a tracepoint after
wait_for_completion() to help us see if the code is behaving as expected
in a prod env (see the rough sketch below the next hunk).

>> +		reinit_completion(&cgrp->flush_done);
>> +		WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
>> +	}
>> +
>> +	return true; /* locked */
>> +}
>> +
>> +static void cgroup_rstat_unlock_flusher(struct cgroup *cgrp)
>> +{
>> +	/* Detect if we are the ongoing flusher */
>> +	if (cgrp == READ_ONCE(cgrp_rstat_ongoing_flusher)) {
>
> Same.

Same explanation as above.
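Roughly what I have in mind, as an untested sketch: it drops the trylock
signal entirely and puts the pointer read plus descendant check under an
RCU read section; the lifetime of cgrp_ongoing across the sleep is
exactly the open question raised above, flagged in a comment.

	static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
	{
		struct cgroup *cgrp_ongoing;

		/* Check for an ongoing flusher *before* touching the lock,
		 * as cgroup_rstat_flush_locked() can yield the lock
		 * mid-flush, making the trylock outcome a poor
		 * ongoing-flusher signal.
		 */
		rcu_read_lock();
		cgrp_ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
		if (cgrp_ongoing && cgroup_is_descendant(cgrp, cgrp_ongoing)) {
			rcu_read_unlock();
			/* Open issue: sleeping here still needs a lifetime
			 * guarantee for cgrp_ongoing (e.g. holding a cgroup
			 * reference), as an RCU read section cannot span a
			 * sleep.
			 */
			wait_for_completion_interruptible_timeout(
				&cgrp_ongoing->flush_done, MAX_WAIT);
			/* tracepoint candidate: whether/how long we waited */
			return false;
		}
		rcu_read_unlock();

		__cgroup_rstat_lock(cgrp, -1, false);

		/* Obtained lock, record this cgrp as the ongoing flusher */
		if (!READ_ONCE(cgrp_rstat_ongoing_flusher)) {
			reinit_completion(&cgrp->flush_done);
			WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
		}
		return true; /* locked */
	}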
>
>> +		WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
>> +		complete_all(&cgrp->flush_done);
>> +	}
>> +	__cgroup_rstat_unlock(cgrp, -1);
>> +}
>> +
>>  /* see cgroup_rstat_flush() */
>>  static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
>>  	__releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
>> @@ -361,18 +403,13 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
>>   */
>>  __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
>>  {
>> -	bool locked;
>> -
>>  	might_sleep();
>>
>> -	locked = __cgroup_rstat_trylock(cgrp, -1);
>> -	if (!locked) {
>> -		/* Opportunity to ongoing flush detection */
>> -		__cgroup_rstat_lock(cgrp, -1, false);
>> -	}
>> +	if (!cgroup_rstat_trylock_flusher(cgrp))
>> +		return;
>>
>>  	cgroup_rstat_flush_locked(cgrp);
>> -	__cgroup_rstat_unlock(cgrp, -1);
>> +	cgroup_rstat_unlock_flusher(cgrp);
>>  }
>>
>>  /**
>> @@ -388,8 +425,11 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
>>  	__acquires(&cgroup_rstat_lock)
>>  {
>>  	might_sleep();
>> -	__cgroup_rstat_lock(cgrp, -1, true);
>> -	cgroup_rstat_flush_locked(cgrp);
>> +
>> +	if (cgroup_rstat_trylock_flusher(cgrp))
>> +		cgroup_rstat_flush_locked(cgrp);
>> +	else
>> +		__cgroup_rstat_lock(cgrp, -1, true);
>>  }
>>
>>  /**
>> @@ -399,7 +439,7 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
>>  void cgroup_rstat_flush_release(struct cgroup *cgrp)
>>  	__releases(&cgroup_rstat_lock)
>>  {
>> -	__cgroup_rstat_unlock(cgrp, -1);
>> +	cgroup_rstat_unlock_flusher(cgrp);
>>  }
>>
>>  int cgroup_rstat_init(struct cgroup *cgrp)
>> @@ -421,6 +461,8 @@ int cgroup_rstat_init(struct cgroup *cgrp)
>>  		u64_stats_init(&rstatc->bsync);
>>  	}
>>
>> +	init_completion(&cgrp->flush_done);
>> +
>>  	return 0;
>>  }
>>
>>