Subject: Re: [PATCH v1] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes
From: Jesper Dangaard Brouer
Date: Mon, 24 Jun 2024 12:49:20 +0200
To: Waiman Long, tj@kernel.org, cgroups@vger.kernel.org, yosryahmed@google.com, shakeel.butt@linux.dev
Cc: hannes@cmpxchg.org, lizefan.x@bytedance.com, kernel-team@cloudflare.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <171898037079.1222367.13467317484793748519.stgit@firesoul> <2de2850c-c844-4a75-884a-18d552fcb846@redhat.com>
In-Reply-To: <2de2850c-c844-4a75-884a-18d552fcb846@redhat.com>

On 21/06/2024 18.08, Waiman Long wrote:
>
> On 6/21/24 10:32, Jesper Dangaard Brouer wrote:
>> Avoid lock contention on the global cgroup rstat lock caused by kswapd
>> starting on all NUMA nodes simultaneously. At Cloudflare, we observed
>> massive issues due to kswapd and the specific mem_cgroup_flush_stats()
>> call inlined in shrink_node, which takes the rstat lock.
>>
>> On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
>> we noted severe lock contention on the rstat lock. This contention
>> causes 12 CPUs to waste cycles spinning every time kswapd runs.
>> Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
>> burning an average of 20,000 CPU cores fleet-wide on kswapd, primarily
>> due to spinning on the rstat lock.
>>
>> To help reviewers follow the code: When the Per-CPU-Pages (PCP)
>> freelist is empty, __alloc_pages_slowpath calls wake_all_kswapds(),
>> causing all kswapdN threads to wake up simultaneously. The kswapd
>> thread invokes shrink_node (via balance_pgdat), triggering the cgroup
>> rstat flush operation as part of its work. This results in kernel
>> self-induced rstat lock contention by waking up all kswapd threads
>> simultaneously. Leveraging this detail: balance_pgdat() has a NULL
>> target_mem_cgroup, which causes mem_cgroup_flush_stats() to flush with
>> root_mem_cgroup.
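
[ Aside, to make the call path above easier to follow: a simplified
  sketch, not the exact upstream code. kswapd_reclaim_sketch() is a
  made-up name standing in for the balance_pgdat()/shrink_node() path. ]

	/*
	 * Illustration only: why every kswapdN ends up flushing the
	 * root memcg and contending on the global cgroup_rstat_lock.
	 */
	static void kswapd_reclaim_sketch(pg_data_t *pgdat)
	{
		/* balance_pgdat() -> shrink_node() runs with
		 * sc->target_mem_cgroup == NULL for kswapd ...     */
		struct mem_cgroup *memcg = NULL;

		/* ... and a NULL memcg makes mem_cgroup_flush_stats()
		 * flush root_mem_cgroup, taking cgroup_rstat_lock.  */
		mem_cgroup_flush_stats(memcg);
	}
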
>>
>> To resolve the kswapd issue, we generalized the "stats_flush_ongoing"
>> concept to apply to all users of cgroup rstat, not just memcg. This
>> concept was originally reverted in commit 7d7ef0a4686a ("mm: memcg:
>> restore subtree stats flushing"). If there is an ongoing rstat flush,
>> limited to the root cgroup, the flush is skipped. This is effective as
>> kswapd operates on the root tree, sufficiently mitigating the
>> thundering herd problem.
>>
>> This lowers contention on the global rstat lock, although limited to
>> the root cgroup. Flushing cgroup subtrees can still lead to lock
>> contention.
>>
>> Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
>> Signed-off-by: Jesper Dangaard Brouer
>> ---
>>   include/linux/cgroup.h |    5 +++++
>>   kernel/cgroup/rstat.c  |   28 ++++++++++++++++++++++++++++
>>   2 files changed, 33 insertions(+)
>>
>> diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
>> index 2150ca60394b..ad41cca5c3b6 100644
>> --- a/include/linux/cgroup.h
>> +++ b/include/linux/cgroup.h
>> @@ -499,6 +499,11 @@ static inline struct cgroup *cgroup_parent(struct cgroup *cgrp)
>>       return NULL;
>>   }
>> +static inline bool cgroup_is_root(struct cgroup *cgrp)
>> +{
>> +    return cgroup_parent(cgrp) == NULL;
>> +}
>> +
>>   /**
>>    * cgroup_is_descendant - test ancestry
>>    * @cgrp: the cgroup to be tested
>> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
>> index fb8b49437573..5aba95e92d31 100644
>> --- a/kernel/cgroup/rstat.c
>> +++ b/kernel/cgroup/rstat.c
>> @@ -11,6 +11,7 @@
>>   static DEFINE_SPINLOCK(cgroup_rstat_lock);
>>   static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
>> +static atomic_t root_rstat_flush_ongoing = ATOMIC_INIT(0);
>>   static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
>> @@ -350,8 +351,25 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
>>   {
>>       might_sleep();
>> +    /*
>> +     * This avoids thundering herd problem on global rstat lock. When an
>> +     * ongoing flush of the entire tree is in progress, then skip flush.
>> +     */
>> +    if (atomic_read(&root_rstat_flush_ongoing))
>> +        return;
>> +
>> +    /* Grab right to be ongoing flusher, return if losing race */
>> +    if (cgroup_is_root(cgrp) &&
>> +        atomic_xchg(&root_rstat_flush_ongoing, 1))
>> +        return;
>> +
>>       __cgroup_rstat_lock(cgrp, -1);
>> +
>>       cgroup_rstat_flush_locked(cgrp);
>> +
>> +    if (cgroup_is_root(cgrp))
>> +        atomic_set(&root_rstat_flush_ongoing, 0);
>> +
>>       __cgroup_rstat_unlock(cgrp, -1);
>>   }
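
[ Aside on the hunk above: the idea is that the first root-level flusher
  wins the atomic_xchg() gate and everyone else backs off while the root
  flush is ongoing. Below is a tiny user-space C11 model of that gate,
  purely to illustrate the winner/loser semantics. It deliberately
  ignores the root/non-root split in the real patch (non-root flushers
  only do the read-and-skip check), and the function names are made up.
  The trade-off is the one described in the changelog: a caller that
  backs off returns without flushing and accepts slightly stale stats. ]

	#include <stdatomic.h>
	#include <stdbool.h>

	static atomic_int root_flush_ongoing;

	/* Returns true only for the single caller that wins the gate
	 * and should perform the (root) flush. */
	static bool try_become_root_flusher(void)
	{
		/* Someone is already flushing the whole tree: skip,
		 * their flush covers us. */
		if (atomic_load(&root_flush_ongoing))
			return false;

		/* atomic_exchange() returns the previous value, so only
		 * the caller that flips 0 -> 1 wins; losers see 1. */
		return atomic_exchange(&root_flush_ongoing, 1) == 0;
	}

	/* The winner clears the flag once its flush is done. */
	static void finish_root_flush(void)
	{
		atomic_store(&root_flush_ongoing, 0);
	}
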
>> @@ -362,13 +380,20 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
>>    * Flush stats in @cgrp's subtree and prevent further flushes.  Must be
>>    * paired with cgroup_rstat_flush_release().
>>    *
>> + * Current invariant, not called with root cgrp.
>> + *
>>    * This function may block.
>>    */
>>   void cgroup_rstat_flush_hold(struct cgroup *cgrp)
>>       __acquires(&cgroup_rstat_lock)
>>   {
>>       might_sleep();
>> +
>>       __cgroup_rstat_lock(cgrp, -1);
>> +
>> +    if (atomic_read(&root_rstat_flush_ongoing))
>> +        return;
>> +
>>       cgroup_rstat_flush_locked(cgrp);
>>   }
>> @@ -379,6 +404,9 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
>>   void cgroup_rstat_flush_release(struct cgroup *cgrp)
>>       __releases(&cgroup_rstat_lock)
>>   {
>> +    if (cgroup_is_root(cgrp))
>> +        atomic_set(&root_rstat_flush_ongoing, 0);
>> +
>>       __cgroup_rstat_unlock(cgrp, -1);
>>   }
>
> Since both cgroup_rstat_flush_hold() and cgroup_rstat_flush_release()
> are not called with root cgroup, the cgroup_rstat_flush_release() hunk
> is essentially dead code.
>
Yes, the cgroup_rstat_flush_release() hunk is essentially dead code. I
will send a V2 with this code removed.

--Jesper