From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 27 Jun 2024 04:32:03 -0700
Subject: Re: [PATCH V3 2/2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes
To: Jesper Dangaard Brouer
Cc: tj@kernel.org, cgroups@vger.kernel.org, shakeel.butt@linux.dev,
 hannes@cmpxchg.org, lizefan.x@bytedance.com, longman@redhat.com,
 kernel-team@cloudflare.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <171943667611.1638606.4158229160024621051.stgit@firesoul>
 <171943668946.1638606.1320095353103578332.stgit@firesoul>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Thu, Jun 27, 2024 at 3:33 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Wed, Jun 26, 2024 at 2:18 PM Jesper Dangaard Brouer wrote:
> >
> > Avoid lock contention on the global cgroup rstat lock caused by kswapd
> > starting on all NUMA nodes simultaneously. At Cloudflare, we observed
> > massive issues due to kswapd and the specific mem_cgroup_flush_stats()
> > call inlined in shrink_node, which takes the rstat lock.
> >
> > On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
> > we noted severe lock contention on the rstat lock. This contention
> > causes 12 CPUs to waste cycles spinning every time kswapd runs.
> > Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
> > burning an average of 20,000 CPU cores fleet-wide on kswapd, primarily
> > due to spinning on the rstat lock.
> >
> > To help reviewers follow the code: when the Per-CPU-Pages (PCP)
> > freelist is empty, __alloc_pages_slowpath calls wake_all_kswapds(),
> > causing all kswapdN threads to wake up simultaneously. Each kswapd
> > thread invokes shrink_node (via balance_pgdat), triggering the cgroup
> > rstat flush operation as part of its work. This results in
> > kernel-self-induced rstat lock contention. The patch leverages this
> > detail: balance_pgdat() has a NULL target_mem_cgroup, which causes
> > mem_cgroup_flush_stats() to flush with root_mem_cgroup.
> >
> > To avoid this kind of thundering herd problem, the kernel previously
> > had a "stats_flush_ongoing" concept, but this was removed as part of
> > commit 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing").
> > This patch reintroduces and generalizes the concept to apply to all
> > users of cgroup rstat, not just memcg.
> >
> > If there is an ongoing rstat flush, and the current cgroup is a
> > descendant, then it is unnecessary to do the flush. For callers to
> > still see updated stats, wait for the ongoing flusher to complete
> > before returning, but with a timeout, as the stats are already
> > inaccurate given that updaters keep running.
> >
> > Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
> > Signed-off-by: Jesper Dangaard Brouer
> > ---
> > V2: https://lore.kernel.org/all/171923011608.1500238.3591002573732683639.stgit@firesoul/
> > V1: https://lore.kernel.org/all/171898037079.1222367.13467317484793748519.stgit@firesoul/
> > RFC: https://lore.kernel.org/all/171895533185.1084853.3033751561302228252.stgit@firesoul/
> >
> >  kernel/cgroup/rstat.c |   61 ++++++++++++++++++++++++++++++++++++++++--------
> >  1 file changed, 50 insertions(+), 11 deletions(-)
> >
> > diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> > index 2a42be3a9bb3..f21e6b1109a4 100644
> > --- a/kernel/cgroup/rstat.c
> > +++ b/kernel/cgroup/rstat.c
> > @@ -2,6 +2,7 @@
> >  #include "cgroup-internal.h"
> >
> >  #include <linux/sched/cputime.h>
> > +#include <linux/completion.h>
> >
> >  #include <linux/bpf.h>
> >  #include <linux/btf.h>
> > @@ -11,6 +12,8 @@
> >
> >  static DEFINE_SPINLOCK(cgroup_rstat_lock);
> >  static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> > +static struct cgroup *cgrp_rstat_ongoing_flusher;
> > +static DECLARE_COMPLETION(cgrp_rstat_flusher_done);
> >
> >  static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
> >
> > @@ -346,6 +349,44 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
> >         }
> >  }
> >
> > +#define MAX_WAIT msecs_to_jiffies(100)
> > +/* Trylock helper that also checks for an ongoing flusher */
> > +static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
> > +{
> > +retry:
> > +       bool locked = __cgroup_rstat_trylock(cgrp, -1);
> > +       if (!locked) {
> > +               struct cgroup *cgrp_ongoing;
> > +
> > +               /* Lock is contended, let's check if ongoing flusher is
> > +                * already taking care of this, if we are a descendant.
> > +                */
> > +               cgrp_ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
> > +               if (!cgrp_ongoing)
> > +                       goto retry;
> > +
> > +               if (cgroup_is_descendant(cgrp, cgrp_ongoing)) {
> > +                       wait_for_completion_interruptible_timeout(
> > +                               &cgrp_rstat_flusher_done, MAX_WAIT);
>
> Thanks for sending this!
>
> The reason I suggested that the completion live in struct cgroup is
> that there is a chance here that the flush completes and another,
> irrelevant flush starts between reading cgrp_rstat_ongoing_flusher and
> calling wait_for_completion_interruptible_timeout().
>
> This will cause the caller to wait for an irrelevant flush, which may
> be fine because today the caller would wait for the lock anyway. Just
> mentioning this in case you think this may happen often enough to be a
> problem.

Actually, I think this can happen beyond the window I described above.
I think it's possible that a thread waits for the flush, then gets
woken up when complete_all() is called, but another flusher calls
reinit_completion() immediately. The woken-up thread will observe
completion->done == 0 and go to sleep again.

I think most of these cases can be avoided if we make the completion
per cgroup. It is still possible to wait for more flushes than
necessary, but only if they are for the same cgroup.
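
To make the per-cgroup suggestion concrete, a minimal sketch of the
trylock helper with the completion moved into struct cgroup could look
like the following. This is an illustration, not the posted series: it
assumes a "struct completion rstat_flush_done" field has been added to
struct cgroup and initialized at cgroup creation, the unlock helper is
hypothetical, and memory ordering plus lifetime of the ongoing-flusher
pointer are glossed over.

	/* Sketch only: assumes cgrp->rstat_flush_done exists and is
	 * initialized with init_completion() when the cgroup is created.
	 */
	static struct cgroup *cgrp_rstat_ongoing_flusher;

	static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
	{
		struct cgroup *ongoing;

	retry:
		if (__cgroup_rstat_trylock(cgrp, -1)) {
			/* We own the lock; re-arm our completion and
			 * advertise ourselves as the ongoing flusher.
			 */
			reinit_completion(&cgrp->rstat_flush_done);
			WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
			return true;
		}

		ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
		if (!ongoing)
			goto retry;

		if (cgroup_is_descendant(cgrp, ongoing)) {
			/* The completion belongs to a specific cgroup, so
			 * a wakeup here comes from a flush that covered
			 * @cgrp.  A racing reinit_completion() can only
			 * make us wait for another flush of that same
			 * cgroup, never an unrelated one.
			 */
			wait_for_completion_interruptible_timeout(
				&ongoing->rstat_flush_done, MAX_WAIT);
			return false;
		}
		goto retry;
	}

	static void cgroup_rstat_unlock_flusher(struct cgroup *cgrp)
	{
		WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
		complete_all(&cgrp->rstat_flush_done);
		__cgroup_rstat_unlock(cgrp, -1);
	}

Compared to the global DECLARE_COMPLETION() in the posted patch, the
per-cgroup field trades a small amount of memory in struct cgroup for
tying each wakeup to the subtree that was actually flushed.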