From: Yosry Ahmed <yosryahmed@google.com>
Date: Sat, 15 Jun 2024 17:28:55 -0700
Subject: Re: [PATCH] memcg: use ratelimited stats flush in the reclaim
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
    Jesper Dangaard Brouer, Yu Zhao, Muchun Song, Facebook Kernel Team,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20240615081257.3945587-1-shakeel.butt@linux.dev>

On Sat, Jun 15, 2024 at 1:13 AM Shakeel Butt <shakeel.butt@linux.dev> wrote:
>
> The Meta prod is seeing a large amount of stalls in memcg stats flush
> from the memcg reclaim code path. At the moment, this specific callsite
> is doing a synchronous memcg stats flush. The rstat flush is an
> expensive and time-consuming operation, so concurrent reclaimers will
> busywait on the lock, potentially for a long time. Actually this issue is
> not unique to Meta and has been observed by Cloudflare [1] as well. For
> the Cloudflare case, the stalls were due to contention between kswapd
> threads running on their 8 numa node machines, which does not make sense
> as rstat flush is global and a flush from one kswapd thread should be
> sufficient for all. Simply replace the synchronous flush with the
> ratelimited one.
>
> One may raise a concern about potentially using 2 sec stale (at worst)
> stats for heuristics like the desirable inactive:active ratio and
> preferring inactive file pages over anon pages, but these specific
> heuristics do not require very precise stats and are also ignored under
> severe memory pressure. This patch has been running on the Meta fleet
> for more than a month and we have not observed any issues. Please note
> that MGLRU is not impacted by this issue at all as it avoids rstat
> flushing completely.
>
> Link: https://lore.kernel.org/all/6ee2518b-81dd-4082-bdf5-322883895ffc@kernel.org [1]
> Signed-off-by: Shakeel Butt
> ---
>  mm/vmscan.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c0429fd6c573..bda4f92eba71 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2263,7 +2263,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
>          * Flush the memory cgroup stats, so that we read accurate per-memcg
>          * lruvec stats for heuristics.
>          */
> -       mem_cgroup_flush_stats(sc->target_mem_cgroup);
> +       mem_cgroup_flush_stats_ratelimited(sc->target_mem_cgroup);

I think you already know my opinion about this one :) I don't like it
at all, and I will explain why below. I know it may be a necessary
evil, but I would like us to make sure there is no other option before
going forward with this.

Unfortunately, I am travelling this week, so I probably won't be able
to follow up on this for a week or so, but I will try to lay down my
thoughts as much as I can.

Why don't I like this?

- From a high level, I don't like the approach of replacing
problematic flushing calls with the ratelimited version. It strikes me
as a whack-a-mole approach that mitigates the symptoms of a larger
problem.

- With the added thresholding code, a flush is only done if there is a
significant number of pending updates in the relevant subtree.
Choosing the ratelimited approach intentionally ignores a significant
change in stats (although arguably it could be a change in irrelevant
stats).

- Reclaim is an iterative process, so not updating the stats on every
retry is very counterintuitive. We are retrying reclaim using the same
stats and heuristics used by a previous iteration, essentially
dismissing the effects of those previous iterations.

- Nondeterministic behavior like this is very difficult to debug if it
causes problems. The missing updates in the last 2s (or whatever the
period is) could be of any magnitude; we may be ignoring GBs of
free/allocated memory (see the sketch right after this list). What's
worse, if it causes any problems, tracing them back to this flush will
be extremely difficult.
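Here is that sketch. This is roughly what the ratelimited variant does as
I remember it from mm/memcontrol.c, simplified and from memory rather than
a verbatim copy (FLUSH_TIME and flush_last_time are the internals I am
recalling, so treat the details as approximate):

/*
 * Simplified sketch, from memory: FLUSH_TIME is the 2s period of the
 * async flusher, and flush_last_time is bumped by whoever actually
 * flushes (the periodic worker or a direct flusher).
 */
#define FLUSH_TIME      (2UL * HZ)
static u64 flush_last_time;

void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
{
        /* Skip entirely unless the periodic flusher is a full cycle late */
        if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2 * FLUSH_TIME))
                mem_cgroup_flush_stats(memcg);
}

Note that the skip is purely time-based: an arbitrarily large set of
pending updates can be ignored for the whole window, which is exactly the
indeterminism that worries me in the reclaim retry loop.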
What can we do?

- Try to make more fundamental improvements to the flushing code (for
memcgs or cgroups in general). The per-memcg flushing thresholding is an
example of this. For example, if flushing is taking too long because we
are flushing all subsystems, it may make sense to have separate rstat
trees for separate subsystems.

One other thing we can try is adding a mutex in the memcg flushing path
(I sketch the idea in the P.S. below). I initially had this in my subtree
flushing series [1], but dropped it as we thought it was not very useful.
Currently in mem_cgroup_flush_stats(), we check if there are enough
pending updates to flush, then we call cgroup_rstat_flush() and spin on
the lock. It is possible that, while we spin, those pending updates we
observed have already been flushed. If we add back the mutex like in [1],
then once we acquire the mutex we check again to make sure there are
still enough stats to flush.

[1] https://lore.kernel.org/all/20231010032117.1577496-6-yosryahmed@google.com/

- Try to avoid the need for flushing in this path. I am not sure what
approach MGLRU uses to avoid the flush, but if we can do something
similar for classic LRUs that would be preferable. I am guessing MGLRU
may be maintaining its own stats outside of the rstat framework.

- Try to figure out if one (or a few) update paths are regressing all
flushers. If one specific stat or update path is causing most of the
updates, we can try to fix that instead, especially if it is a counter
that is continuously being increased and decreased (so the net change is
not as high as we think).

At the end of the day, all of the above may not work, and we may have to
live with just using the ratelimited approach. But I *really* hope we can
go the other way: fix things on a more fundamental level and eventually
drop the ratelimited variants completely.

Just my 2c. Sorry for the long email :)
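P.S. To make the mutex idea above a bit more concrete, the shape I have in
mind is roughly the following. This is only a sketch of the idea, not the
code from [1] verbatim; the function name is made up for illustration, and
memcg_vmstats_needs_flush() is the thresholding helper as I remember it
being named:

static DEFINE_MUTEX(memcg_stats_flush_mutex);

static void mem_cgroup_flush_stats_serialized(struct mem_cgroup *memcg)
{
        /* Cheap check first: are there enough pending updates to bother? */
        if (!memcg_vmstats_needs_flush(memcg->vmstats))
                return;

        mutex_lock(&memcg_stats_flush_mutex);
        /*
         * Someone else may have flushed while we were waiting on the
         * mutex. Recheck so that a waiter does not redo an expensive
         * rstat flush for updates that were already folded in.
         */
        if (memcg_vmstats_needs_flush(memcg->vmstats))
                cgroup_rstat_flush(memcg->css.cgroup);
        mutex_unlock(&memcg_stats_flush_mutex);
}

The point is just the recheck after acquiring the mutex: back-to-back
flushers (e.g. concurrent reclaimers) would then collapse into a single
expensive flush instead of each paying for work that has already been done.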