From: "T.J. Mercier" <tjmercier@google.com>
Date: Mon, 5 Feb 2024 12:26:10 -0800
Subject: Re: [PATCH v3] mm: memcg: Use larger batches for proactive reclaim
To: Michal Hocko
Cc: Johannes Weiner, Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton, Efly Young, android-mm@google.com, yuzhao@google.com, mkoutny@suse.com, Yosry Ahmed, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org

On Mon, Feb 5, 2024 at 11:40 AM Michal Hocko wrote:
>
> On Mon 05-02-24 11:29:49, T.J. Mercier wrote:
> > On Mon, Feb 5, 2024 at 2:40 AM Michal Hocko wrote:
> > >
> > > On Fri 02-02-24 23:38:54, T.J. Mercier wrote:
> > > > Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> > > > reclaim") we passed the number of pages for the reclaim request directly
> > > > to try_to_free_mem_cgroup_pages, which could lead to significant
> > > > overreclaim. After 0388536ac291 the number of pages was limited to a
> > > > maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
> > > > However such a small batch size caused a regression in reclaim
> > > > performance due to many more reclaim start/stop cycles inside
> > > > memory_reclaim.
> > >
> > > You have mentioned that in one of the previous emails but it is good to
> > > mention what is the source of that overhead for future reference.
> >
> > I can add a sentence about the restart cost being amortized over more
> > pages with a large batch size. It covers things like repeatedly
> > flushing stats, walking the tree, evaluating protection limits, etc.
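FWIW, here is roughly how that amortization plays out. This is a
hypothetical standalone simulation, not kernel code: it assumes a 1G
request in 4K pages, assumes each batch reclaims exactly batch_size,
and models the minimum that reclaim enforces internally as
SWAP_CLUSTER_MAX:

	#include <stdio.h>

	int main(void)
	{
		unsigned long nr_to_reclaim = 262144;	/* 1G request in 4K pages */
		unsigned long nr_reclaimed = 0;
		int batches = 0;

		while (nr_reclaimed < nr_to_reclaim) {
			/* same decay as the patch: 1/4 of what remains */
			unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;

			/* model the SWAP_CLUSTER_MAX floor enforced inside reclaim */
			if (batch_size < 32)
				batch_size = 32;

			nr_reclaimed += batch_size;
			batches++;
		}

		/* prints 31 here, vs. 262144/32 = 8192 fixed-size batches */
		printf("batches: %d\n", batches);
		return 0;
	}

So the restart costs above get paid a few dozen times instead of ~8k
times for a 1G request, while the batch sizes (65536, 49152, ...) still
decay toward the target to bound the absolute overreclaim error.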
> > > > Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> > > > nodes and cgroups over which the pages are spread. As such, the bigger
> > > > the request, the bigger the absolute overreclaim error. Historic
> > > > in-kernel users of reclaim have used fixed, small sized requests to
> > > > approach an appropriate reclaim rate over time. When we reclaim a user
> > > > request of arbitrary size, use decaying batch sizes to manage error while
> > > > maintaining reasonable throughput.
> > >
> > > These numbers are with MGLRU or the default reclaim implementation?
> >
> > These numbers are for both. root uses the memcg LRU (MGLRU was
> > enabled), and /uid_0 does not.
>
> Thanks, it would be nice to outline that in the changelog.

Ok, I'll update the table below for each case.

> > > > root - full reclaim       pages/sec   time (sec)
> > > > pre-0388536ac291      :     68047       10.46
> > > > post-0388536ac291     :     13742        inf
> > > > (reclaim-reclaimed)/4 :     67352       10.51
> > > >
> > > > /uid_0 - 1G reclaim       pages/sec   time (sec)   overreclaim (MiB)
> > > > pre-0388536ac291      :    258822        1.12          107.8
> > > > post-0388536ac291     :    105174        2.49            3.5
> > > > (reclaim-reclaimed)/4 :    233396        1.12           -7.4
> > > >
> > > > /uid_0 - full reclaim     pages/sec   time (sec)
> > > > pre-0388536ac291      :     72334        7.09
> > > > post-0388536ac291     :     38105       14.45
> > > > (reclaim-reclaimed)/4 :     72914        6.96
> > > >
> > > > Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> > > > Signed-off-by: T.J. Mercier
> > > > Reviewed-by: Yosry Ahmed
> > > > Acked-by: Johannes Weiner
> > > >
> > > > ---
> > > > v3: Formatting fixes per Yosry Ahmed and Johannes Weiner. No functional
> > > > changes.
> > > > v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
> > > >
> > > >  mm/memcontrol.c | 6 ++++--
> > > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index 46d8d02114cf..f6ab61128869 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -6976,9 +6976,11 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> > > >               if (!nr_retries)
> > > >                       lru_add_drain_all();
> > > >
> > > > +             /* Will converge on zero, but reclaim enforces a minimum */
> > > > +             unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> > >
> > > This doesn't fit into the existing coding style. I do not think there is
> > > a strong reason to go against it here.
> >
> > There's been some back and forth here. You'd prefer to move this to
> > the top of the while loop, under the declaration of reclaimed? It's
> > farther from its use there, but it does match the existing style in
> > the file better.
>
> This is not something I deeply care about but generally it is better to
> not mix styles unless that is a clear win. If you want to save one LOC
> you can just move it up - just a couple of lines up, or you can keep the
> definition closer and have a separate declaration.

I find it nicer to have to search as little as possible for both the
declaration (type) and definition, but I am not attached to it either
and it's not worth annoying anyone over here. Let's move it up like
Yosry suggested initially.
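I.e. something like this, if I understand the suggestion right
(untested sketch, with the unchanged parts of the loop elided):

	while (nr_reclaimed < nr_to_reclaim) {
		/* Will converge on zero, but reclaim enforces a minimum */
		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
		unsigned long reclaimed;

		...

		reclaimed = try_to_free_mem_cgroup_pages(memcg,
					batch_size, GFP_KERNEL, reclaim_options);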
> > > > +
> > > >               reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > > > -                                     min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > > > -                                     GFP_KERNEL, reclaim_options);
> > > > +                                     batch_size, GFP_KERNEL, reclaim_options);
> > >
> > > Also with the increased reclaim target do we need something like this?
> > >
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 4f9c854ce6cc..94794cf5ee9f 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -1889,7 +1889,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> > >
> > >                 /* We are about to die and free our memory. Return now. */
> > >                 if (fatal_signal_pending(current))
> > > -                       return SWAP_CLUSTER_MAX;
> > > +                       return sc->nr_to_reclaim;
> > >         }
> > >
> > >                 lru_add_drain();
> > >
> > > >               if (!reclaimed && !nr_retries--)
> > > >                       return -EAGAIN;
> > > > --
> >
> > This is interesting, but I don't think it's closely related to this
> > change. This section looks like it was added to delay OOM kills due to
> > apparent lack of reclaim progress when pages are isolated and the
> > direct reclaimer is scheduled out. A couple things:
> >
> > In the context of proactive reclaim, current is not really undergoing
> > reclaim due to memory pressure. It's initiated from userspace. So
> > whether it has a fatal signal pending or not doesn't seem like it
> > should influence the return value of shrink_inactive_list for some
> > probably unrelated process. It seems more straightforward to me to
> > return 0, and add another fatal signal pending check to the caller
> > (shrink_lruvec) to bail out early (dealing with OOM kill avoidance
> > there if necessary) instead of waiting to accumulate fake
> > SWAP_CLUSTER_MAX values from shrink_inactive_list.
>
> The point of this code is to bail out early if the caller has fatal
> signals pending. That could be SIGTERM sent to the process performing
> the reclaim for whatever reason. The bail out is tuned for
> SWAP_CLUSTER_MAX as you can see, and your patch is increasing the reclaim
> target, which means that the bailout wouldn't work properly and you wouldn't
> get any useful work done but not really bail out.

It's increasing to 1/4 of what it was 6 months ago before 0388536ac291
("mm:vmscan: fix inaccurate reclaim during proactive reclaim"), and this
hasn't changed since then, so if anything the bailout should happen
quicker than originally tuned for.

> > As far as changing the value, SWAP_CLUSTER_MAX puts the final value of
> > sc->nr_reclaimed pretty close to sc->nr_to_reclaim. Since there's a
> > loop for each evictable lru in shrink_lruvec, we could end up with 4 *
> > sc->nr_to_reclaim in sc->nr_reclaimed if we switched to
> > sc->nr_to_reclaim from SWAP_CLUSTER_MAX... an even bigger lie. So I
> > don't think we'd want to do that.
>
> The actual number returned from the reclaim is not really important
> because memory_reclaim would break out of the loop and userspace would
> never see the result.
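For reference, the exit in question, abbreviated from memory_reclaim as
patched above (error paths elided); an inflated value coming back from
shrink_inactive_list only makes the loop terminate sooner:

	while (nr_reclaimed < nr_to_reclaim) {
		...
		reclaimed = try_to_free_mem_cgroup_pages(memcg,
					batch_size, GFP_KERNEL, reclaim_options);

		if (!reclaimed && !nr_retries--)
			return -EAGAIN;

		nr_reclaimed += reclaimed;
	}
	...
	return nbytes;	/* userspace sees success or an errno, never the count */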
This makes sense, but it makes me uneasy. I can't point to anywhere
this would cause a problem currently (except maybe a super unlikely
overflow of nr_reclaimed), but it feels like a setup for future
unintended consequences.