Date: Mon, 5 Feb 2024 21:36:40 +0100
From: Michal Hocko <mhocko@suse.com>
To: "T.J. Mercier"
Cc: Johannes Weiner, Roman Gushchin, Shakeel Butt, Muchun Song,
	Andrew Morton, Efly Young, android-mm@google.com, yuzhao@google.com,
	mkoutny@suse.com, Yosry Ahmed, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm: memcg: Use larger batches for proactive reclaim
References: <20240202233855.1236422-1-tjmercier@google.com>

On Mon 05-02-24 12:26:10, T.J. Mercier wrote:
> On Mon, Feb 5, 2024 at 11:40 AM Michal Hocko wrote:
> >
> > On Mon 05-02-24 11:29:49, T.J. Mercier wrote:
> > > On Mon, Feb 5, 2024 at 2:40 AM Michal Hocko wrote:
> > > >
> > > > On Fri 02-02-24 23:38:54, T.J. Mercier wrote:
> > > > > Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> > > > > reclaim") we passed the number of pages for the reclaim request directly
> > > > > to try_to_free_mem_cgroup_pages, which could lead to significant
> > > > > overreclaim. After 0388536ac291 the number of pages was limited to a
> > > > > maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
> > > > > However such a small batch size caused a regression in reclaim
> > > > > performance due to many more reclaim start/stop cycles inside
> > > > > memory_reclaim.
> > > >
> > > > You have mentioned that in one of the previous emails but it is good to
> > > > mention what is the source of that overhead for future reference.
> > >
> > > I can add a sentence about the restart cost being amortized over more
> > > pages with a large batch size. It covers things like repeatedly
> > > flushing stats, walking the tree, evaluating protection limits, etc.
> > >
> > > > > Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> > > > > nodes and cgroups over which the pages are spread. As such, the bigger
> > > > > the request, the bigger the absolute overreclaim error. Historic
> > > > > in-kernel users of reclaim have used fixed, small sized requests to
> > > > > approach an appropriate reclaim rate over time. When we reclaim a user
> > > > > request of arbitrary size, use decaying batch sizes to manage error while
> > > > > maintaining reasonable throughput.
> > > >
> > > > Are these numbers with MGLRU or the default reclaim implementation?
> > >
> > > These numbers are for both. root uses the memcg LRU (MGLRU was
> > > enabled), and /uid_0 does not.
> >
> > Thanks, it would be nice to outline that in the changelog.
>
> Ok, I'll update the table below for each case.
> > > > > root - full reclaim       pages/sec   time (sec)
> > > > > pre-0388536ac291       :    68047        10.46
> > > > > post-0388536ac291      :    13742         inf
> > > > > (reclaim-reclaimed)/4  :    67352        10.51
> > > > >
> > > > > /uid_0 - 1G reclaim       pages/sec   time (sec)   overreclaim (MiB)
> > > > > pre-0388536ac291       :   258822         1.12          107.8
> > > > > post-0388536ac291      :   105174         2.49            3.5
> > > > > (reclaim-reclaimed)/4  :   233396         1.12           -7.4
> > > > >
> > > > > /uid_0 - full reclaim     pages/sec   time (sec)
> > > > > pre-0388536ac291       :    72334         7.09
> > > > > post-0388536ac291      :    38105        14.45
> > > > > (reclaim-reclaimed)/4  :    72914         6.96
> > > > >
> > > > > Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> > > > > Signed-off-by: T.J. Mercier
> > > > > Reviewed-by: Yosry Ahmed
> > > > > Acked-by: Johannes Weiner
> > > > >
> > > > > ---
> > > > > v3: Formatting fixes per Yosry Ahmed and Johannes Weiner. No functional
> > > > >     changes.
> > > > > v2: Simplify the request size calculation per Johannes Weiner and
> > > > >     Michal Koutný
> > > > >
> > > > >  mm/memcontrol.c | 6 ++++--
> > > > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > > index 46d8d02114cf..f6ab61128869 100644
> > > > > --- a/mm/memcontrol.c
> > > > > +++ b/mm/memcontrol.c
> > > > > @@ -6976,9 +6976,11 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> > > > >  		if (!nr_retries)
> > > > >  			lru_add_drain_all();
> > > > >
> > > > > +		/* Will converge on zero, but reclaim enforces a minimum */
> > > > > +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> > > >
> > > > This doesn't fit into the existing coding style. I do not think there is
> > > > a strong reason to go against it here.
> > >
> > > There's been some back and forth here. You'd prefer to move this to
> > > the top of the while loop, under the declaration of reclaimed?
> > > It's farther from its use there, but it does match the existing style
> > > in the file better.
> >
> > This is not something I deeply care about, but generally it is better to
> > not mix styles unless that is a clear win. If you want to save one LOC
> > you can just move it up - just a couple of lines up, or you can keep the
> > definition closer and have a separate declaration.
>
> I find it nicer to have to search as little as possible for both the
> declaration (type) and definition, but I am not attached to it either
> and it's not worth annoying anyone over here. Let's move it up like
> Yosry suggested initially.
>
> > > > > +
> > > > >  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > > > > -			min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > > > > -			GFP_KERNEL, reclaim_options);
> > > > > +			batch_size, GFP_KERNEL, reclaim_options);
> > > >
> > > > Also, with the increased reclaim target do we need something like this?
> > > >
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index 4f9c854ce6cc..94794cf5ee9f 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -1889,7 +1889,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> > > >
> > > >  		/* We are about to die and free our memory. Return now. */
> > > >  		if (fatal_signal_pending(current))
> > > > -			return SWAP_CLUSTER_MAX;
> > > > +			return sc->nr_to_reclaim;
> > > >  	}
> > > >
> > > >  	lru_add_drain();
> > > >
> > > > >  		if (!reclaimed && !nr_retries--)
> > > > >  			return -EAGAIN;
> > > > > --
> > >
> > > This is interesting, but I don't think it's closely related to this
> > > change. This section looks like it was added to delay OOM kills due to
> > > apparent lack of reclaim progress when pages are isolated and the
> > > direct reclaimer is scheduled out. A couple of things:
> > >
> > > In the context of proactive reclaim, current is not really undergoing
> > > reclaim due to memory pressure. It's initiated from userspace.
> > > So whether it has a fatal signal pending or not doesn't seem like it
> > > should influence the return value of shrink_inactive_list for some
> > > probably unrelated process. It seems more straightforward to me to
> > > return 0, and add another fatal signal pending check to the caller
> > > (shrink_lruvec) to bail out early (dealing with OOM kill avoidance
> > > there if necessary) instead of waiting to accumulate fake
> > > SWAP_CLUSTER_MAX values from shrink_inactive_list.
> >
> > The point of this code is to bail out early if the caller has fatal
> > signals pending. That could be a SIGTERM sent to the process performing
> > the reclaim for whatever reason. The bail out is tuned for
> > SWAP_CLUSTER_MAX as you can see, and your patch increases the reclaim
> > target, which means the bailout wouldn't work properly: you wouldn't
> > get any useful work done, but you wouldn't really bail out either.
>
> It's increasing to 1/4 of what it was 6 months ago before 0388536ac291
> ("mm:vmscan: fix inaccurate reclaim during proactive reclaim") and
> this hasn't changed since then, so if anything the bailout should
> happen quicker than originally tuned for.

Yes, this wasn't handled properly back then either.

> > > As far as changing the value, SWAP_CLUSTER_MAX puts the final value of
> > > sc->nr_reclaimed pretty close to sc->nr_to_reclaim. Since there's a
> > > loop for each evictable lru in shrink_lruvec, we could end up with 4 *
> > > sc->nr_to_reclaim in sc->nr_reclaimed if we switched to
> > > sc->nr_to_reclaim from SWAP_CLUSTER_MAX... an even bigger lie. So I
> > > don't think we'd want to do that.
> >
> > The actual number returned from the reclaim is not really important
> > because memory_reclaim would break out of the loop and userspace would
> > never see the result.
>
> This makes sense, but it makes me uneasy.
> I can't point to anywhere this would cause a problem currently (except
> maybe a super unlikely overflow of nr_reclaimed), but it feels like a
> setup for future unintended consequences.

Think of something like

	timeout $TIMEOUT echo $TARGET > $MEMCG_PATH/memory.reclaim

where timeout acts as a stop gap if the reclaim cannot finish in TIMEOUT.
-- 
Michal Hocko
SUSE Labs