Date: Mon, 5 Feb 2024 20:40:15 +0100
From: Michal Hocko <mhocko@suse.com>
To: "T.J. Mercier"
Cc: Johannes Weiner, Roman Gushchin, Shakeel Butt, Muchun Song,
	Andrew Morton, Efly Young, android-mm@google.com, yuzhao@google.com,
	mkoutny@suse.com, Yosry Ahmed, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm: memcg: Use larger batches for proactive reclaim
References: <20240202233855.1236422-1-tjmercier@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On Mon 05-02-24 11:29:49, T.J. Mercier wrote:
> On Mon, Feb 5, 2024 at 2:40 AM Michal Hocko wrote:
> >
> > On Fri 02-02-24 23:38:54, T.J. Mercier wrote:
> > > Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> > > reclaim") we passed the number of pages for the reclaim request directly
> > > to try_to_free_mem_cgroup_pages, which could lead to significant
> > > overreclaim. After 0388536ac291 the number of pages was limited to a
> > > maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
> > > However such a small batch size caused a regression in reclaim
> > > performance due to many more reclaim start/stop cycles inside
> > > memory_reclaim.
> >
> > You have mentioned that in one of the previous emails, but it is good to
> > state what the source of that overhead is, for future reference.
>
> I can add a sentence about the restart cost being amortized over more
> pages with a large batch size. It covers things like repeatedly
> flushing stats, walking the tree, evaluating protection limits, etc.
>
> > > Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> > > nodes and cgroups over which the pages are spread. As such, the bigger
> > > the request, the bigger the absolute overreclaim error. Historic
> > > in-kernel users of reclaim have used fixed, small-sized requests to
> > > approach an appropriate reclaim rate over time. When we reclaim a user
> > > request of arbitrary size, use decaying batch sizes to manage error while
> > > maintaining reasonable throughput.
> >
> > These numbers are with MGLRU or the default reclaim implementation?
>
> These numbers are for both. root uses the memcg LRU (MGLRU was
> enabled), and /uid_0 does not.

Thanks, it would be nice to spell that out in the changelog.

> > > root - full reclaim       pages/sec   time (sec)
> > > pre-0388536ac291      :     68047       10.46
> > > post-0388536ac291     :     13742        inf
> > > (reclaim-reclaimed)/4 :     67352       10.51
> > >
> > > /uid_0 - 1G reclaim       pages/sec   time (sec)   overreclaim (MiB)
> > > pre-0388536ac291      :    258822        1.12           107.8
> > > post-0388536ac291     :    105174        2.49             3.5
> > > (reclaim-reclaimed)/4 :    233396        1.12            -7.4
> > >
> > > /uid_0 - full reclaim     pages/sec   time (sec)
> > > pre-0388536ac291      :     72334        7.09
> > > post-0388536ac291     :     38105       14.45
> > > (reclaim-reclaimed)/4 :     72914        6.96
> > >
> > > Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> > > Signed-off-by: T.J. Mercier
> > > Reviewed-by: Yosry Ahmed
> > > Acked-by: Johannes Weiner
> > >
> > > ---
> > > v3: Formatting fixes per Yosry Ahmed and Johannes Weiner. No functional
> > > changes.
> > > v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
> > >
> > >  mm/memcontrol.c | 6 ++++--
> > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 46d8d02114cf..f6ab61128869 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -6976,9 +6976,11 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> > >  		if (!nr_retries)
> > >  			lru_add_drain_all();
> > >
> > > +		/* Will converge on zero, but reclaim enforces a minimum */
> > > +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> >
> > This doesn't fit into the existing coding style. I do not think there is
> > a strong reason to go against it here.
>
> There's been some back and forth here. You'd prefer to move this to
> the top of the while loop, under the declaration of reclaimed? It's
> farther from its use there, but it does match the existing style in
> the file better.

This is not something I deeply care about, but generally it is better
not to mix styles unless there is a clear win. If you want to save one
LOC you can simply move the assignment up a couple of lines; otherwise
keep it close to its use and add a separate declaration at the top.

> > > +
> > >  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > > -					min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > > -					GFP_KERNEL, reclaim_options);
> > > +					batch_size, GFP_KERNEL, reclaim_options);
> >
> > Also, with the increased reclaim target, do we need something like this?
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 4f9c854ce6cc..94794cf5ee9f 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1889,7 +1889,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >
> >  		/* We are about to die and free our memory. Return now. */
> >  		if (fatal_signal_pending(current))
> > -			return SWAP_CLUSTER_MAX;
> > +			return sc->nr_to_reclaim;
> >  	}
> >
> >  	lru_add_drain();
> > >
> > >  		if (!reclaimed && !nr_retries--)
> > >  			return -EAGAIN;
> > --

> This is interesting, but I don't think it's closely related to this
> change. This section looks like it was added to delay OOM kills due to
> an apparent lack of reclaim progress when pages are isolated and the
> direct reclaimer is scheduled out. A couple of things:
>
> In the context of proactive reclaim, current is not really undergoing
> reclaim due to memory pressure. It's initiated from userspace. So
> whether it has a fatal signal pending or not doesn't seem like it
> should influence the return value of shrink_inactive_list for some
> probably unrelated process. It seems more straightforward to me to
> return 0, and add another fatal-signal-pending check to the caller
> (shrink_lruvec) to bail out early (dealing with OOM kill avoidance
> there if necessary) instead of waiting to accumulate fake
> SWAP_CLUSTER_MAX values from shrink_inactive_list.

The point of this code is to bail out early when the caller has a fatal
signal pending, e.g. a SIGTERM sent to the reclaiming process for
whatever reason. The bail-out is tuned for SWAP_CLUSTER_MAX requests, as
you can see. Your patch increases the reclaim target, which means the
bail-out would no longer work properly: the dying task would keep
looping through reclaim without doing any useful work instead of
actually bailing out.

> As far as changing the value: SWAP_CLUSTER_MAX puts the final value of
> sc->nr_reclaimed pretty close to sc->nr_to_reclaim. Since there's a
> loop for each evictable lru in shrink_lruvec, we could end up with 4 *
> sc->nr_to_reclaim in sc->nr_reclaimed if we switched to
> sc->nr_to_reclaim from SWAP_CLUSTER_MAX... an even bigger lie. So I
> don't think we'd want to do that.
The actual number returned from the reclaim is not really important
here: memory_reclaim would break out of its loop and userspace would
never see the result.
--
Michal Hocko
SUSE Labs