From: Yosry Ahmed
Date: Mon, 8 Aug 2022 06:54:46 -0700
Subject: Re: [PATCH] mm: vmscan: fix extreme overreclaim and swap floods
To: Johannes Weiner
Cc: Andrew Morton, Mel Gorman, Hugh Dickins, Joonsoo Kim, Linux-MM,
 Linux Kernel Mailing List, Kernel Team
In-Reply-To: <20220802162811.39216-1-hannes@cmpxchg.org>
References: <20220802162811.39216-1-hannes@cmpxchg.org>
On Tue, Aug 2, 2022 at 9:28 AM Johannes Weiner wrote:
>
> During proactive reclaim, we sometimes observe severe overreclaim,
> with several thousand times more pages reclaimed than requested.
>
> This trace was obtained from shrink_lruvec() during such an instance:
>
>     prio:0 anon_cost:1141521 file_cost:7767
>     nr_reclaimed:4387406 nr_to_reclaim:1047 (or_factor:4190)
>     nr=[7161123 345 578 1111]
>
> While the reclaimer requested 4M, vmscan reclaimed close to 16G, most
> of it by swapping. These requests take over a minute, during which the
> write() to memory.reclaim is unkillably stuck inside the kernel.
>
> Digging into the source, this is caused by the proportional reclaim
> bailout logic. This code tries to resolve a fundamental conflict: to
> reclaim roughly what was requested, while also aging all LRUs fairly
> and in accordance with their size, swappiness, refault rates etc. The
> way it attempts fairness is that once the reclaim goal has been
> reached, it stops scanning the LRUs with the smaller remaining scan
> targets, and adjusts the remainder of the bigger LRUs according to how
> much of the smaller LRUs was scanned. It then finishes scanning that
> remainder regardless of the reclaim goal.
>
> This works fine if priority levels are low and the LRU lists are
> comparable in size. However, in this instance, the cgroup that is
> targeted by proactive reclaim has almost no files left - they've
> already been squeezed out by proactive reclaim earlier - and the
> remaining anon pages are hot. Anon rotations cause the priority level
> to drop to 0, which results in reclaim targeting all of anon (a lot)
> and all of file (almost nothing). By the time reclaim decides to bail,
> it has scanned most or all of the file target, and therefore must also
> scan most or all of the enormous anon target. This target is thousands
> of times larger than the reclaim goal, thus causing the overreclaim.
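
(Quick sanity check on the numbers in the trace, assuming 4K pages; this
is throwaway userspace arithmetic, not kernel code, and the per-LRU
split is just read off the nr=[] line above:)

    /* Throwaway userspace arithmetic for the trace above (4K pages assumed). */
    #include <stdio.h>

    int main(void)
    {
            unsigned long long page = 4096;
            unsigned long long nr_to_reclaim = 1047;           /* requested             */
            unsigned long long nr_reclaimed  = 4387406;        /* actually reclaimed    */
            unsigned long long anon_target   = 7161123 + 345;  /* anon entries of nr=[] */
            unsigned long long file_target   = 578 + 1111;     /* file entries of nr=[] */

            printf("requested:   %llu MiB\n", nr_to_reclaim * page >> 20); /* ~4     */
            printf("reclaimed:   %llu MiB\n", nr_reclaimed * page >> 20);  /* ~17000 */
            printf("overreclaim: %llux\n", nr_reclaimed / nr_to_reclaim);  /* ~4190  */
            printf("anon target: %llu MiB, file target: %llu MiB\n",
                   anon_target * page >> 20, file_target * page >> 20);    /* ~28000 vs ~6 */
            return 0;
    }

So the anon scan target alone works out to roughly 28G against a ~4M
request, which lines up with the "close to 16G" reclaimed figure.
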
>
> The bailout code hasn't changed in years, why is this failing now?
> The most likely explanations are two other recent changes in anon
> reclaim:
>
> 1. Before the series starting with commit 5df741963d52 ("mm: fix LRU
>    balancing effect of new transparent huge pages"), the VM was
>    overall relatively reluctant to swap at all, even if swap was
>    configured. This means the LRU balancing code didn't come into play
>    as often as it does now, and mostly in high pressure situations
>    where pronounced swap activity wouldn't be as surprising.
>
> 2. For historic reasons, shrink_lruvec() loops on the scan targets of
>    all LRU lists except the active anon one, meaning it would bail if
>    the only remaining pages to scan were active anon - even if there
>    were a lot of them.
>
>    Before the series starting with commit ccc5dc67340c ("mm/vmscan:
>    make active/inactive ratio as 1:1 for anon lru"), most anon pages
>    would live on the active LRU; the inactive one would contain only a
>    handful of preselected reclaim candidates. After the series, anon
>    gets aged similarly to file, and the inactive list is the default
>    for new anon pages as well, making it often the much bigger list.
>
>    As a result, the VM is now more likely to actually finish large
>    anon targets than before.
>
> Change the code such that only one SWAP_CLUSTER_MAX-sized nudge toward
> the larger LRU lists is made before bailing out on a met reclaim goal.
>
> This fixes the extreme overreclaim problem.
>
> Fairness is more subtle and harder to evaluate. No obvious misbehavior
> was observed on the test workload, in any case. Conceptually, fairness
> should primarily be a cumulative effect from regular, lower priority
> scans. Once the VM is in trouble and needs to escalate scan targets to
> make forward progress, fairness needs to take a backseat. This is also
> acknowledged by the myriad exceptions in get_scan_count(). This patch
> makes fairness decrease gradually, as it keeps fairness work static
> over increasing priority levels with growing scan targets. This should
> make more sense - although we may have to re-visit the exact values.
>
> Signed-off-by: Johannes Weiner
> ---
>  mm/vmscan.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f7d9a683e3a7..1cc0c6666787 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2897,8 +2897,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>  	enum lru_list lru;
>  	unsigned long nr_reclaimed = 0;
>  	unsigned long nr_to_reclaim = sc->nr_to_reclaim;
> +	bool proportional_reclaim;
>  	struct blk_plug plug;
> -	bool scan_adjusted;
>
>  	get_scan_count(lruvec, sc, nr);
>
> @@ -2916,8 +2916,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>  	 * abort proportional reclaim if either the file or anon lru has already
>  	 * dropped to zero at the first pass.
>  	 */
> -	scan_adjusted = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
> -			 sc->priority == DEF_PRIORITY);
> +	proportional_reclaim = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
> +				sc->priority == DEF_PRIORITY);
>
>  	blk_start_plug(&plug);
>  	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
> @@ -2937,7 +2937,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>
>  		cond_resched();
>
> -		if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
> +		if (nr_reclaimed < nr_to_reclaim || proportional_reclaim)
>  			continue;
>
>  		/*
> @@ -2988,8 +2988,6 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>  		nr_scanned = targets[lru] - nr[lru];
>  		nr[lru] = targets[lru] * (100 - percentage) / 100;
>  		nr[lru] -= min(nr[lru], nr_scanned);
> -
> -		scan_adjusted = true;

Thanks for the great analysis of the problem! I have a question here.
This fixes the overreclaim problem for proactive reclaim (and most
other scenarios), but what about the case where proportional_reclaim
(aka scan_adjusted) is set before we start shrinking lrus: global
direct reclaim on DEF_PRIORITY? If we hit a memcg that has very few
file pages and a ton of anon pages in this scenario (or vice versa),
wouldn't we still overreclaim and possibly stall unnecessarily? Or am
I missing something here?

>  	}
>  	blk_finish_plug(&plug);
>  	sc->nr_reclaimed += nr_reclaimed;
> --
> 2.37.1
>
>
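
(To spell out the scenario I mean, here is a toy userspace model of the
loop's early-continue check -- made-up numbers, scanned pages standing
in for reclaimed pages, and obviously not the real shrink_lruvec():)

    /*
     * Toy model of the bail-out check in shrink_lruvec(), only to
     * illustrate the question above. When proportional_reclaim is true
     * from the start (global direct reclaim at DEF_PRIORITY), the
     * "continue" always fires, so the loop only stops once the per-LRU
     * targets are drained, no matter how small nr_to_reclaim is.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define SWAP_CLUSTER_MAX 32UL

    int main(void)
    {
            unsigned long nr_anon = 25000, nr_file = 1700;  /* made-up skewed targets */
            unsigned long nr_to_reclaim = 32, nr_scanned = 0;
            bool proportional_reclaim = true;   /* global direct reclaim at DEF_PRIORITY */

            while (nr_anon || nr_file) {
                    unsigned long batch;

                    batch = nr_anon > SWAP_CLUSTER_MAX ? SWAP_CLUSTER_MAX : nr_anon;
                    nr_anon -= batch;
                    nr_scanned += batch;

                    batch = nr_file > SWAP_CLUSTER_MAX ? SWAP_CLUSTER_MAX : nr_file;
                    nr_file -= batch;
                    nr_scanned += batch;

                    /* same shape as the check in the patch */
                    if (nr_scanned < nr_to_reclaim || proportional_reclaim)
                            continue;       /* always taken in this scenario */

                    break;                  /* proportional bail-out never runs */
            }

            printf("wanted %lu pages, scanned %lu\n", nr_to_reclaim, nr_scanned);
            return 0;
    }

In this toy version the continue short-circuits the bail-out every
iteration, so the whole skewed target gets scanned even though the
request was tiny - that is the behavior I am asking about.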