Date: Thu, 4 Aug 2022 10:38:00 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Johannes Weiner
Cc: Andrew Morton, Hugh Dickins, Joonsoo Kim, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: vmscan: fix extreme overreclaim and swap floods
Message-ID: <20220804093800.yrmkcspzb35gvnfp@techsingularity.net>
References: <20220802162811.39216-1-hannes@cmpxchg.org>
In-Reply-To: <20220802162811.39216-1-hannes@cmpxchg.org>

On Tue, Aug 02, 2022 at 12:28:11PM -0400, Johannes Weiner wrote:
> During proactive reclaim, we sometimes observe severe overreclaim,
> with several thousand times more pages reclaimed than requested.
>
> This trace was obtained from shrink_lruvec() during such an instance:
>
>   prio:0 anon_cost:1141521 file_cost:7767
>   nr_reclaimed:4387406 nr_to_reclaim:1047 (or_factor:4190)
>   nr=[7161123 345 578 1111]
>
> While the reclaimer requested 4M, vmscan reclaimed close to 16G, most
> of it by swapping. These requests take over a minute, during which the
> write() to memory.reclaim is unkillably stuck inside the kernel.
>
> Digging into the source, this is caused by the proportional reclaim
> bailout logic. This code tries to resolve a fundamental conflict: to
> reclaim roughly what was requested, while also aging all LRUs fairly
> and in accordance with their size, swappiness, refault rates etc. The
> way it attempts fairness is that once the reclaim goal has been
> reached, it stops scanning the LRUs with the smaller remaining scan
> targets, and adjusts the remainder of the bigger LRUs according to how
> much of the smaller LRUs was scanned. It then finishes scanning that
> remainder regardless of the reclaim goal.
>
> This works fine if priority levels are low and the LRU lists are
> comparable in size. However, in this instance, the cgroup that is
> targeted by proactive reclaim has almost no files left - they've
> already been squeezed out by proactive reclaim earlier - and the
> remaining anon pages are hot. Anon rotations cause the priority level
> to drop to 0, which results in reclaim targeting all of anon (a lot)
> and all of file (almost nothing). By the time reclaim decides to bail,
> it has scanned most or all of the file target, and therefore must also
> scan most or all of the enormous anon target. This target is thousands
> of times larger than the reclaim goal, thus causing the overreclaim.
>
> The bailout code hasn't changed in years, so why is this failing now?
> The most likely explanations are two other recent changes in anon
> reclaim:
>
> 1. Before the series starting with commit 5df741963d52 ("mm: fix LRU
>    balancing effect of new transparent huge pages"), the VM was
>    overall relatively reluctant to swap at all, even if swap was
>    configured. This means the LRU balancing code didn't come into play
>    as often as it does now, and mostly in high pressure situations
>    where pronounced swap activity wouldn't be as surprising.
>
> 2. For historic reasons, shrink_lruvec() loops on the scan targets of
>    all LRU lists except the active anon one, meaning it would bail if
>    the only remaining pages to scan were active anon - even if there
>    were a lot of them.
>
>    Before the series starting with commit ccc5dc67340c ("mm/vmscan:
>    make active/inactive ratio as 1:1 for anon lru"), most anon pages
>    would live on the active LRU; the inactive one would contain only a
>    handful of preselected reclaim candidates. After the series, anon
>    gets aged similarly to file, and the inactive list is the default
>    for new anon pages as well, making it often the much bigger list.
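
The arithmetic in that trace is self-consistent, for what it's worth:
1047 4k pages is the 4M request, 4387406 pages is ~16.7G, and
4387406 / 1047 ~= 4190, the quoted or_factor.

To make the bailout mechanics described above concrete, below is a
standalone userspace sketch of the loop in shrink_lruvec(), seeded with
the scan targets from the trace. This is a paraphrase for illustration,
not the kernel code, and it assumes every scanned page is reclaimed,
which is plausible at priority 0:

/*
 * sim_bailout.c: userspace sketch of the proportional bailout loop
 * in shrink_lruvec(). A paraphrase for illustration, not kernel code.
 */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL

/* Same ordering as the kernel's enum lru_list */
enum { INACTIVE_ANON, ACTIVE_ANON, INACTIVE_FILE, ACTIVE_FILE, NR_LRU };

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Scan targets from the trace: nr=[7161123 345 578 1111] */
	unsigned long targets[NR_LRU] = { 7161123, 345, 578, 1111 };
	unsigned long nr[NR_LRU]      = { 7161123, 345, 578, 1111 };
	unsigned long nr_to_reclaim = 1047;	/* the 4M request */
	unsigned long nr_reclaimed = 0;
	int scan_adjusted = 0;

	/* As in the kernel, active anon alone does not keep the loop going */
	while (nr[INACTIVE_ANON] || nr[INACTIVE_FILE] || nr[ACTIVE_FILE]) {
		unsigned long nr_anon, nr_file, percentage;
		int small, big;

		/* Scan a SWAP_CLUSTER_MAX batch from each list */
		for (int lru = 0; lru < NR_LRU; lru++) {
			unsigned long batch = min_ul(nr[lru], SWAP_CLUSTER_MAX);

			nr[lru] -= batch;
			nr_reclaimed += batch;	/* toy model: scanned == reclaimed */
		}

		if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
			continue;

		/* Reclaim goal met: make one fairness adjustment */
		nr_anon = nr[INACTIVE_ANON] + nr[ACTIVE_ANON];
		nr_file = nr[INACTIVE_FILE] + nr[ACTIVE_FILE];
		if (!nr_anon || !nr_file)
			break;

		/* What share of the smaller side's target is still unscanned? */
		if (nr_file > nr_anon) {
			small = INACTIVE_ANON;
			big = INACTIVE_FILE;
			percentage = nr_anon * 100 /
				(targets[INACTIVE_ANON] + targets[ACTIVE_ANON] + 1);
		} else {
			small = INACTIVE_FILE;
			big = INACTIVE_ANON;
			percentage = nr_file * 100 /
				(targets[INACTIVE_FILE] + targets[ACTIVE_FILE] + 1);
		}

		/* Stop scanning the smaller side entirely... */
		nr[small] = nr[small + 1] = 0;

		/* ...and cut the bigger side's remainder back to the same share */
		for (int lru = big; lru <= big + 1; lru++) {
			unsigned long scanned = targets[lru] - nr[lru];

			nr[lru] = targets[lru] * percentage / 100;
			nr[lru] -= min_ul(nr[lru], scanned);
		}

		/* Pre-fix behavior: never re-check the goal, finish the
		 * (possibly enormous) adjusted remainder. */
		scan_adjusted = 1;
	}

	printf("requested %lu pages, reclaimed %lu pages (~%lu MB)\n",
	       nr_to_reclaim, nr_reclaimed, nr_reclaimed * 4 / 1024);
	return 0;
}

With these inputs the goal is met after nine 128-page passes (1152
pages), but the single adjustment leaves a ~4.65M-page anon remainder
that the loop then dutifully finishes: the model ends up reclaiming
~4.66M pages (~18G) for a 1047-page request, the same shape as the
trace above.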
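
For comparison, here is the same toy model with the one-nudge behavior
that the change described in the patch text quoted below introduces, as
I read it. The kernel patch itself may differ in detail; the point is
that the bailout is re-evaluated after the adjustment, so the zeroed
smaller side trips the break on the very next pass:

--- sim_bailout.c.orig
+++ sim_bailout.c
@@
-		/* Pre-fix behavior: never re-check the goal, finish the
-		 * (possibly enormous) adjusted remainder. */
-		scan_adjusted = 1;
+		/* Post-fix behavior: leave scan_adjusted clear. The next
+		 * iteration scans one more SWAP_CLUSTER_MAX batch from the
+		 * bigger side, re-enters the adjustment, and breaks out at
+		 * the !nr_anon || !nr_file check, since the smaller side
+		 * was zeroed above. */

With the trace's numbers, the patched model stops at ~1.2K pages (~4M)
instead of ~4.66M pages - exactly one extra SWAP_CLUSTER_MAX batch past
the point where the goal was met.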
>
> As a result, the VM is now more likely to actually finish large
> anon targets than before.
>
> Change the code such that only one SWAP_CLUSTER_MAX-sized nudge toward
> the larger LRU lists is made before bailing out on a met reclaim goal.
>
> This fixes the extreme overreclaim problem.
>
> Fairness is more subtle and harder to evaluate. No obvious misbehavior
> was observed on the test workload, in any case. Conceptually, fairness
> should primarily be a cumulative effect from regular, lower-priority
> scans. Once the VM is in trouble and needs to escalate scan targets to
> make forward progress, fairness needs to take a backseat. This is also
> acknowledged by the myriad exceptions in get_scan_count(). This patch
> makes fairness decrease gradually, as it keeps fairness work static
> over increasing priority levels with growing scan targets. This should
> make more sense - although we may have to revisit the exact values.
>
> Signed-off-by: Johannes Weiner

Acked-by: Mel Gorman

-- 
Mel Gorman
SUSE Labs