From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 14 May 2024 16:36:49 -0700
From: Roman Gushchin <roman.gushchin@linux.dev>
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko, Shakeel Butt, Rik van Riel, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: vmscan: restore incremental cgroup iteration
References: <20240514202641.2821494-1-hannes@cmpxchg.org>
In-Reply-To: <20240514202641.2821494-1-hannes@cmpxchg.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Tue, May 14, 2024 at 04:26:41PM -0400, Johannes Weiner wrote:
> Currently, reclaim always walks the entire cgroup tree in order to
> ensure fairness between groups. While overreclaim is limited in
> shrink_lruvec(), many of our systems have a sizable number of active
> groups, and an even bigger number of idle cgroups with cache left
> behind by previous jobs; the mere act of walking all these cgroups can
> impose significant latency on direct reclaimers.
>
> In the past, we've used a save-and-restore iterator that enabled
> incremental tree walks over multiple reclaim invocations. This ensured
> fairness, while keeping the work of individual reclaimers small.
>
> However, in edge cases with a lot of reclaim concurrency, individual
> reclaimers would sometimes not see enough of the cgroup tree to make
> forward progress and (prematurely) declare OOM. Consequently we
> switched to comprehensive walks in 1ba6fc9af35b ("mm: vmscan: do not
> share cgroup iteration between reclaimers").
>
> To address the latency problem without bringing back the premature OOM
> issue, reinstate the shared iteration, but with a restart condition to
> do the full walk in the OOM case - similar to what we do for
> memory.low enforcement and active page protection.
>
> In the worst case, we do one more full tree walk before declaring
> OOM. But the vast majority of direct reclaim scans can then finish
> much quicker, while fairness across the tree is maintained:
>
> - Before this patch, we observed that direct reclaim always takes more
>   than 100us and most direct reclaim time is spent in reclaim cycles
>   lasting between 1ms and 1 second. Almost 40% of direct reclaim time
>   was spent on reclaim cycles exceeding 100ms.
>
> - With this patch, almost all page reclaim cycles last less than 10ms,
>   and a good amount of direct page reclaim finishes in under 100us. No
>   page reclaim cycles lasting over 100ms were observed anymore.
>
> The shared iterator state is maintained inside the target cgroup, so
> fair and incremental walks are performed during both global reclaim
> and cgroup limit reclaim of complex subtrees.
>
> Reported-by: Rik van Riel
> Signed-off-by: Johannes Weiner
> Signed-off-by: Rik van Riel

Looks really solid.

Reviewed-by: Roman Gushchin

Thanks!