From mboxrd@z Thu Jan  1 00:00:00 1970
From: Gregory Price <gourry@gourry.net>
To: linux-mm@kvack.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@meta.com, longman@redhat.com, hannes@cmpxchg.org,
	mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, tj@kernel.org, mkoutny@suse.com,
	akpm@linux-foundation.org
Subject: [PATCH v5 2/2] vmscan,cgroup: apply mems_effective to reclaim
Date: Thu, 24 Apr 2025 16:22:07 -0400
Message-ID: <20250424202207.50028-3-gourry@gourry.net>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250424202207.50028-1-gourry@gourry.net>
References: <20250422012616.1883287-3-gourry@gourry.net>
 <20250424202207.50028-1-gourry@gourry.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It is possible for a reclaimer to cause demotions of an lruvec
belonging to a cgroup with cpuset.mems set to exclude some nodes.
Attempt to apply this limitation based on the lruvec's memcg and
prevent the demotion.

Notably, this may still allow demotion of shared libraries or any
memory first instantiated in another cgroup. This means cpusets still
cannot guarantee complete isolation when demotion is enabled, and the
docs have been updated to reflect this.

This is useful for more consistently isolating workloads on a
multi-tenant system from certain classes of memory - with the noted
exceptions.

Note on locking: The cgroup_get_e_css reference keeps the css (and
with it cs->effective_mems) alive, but calls of this interface are
subject to the same race conditions associated with any non-atomic
access to cs->effective_mems. So while this interface cannot make
strong guarantees of correctness, it avoids taking a global lock or
rcu_read_lock for performance.

Suggested-by: Shakeel Butt <shakeel.butt@linux.dev>
Suggested-by: Waiman Long <longman@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Waiman Long <longman@redhat.com>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 .../ABI/testing/sysfs-kernel-mm-numa | 16 +++++---
 include/linux/cpuset.h               |  5 +++
 include/linux/memcontrol.h           |  6 +++
 kernel/cgroup/cpuset.c               | 36 ++++++++++++++++
 mm/memcontrol.c                      |  6 +++
 mm/vmscan.c                          | 41 +++++++++++--------
 6 files changed, 88 insertions(+), 22 deletions(-)
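
For reviewers, a condensed sketch of the check this patch threads into
the demotion path (the helper name below is invented for illustration;
the sc->no_demotion handling and config fallbacks are elided):

	/*
	 * Demotion now additionally requires that the demotion target
	 * be allowed by the memcg's effective cpuset.mems.
	 * mem_cgroup_node_allowed() maps the memcg to its cgroup and
	 * defers to cpuset_node_allowed(), which tests
	 * cs->effective_mems locklessly under a css reference.
	 */
	static bool can_demote_sketch(int nid, struct mem_cgroup *memcg)
	{
		int demotion_nid = next_demotion_node(nid);

		if (!numa_demotion_enabled || demotion_nid == NUMA_NO_NODE)
			return false;

		/* Target node excluded by cpuset.mems: fall back to swap. */
		return mem_cgroup_node_allowed(memcg, demotion_nid);
	}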
diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-numa b/Documentation/ABI/testing/sysfs-kernel-mm-numa
index 77e559d4ed80..90e375ff54cb 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-numa
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-numa
@@ -16,9 +16,13 @@ Description: Enable/disable demoting pages during reclaim
 		Allowing page migration during reclaim enables these
 		systems to migrate pages from fast tiers to slow
 		tiers when the fast tier is under pressure. This migration
-		is performed before swap. It may move data to a NUMA
-		node that does not fall into the cpuset of the
-		allocating process which might be construed to violate
-		the guarantees of cpusets. This should not be enabled
-		on systems which need strict cpuset location
-		guarantees.
+		is performed before swap if an eligible NUMA node is
+		present in cpuset.mems for the cgroup (or if cpuset v1
+		is being used). If cpuset.mems changes at runtime, it
+		may move data to a NUMA node that does not fall into the
+		cpuset of the new cpuset.mems, which might be construed
+		to violate the guarantees of cpusets. Shared memory,
+		such as libraries, owned by another cgroup may still be
+		demoted and result in memory use on a node not present
+		in cpuset.mems. This should not be enabled on systems
+		which need strict cpuset location guarantees.
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 893a4c340d48..5255e3fdbf62 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -171,6 +171,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
 	task_unlock(current);
 }
 
+extern bool cpuset_node_allowed(struct cgroup *cgroup, int nid);
 #else /* !CONFIG_CPUSETS */
 
 static inline bool cpusets_enabled(void) { return false; }
@@ -282,6 +283,10 @@ static inline bool read_mems_allowed_retry(unsigned int seq)
 	return false;
 }
 
+static inline bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+	return true;
+}
 #endif /* !CONFIG_CPUSETS */
 
 #endif /* _LINUX_CPUSET_H */
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 53364526d877..a6c4e3faf721 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1736,6 +1736,8 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
 	rcu_read_unlock();
 }
 
+bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid);
+
 #else
 static inline bool mem_cgroup_kmem_disabled(void)
 {
@@ -1793,6 +1795,10 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
 {
 }
 
+static inline bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+	return true;
+}
 #endif /* CONFIG_MEMCG */
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_ZSWAP)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index f8e6a9b642cb..7eb71d411dc7 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4163,6 +4163,42 @@ bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
 	return allowed;
 }
 
+bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+	struct cgroup_subsys_state *css;
+	struct cpuset *cs;
+	bool allowed;
+
+	/*
+	 * In v1, mem_cgroup and cpuset are unlikely in the same hierarchy
+	 * and mems_allowed is likely to be empty even if we could get to it,
+	 * so return true to avoid taking a global lock on the empty check.
+	 */
+	if (!cpuset_v2())
+		return true;
+
+	css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
+	if (!css)
+		return true;
+
+	/*
+	 * Normally, accessing effective_mems would require the cpuset_mutex
+	 * or callback_lock - but node_isset is atomic and the reference
+	 * taken via cgroup_get_e_css is sufficient to protect css.
+	 *
+	 * Since this interface is intended for use by migration paths, we
+	 * relax locking here to avoid taking global locks - while accepting
+	 * there may be rare scenarios where the result may be inaccurate.
+	 *
+	 * Reclaim and migration are subject to these same race conditions, and
+	 * cannot make strong isolation guarantees, so this is acceptable.
+	 */
+	cs = container_of(css, struct cpuset, css);
+	allowed = node_isset(nid, cs->effective_mems);
+	css_put(css);
+	return allowed;
+}
+
 /**
  * cpuset_spread_node() - On which node to begin search for a page
  * @rotor: round robin rotor
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 40c07b8699ae..2f61d0060fd1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include <linux/cpuset.h>
 #include
 #include
 #include
@@ -5437,3 +5438,8 @@ static int __init mem_cgroup_swap_init(void)
 subsys_initcall(mem_cgroup_swap_init);
 #endif /* CONFIG_SWAP */
+
+bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+	return memcg ? cpuset_node_allowed(memcg->css.cgroup, nid) : true;
+}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2b2ab386cab5..32a7ce421e42 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -342,16 +342,22 @@ static void flush_reclaim_state(struct scan_control *sc)
 	}
 }
 
-static bool can_demote(int nid, struct scan_control *sc)
+static bool can_demote(int nid, struct scan_control *sc,
+		       struct mem_cgroup *memcg)
 {
+	int demotion_nid;
+
 	if (!numa_demotion_enabled)
 		return false;
 	if (sc && sc->no_demotion)
 		return false;
-	if (next_demotion_node(nid) == NUMA_NO_NODE)
+
+	demotion_nid = next_demotion_node(nid);
+	if (demotion_nid == NUMA_NO_NODE)
 		return false;
-	return true;
+
+	/* If demotion node isn't in the cgroup's mems_allowed, fall back */
+	return mem_cgroup_node_allowed(memcg, demotion_nid);
 }
 
 static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
@@ -376,7 +382,7 @@ static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
 	 *
 	 * Can it be reclaimed from this node via demotion?
 	 */
-	return can_demote(nid, sc);
+	return can_demote(nid, sc, memcg);
 }
 
 /*
@@ -1096,7 +1102,8 @@ static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
  */
 static unsigned int shrink_folio_list(struct list_head *folio_list,
 		struct pglist_data *pgdat, struct scan_control *sc,
-		struct reclaim_stat *stat, bool ignore_references)
+		struct reclaim_stat *stat, bool ignore_references,
+		struct mem_cgroup *memcg)
 {
 	struct folio_batch free_folios;
 	LIST_HEAD(ret_folios);
@@ -1109,7 +1116,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	folio_batch_init(&free_folios);
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
-	do_demote_pass = can_demote(pgdat->node_id, sc);
+	do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
 
 retry:
 	while (!list_empty(folio_list)) {
@@ -1658,7 +1665,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	 */
 	noreclaim_flag = memalloc_noreclaim_save();
 	nr_reclaimed = shrink_folio_list(&clean_folios, zone->zone_pgdat, &sc,
-					 &stat, true);
+					 &stat, true, NULL);
 	memalloc_noreclaim_restore(noreclaim_flag);
 
 	list_splice(&clean_folios, folio_list);
@@ -2031,7 +2038,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	if (nr_taken == 0)
 		return 0;
 
-	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
+	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false,
+					 lruvec_memcg(lruvec));
 
 	spin_lock_irq(&lruvec->lru_lock);
 	move_folios_to_lru(lruvec, &folio_list);
@@ -2214,7 +2222,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 		.no_demotion = 1,
 	};
 
-	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true);
+	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true, NULL);
 	while (!list_empty(folio_list)) {
 		folio = lru_to_folio(folio_list);
 		list_del(&folio->lru);
@@ -2646,7 +2654,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
  * Anonymous LRU management is a waste if there is
  * ultimately no way to reclaim the memory.
  */
-static bool can_age_anon_pages(struct pglist_data *pgdat,
+static bool can_age_anon_pages(struct lruvec *lruvec,
 			       struct scan_control *sc)
 {
 	/* Aging the anon LRU is valuable if swap is present: */
@@ -2654,7 +2662,8 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
 		return true;
 
 	/* Also valuable if anon pages can be demoted: */
-	return can_demote(pgdat->node_id, sc);
+	return can_demote(lruvec_pgdat(lruvec)->node_id, sc,
+			  lruvec_memcg(lruvec));
 }
 
 #ifdef CONFIG_LRU_GEN
@@ -2732,7 +2741,7 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc)
 	if (!sc->may_swap)
 		return 0;
 
-	if (!can_demote(pgdat->node_id, sc) &&
+	if (!can_demote(pgdat->node_id, sc, memcg) &&
 	    mem_cgroup_get_nr_swap_pages(memcg) < MIN_LRU_BATCH)
 		return 0;
 
@@ -4695,7 +4704,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 	if (list_empty(&list))
 		return scanned;
 retry:
-	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
+	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false, memcg);
 	sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
 	sc->nr_reclaimed += reclaimed;
 	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
@@ -5850,7 +5859,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (can_age_anon_pages(lruvec_pgdat(lruvec), sc) &&
+	if (can_age_anon_pages(lruvec, sc) &&
 	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
@@ -6681,10 +6690,10 @@ static void kswapd_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 		return;
 	}
 
-	if (!can_age_anon_pages(pgdat, sc))
+	lruvec = mem_cgroup_lruvec(NULL, pgdat);
+	if (!can_age_anon_pages(lruvec, sc))
 		return;
 
-	lruvec = mem_cgroup_lruvec(NULL, pgdat);
 	if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		return;
 
-- 
2.49.0