From: Gregory Price <gourry@gourry.net>
To: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com,
	tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com,
	longman@redhat.com, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev,
	akpm@linux-foundation.org
Subject: [PATCH v2 2/2] vmscan,cgroup: apply mems_effective to reclaim
Date: Thu, 17 Apr 2025 23:13:52 -0400
Message-ID: <20250418031352.1277966-2-gourry@gourry.net>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250418031352.1277966-1-gourry@gourry.net>
References: <20250418031352.1277966-1-gourry@gourry.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It is possible for a reclaimer to cause demotions of an lruvec belonging
to a cgroup with cpuset.mems set to exclude some nodes. Attempt to apply
this limitation based on the lruvec's memcg and prevent demotion.

Notably, this may still allow demotion of shared libraries or any memory
first instantiated in another cgroup. This means cpusets still cannot
guarantee complete isolation when demotion is enabled, and the docs have
been updated to reflect this.

This is useful for isolating workloads on a multi-tenant system from
certain classes of memory more consistently - with the noted exceptions.

Signed-off-by: Gregory Price <gourry@gourry.net>
---
 .../ABI/testing/sysfs-kernel-mm-numa          | 14 ++++---
 include/linux/cgroup.h                        |  7 ++++
 include/linux/cpuset.h                        |  5 +++
 include/linux/memcontrol.h                    |  9 ++++
 kernel/cgroup/cgroup.c                        |  5 +++
 kernel/cgroup/cpuset.c                        | 22 ++++++++++
 mm/vmscan.c                                   | 41 +++++++++++--------
 7 files changed, 82 insertions(+), 21 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-numa b/Documentation/ABI/testing/sysfs-kernel-mm-numa
index 77e559d4ed80..27cdcab901f7 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-numa
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-numa
@@ -16,9 +16,13 @@ Description:	Enable/disable demoting pages during reclaim
 		Allowing page migration during reclaim enables these
 		systems to migrate pages from fast tiers to slow tiers
 		when the fast tier is under pressure.  This migration
-		is performed before swap.  It may move data to a NUMA
-		node that does not fall into the cpuset of the
-		allocating process which might be construed to violate
-		the guarantees of cpusets.  This should not be enabled
-		on systems which need strict cpuset location
+		is performed before swap if an eligible NUMA node is
+		present in cpuset.mems for the cgroup. If cpuset.mems
+		changes at runtime, it may move data to a NUMA node that
+		does not fall into the cpuset of the new cpuset.mems,
+		which might be construed to violate the guarantees of
+		cpusets. Shared memory, such as libraries, owned by
+		another cgroup may still be demoted and result in memory
+		use on a node not present in cpuset.mems. This should not
+		be enabled on systems which need strict cpuset location
 		guarantees.
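For completeness, toggling the knob documented above from user space is
just a sysfs write; a minimal C sketch (path as documented in the ABI
entry above, requires sufficient privileges):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Knob documented in sysfs-kernel-mm-numa above. */
	int fd = open("/sys/kernel/mm/numa/demotion_enabled", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Writing "1" enables demotion during reclaim; "0" disables it. */
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}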
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index f8ef47f8a634..2915250a3e5e 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -632,6 +632,8 @@ static inline void cgroup_kthread_ready(void)
 void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen);
 
 struct cgroup *cgroup_get_from_id(u64 id);
+
+extern bool cgroup_node_allowed(struct cgroup *cgroup, int nid);
 #else /* !CONFIG_CGROUPS */
 
 struct cgroup_subsys_state;
@@ -681,6 +683,11 @@ static inline bool task_under_cgroup_hierarchy(struct task_struct *task,
 
 static inline void cgroup_path_from_kernfs_id(u64 id,
 					      char *buf, size_t buflen) {}
+
+static inline bool cgroup_node_allowed(struct cgroup *cgroup, int nid)
+{
+	return true;
+}
 #endif /* !CONFIG_CGROUPS */
 
 #ifdef CONFIG_CGROUPS
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 893a4c340d48..c64b4a174456 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -171,6 +171,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
 	task_unlock(current);
 }
 
+extern bool cpuset_node_allowed(struct cgroup *cgroup, int nid);
 #else /* !CONFIG_CPUSETS */
 
 static inline bool cpusets_enabled(void) { return false; }
@@ -282,6 +283,10 @@ static inline bool read_mems_allowed_retry(unsigned int seq)
 	return false;
 }
 
+static inline bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+	return true;
+}
 #endif /* !CONFIG_CPUSETS */
 
 #endif /* _LINUX_CPUSET_H */
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 53364526d877..2906e4bb12e9 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1736,6 +1736,11 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
 	rcu_read_unlock();
 }
 
+static inline bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+	return memcg ? cgroup_node_allowed(memcg->css.cgroup, nid) : true;
+}
+
 #else
 static inline bool mem_cgroup_kmem_disabled(void)
 {
@@ -1793,6 +1798,10 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
 {
 }
 
+static inline bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+	return true;
+}
 #endif /* CONFIG_MEMCG */
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_ZSWAP)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index afc665b7b1fe..ba0b90cd774c 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -7038,6 +7038,11 @@ int cgroup_parse_float(const char *input, unsigned dec_shift, s64 *v)
 	return 0;
 }
 
+bool cgroup_node_allowed(struct cgroup *cgroup, int nid)
+{
+	return cpuset_node_allowed(cgroup, nid);
+}
+
 /*
  * sock->sk_cgrp_data handling. For more info, see sock_cgroup_data
  * definition in cgroup-defs.h.
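The cpuset-side helper added in the next hunk handles a cpuset whose
effective_mems is empty by walking up the hierarchy. A simplified
user-space model of that walk (stand-in types, illustrative only; the
real helper also takes callback_lock and pins the effective css):

#include <stdbool.h>

/* Stand-in for struct cpuset: a node bitmask plus a parent pointer. */
struct cs {
	unsigned long effective_mems;	/* bit n set => node n allowed */
	struct cs *parent;
};

static bool node_allowed(const struct cs *cs, int nid)
{
	/* An empty mask defers to the nearest ancestor with a non-empty
	 * one; the root always has a valid mask, so this terminates. */
	while (cs->effective_mems == 0)
		cs = cs->parent;
	return !!(cs->effective_mems & (1UL << nid));
}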
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index d6ed3f053e62..31e4c4cbcdfc 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4163,6 +4163,28 @@ bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
 	return allowed;
 }
 
+bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+	struct cgroup_subsys_state *css;
+	unsigned long flags;
+	struct cpuset *cs;
+	bool allowed;
+
+	css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
+	if (!css)
+		return true;
+
+	cs = container_of(css, struct cpuset, css);
+	spin_lock_irqsave(&callback_lock, flags);
+	/* At least one parent must have a valid node list */
+	while (nodes_empty(cs->effective_mems))
+		cs = parent_cs(cs);
+	allowed = node_isset(nid, cs->effective_mems);
+	spin_unlock_irqrestore(&callback_lock, flags);
+	css_put(css);
+	return allowed;
+}
+
 /**
  * cpuset_spread_node() - On which node to begin search for a page
  * @rotor: round robin rotor
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2b2ab386cab5..32a7ce421e42 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -342,16 +342,22 @@ static void flush_reclaim_state(struct scan_control *sc)
 	}
 }
 
-static bool can_demote(int nid, struct scan_control *sc)
+static bool can_demote(int nid, struct scan_control *sc,
+		       struct mem_cgroup *memcg)
 {
+	int demotion_nid;
+
 	if (!numa_demotion_enabled)
 		return false;
 	if (sc && sc->no_demotion)
 		return false;
-	if (next_demotion_node(nid) == NUMA_NO_NODE)
+
+	demotion_nid = next_demotion_node(nid);
+	if (demotion_nid == NUMA_NO_NODE)
 		return false;
-	return true;
+
+	/* If demotion node isn't in the cgroup's mems_allowed, fall back */
+	return mem_cgroup_node_allowed(memcg, demotion_nid);
 }
 
 static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
@@ -376,7 +382,7 @@ static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
 	 *
 	 * Can it be reclaimed from this node via demotion?
 	 */
-	return can_demote(nid, sc);
+	return can_demote(nid, sc, memcg);
 }
 
 /*
@@ -1096,7 +1102,8 @@ static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
  */
 static unsigned int shrink_folio_list(struct list_head *folio_list,
 		struct pglist_data *pgdat, struct scan_control *sc,
-		struct reclaim_stat *stat, bool ignore_references)
+		struct reclaim_stat *stat, bool ignore_references,
+		struct mem_cgroup *memcg)
 {
 	struct folio_batch free_folios;
 	LIST_HEAD(ret_folios);
@@ -1109,7 +1116,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	folio_batch_init(&free_folios);
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
-	do_demote_pass = can_demote(pgdat->node_id, sc);
+	do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
 
 retry:
 	while (!list_empty(folio_list)) {
@@ -1658,7 +1665,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	 */
 	noreclaim_flag = memalloc_noreclaim_save();
 	nr_reclaimed = shrink_folio_list(&clean_folios, zone->zone_pgdat, &sc,
-					 &stat, true);
+					 &stat, true, NULL);
 	memalloc_noreclaim_restore(noreclaim_flag);
 
 	list_splice(&clean_folios, folio_list);
@@ -2031,7 +2038,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	if (nr_taken == 0)
 		return 0;
 
-	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
+	nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false,
+					 lruvec_memcg(lruvec));
 
 	spin_lock_irq(&lruvec->lru_lock);
 	move_folios_to_lru(lruvec, &folio_list);
@@ -2214,7 +2222,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 		.no_demotion = 1,
 	};
 
-	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true);
+	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true, NULL);
 	while (!list_empty(folio_list)) {
 		folio = lru_to_folio(folio_list);
 		list_del(&folio->lru);
@@ -2646,7 +2654,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
  * Anonymous LRU management is a waste if there is
  * ultimately no way to reclaim the memory.
  */
-static bool can_age_anon_pages(struct pglist_data *pgdat,
+static bool can_age_anon_pages(struct lruvec *lruvec,
 			       struct scan_control *sc)
 {
 	/* Aging the anon LRU is valuable if swap is present: */
@@ -2654,7 +2662,8 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
 		return true;
 
 	/* Also valuable if anon pages can be demoted: */
-	return can_demote(pgdat->node_id, sc);
+	return can_demote(lruvec_pgdat(lruvec)->node_id, sc,
+			  lruvec_memcg(lruvec));
 }
 
 #ifdef CONFIG_LRU_GEN
@@ -2732,7 +2741,7 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc)
 	if (!sc->may_swap)
 		return 0;
 
-	if (!can_demote(pgdat->node_id, sc) &&
+	if (!can_demote(pgdat->node_id, sc, memcg) &&
 	    mem_cgroup_get_nr_swap_pages(memcg) < MIN_LRU_BATCH)
 		return 0;
 
@@ -4695,7 +4704,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 	if (list_empty(&list))
 		return scanned;
 retry:
-	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
+	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false, memcg);
 	sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
 	sc->nr_reclaimed += reclaimed;
 	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
@@ -5850,7 +5859,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (can_age_anon_pages(lruvec_pgdat(lruvec), sc) &&
+	if (can_age_anon_pages(lruvec, sc) &&
 	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
@@ -6681,10 +6690,10 @@ static void kswapd_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 		return;
 	}
 
-	if (!can_age_anon_pages(pgdat, sc))
+	lruvec = mem_cgroup_lruvec(NULL, pgdat);
+	if (!can_age_anon_pages(lruvec, sc))
 		return;
 
-	lruvec = mem_cgroup_lruvec(NULL, pgdat);
 	if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		return;
 
-- 
2.49.0
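One way to exercise the new behavior end to end is to restrict a test
cgroup's cpuset.mems and trigger reclaim; a minimal sketch (hypothetical
cgroup name "test", assumes cgroup v2 mounted at /sys/fs/cgroup with the
cpuset controller enabled):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	int fd;

	/* Hypothetical test cgroup, restricted to NUMA node 0. With this
	 * series, reclaim should stop demoting its pages to nodes outside
	 * cpuset.mems (shared pages owned by other cgroups remain the
	 * documented exception). */
	mkdir("/sys/fs/cgroup/test", 0755);
	fd = open("/sys/fs/cgroup/test/cpuset.mems", O_WRONLY);
	if (fd < 0) {
		perror("open cpuset.mems");
		return 1;
	}
	if (write(fd, "0", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}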