* [PATCH v3 0/2] vmscan: enforce mems_effective during demotion
@ 2025-04-19 5:38 Gregory Price
2025-04-19 5:38 ` [PATCH v3 1/2] cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed Gregory Price
` (2 more replies)
0 siblings, 3 replies; 20+ messages in thread
From: Gregory Price @ 2025-04-19 5:38 UTC (permalink / raw)
To: linux-mm
Cc: cgroups, linux-kernel, kernel-team, longman, hannes, mhocko,
roman.gushchin, shakeel.butt, muchun.song, tj, mkoutny, akpm
Change reclaim to respect cpuset.mems_effective during demotion when
possible. Presently, reclaim explicitly ignores cpuset.mems_effective
when demoting, which may cause the cpuset settings to be violated.
Implement cpuset_node_allowed() to check the cpuset.mems_effective
associated with the mem_cgroup of the lruvec being scanned.
This requires renaming the existing cpuset_node_allowed() to be
cpuset_current_node_allowed() - which is more descriptive anyway - to
implement the new cpuset_node_allowed() which takes a target cgroup.
v3:
- remove cgroup indirection, call cpuset directly from memcontrol
- put mem_cgroup_node_allowed in memcontrol.c to reduce cpuset.h
include scope
- return true if mems_effective is empty, and don't walk the parents
as recommended by Waiman Long.
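In short, the demotion gate in vmscan gains a memcg argument; a
condensed sketch of what patch 2 implements in full:

static bool can_demote(int nid, struct scan_control *sc,
                       struct mem_cgroup *memcg)
{
        int demotion_nid;

        if (!numa_demotion_enabled)
                return false;
        if (sc && sc->no_demotion)
                return false;

        demotion_nid = next_demotion_node(nid);
        if (demotion_nid == NUMA_NO_NODE)
                return false;

        /* Fall back to swap if the target node is outside mems_effective */
        return mem_cgroup_node_allowed(memcg, demotion_nid);
}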
Gregory Price (2):
cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed
vmscan,cgroup: apply mems_effective to reclaim
.../ABI/testing/sysfs-kernel-mm-numa | 14 ++++---
include/linux/cpuset.h | 9 +++-
include/linux/memcontrol.h | 6 +++
kernel/cgroup/cpuset.c | 25 ++++++++++-
mm/memcontrol.c | 6 +++
mm/page_alloc.c | 4 +-
mm/vmscan.c | 41 +++++++++++--------
7 files changed, 78 insertions(+), 27 deletions(-)
--
2.49.0
* [PATCH v3 1/2] cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed
2025-04-19 5:38 [PATCH v3 0/2] vmscan: enforce mems_effective during demotion Gregory Price
@ 2025-04-19 5:38 ` Gregory Price
2025-04-19 18:37 ` Shakeel Butt
2025-04-19 5:38 ` [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim Gregory Price
2025-04-19 17:21 ` [PATCH v3 0/2] vmscan: enforce mems_effective during demotion Tejun Heo
2 siblings, 1 reply; 20+ messages in thread
From: Gregory Price @ 2025-04-19 5:38 UTC (permalink / raw)
To: linux-mm
Cc: cgroups, linux-kernel, kernel-team, longman, hannes, mhocko,
roman.gushchin, shakeel.butt, muchun.song, tj, mkoutny, akpm
Rename cpuset_node_allowed to reflect that the function checks the
current task's cpuset.mems. This allows us to make a new
cpuset_node_allowed function that checks a target cgroup's cpuset.mems.
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
include/linux/cpuset.h | 4 ++--
kernel/cgroup/cpuset.c | 4 ++--
mm/page_alloc.c | 4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 835e7b793f6a..893a4c340d48 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -82,11 +82,11 @@ extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
void cpuset_init_current_mems_allowed(void);
int cpuset_nodemask_valid_mems_allowed(nodemask_t *nodemask);
-extern bool cpuset_node_allowed(int node, gfp_t gfp_mask);
+extern bool cpuset_current_node_allowed(int node, gfp_t gfp_mask);
static inline bool __cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
{
- return cpuset_node_allowed(zone_to_nid(z), gfp_mask);
+ return cpuset_current_node_allowed(zone_to_nid(z), gfp_mask);
}
static inline bool cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 0f910c828973..f8e6a9b642cb 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4090,7 +4090,7 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
}
/*
- * cpuset_node_allowed - Can we allocate on a memory node?
+ * cpuset_current_node_allowed - Can current task allocate on a memory node?
* @node: is this an allowed node?
* @gfp_mask: memory allocation flags
*
@@ -4129,7 +4129,7 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
* GFP_KERNEL - any node in enclosing hardwalled cpuset ok
* GFP_USER - only nodes in current tasks mems allowed ok.
*/
-bool cpuset_node_allowed(int node, gfp_t gfp_mask)
+bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
{
struct cpuset *cs; /* current cpuset ancestors */
bool allowed; /* is allocation in zone z allowed? */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5079b1b04d49..233ce25f8f3d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3461,7 +3461,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
retry:
/*
* Scan zonelist, looking for a zone with enough free.
- * See also cpuset_node_allowed() comment in kernel/cgroup/cpuset.c.
+ * See also cpuset_current_node_allowed() comment in kernel/cgroup/cpuset.c.
*/
no_fallback = alloc_flags & ALLOC_NOFRAGMENT;
z = ac->preferred_zoneref;
@@ -4148,7 +4148,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
/*
* Ignore cpuset mems for non-blocking __GFP_HIGH (probably
* GFP_ATOMIC) rather than fail, see the comment for
- * cpuset_node_allowed().
+ * cpuset_current_node_allowed().
*/
if (alloc_flags & ALLOC_MIN_RESERVE)
alloc_flags &= ~ALLOC_CPUSET;
--
2.49.0
* [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-19 5:38 [PATCH v3 0/2] vmscan: enforce mems_effective during demotion Gregory Price
2025-04-19 5:38 ` [PATCH v3 1/2] cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed Gregory Price
@ 2025-04-19 5:38 ` Gregory Price
2025-04-19 18:48 ` Shakeel Butt
` (2 more replies)
2025-04-19 17:21 ` [PATCH v3 0/2] vmscan: enforce mems_effective during demotion Tejun Heo
2 siblings, 3 replies; 20+ messages in thread
From: Gregory Price @ 2025-04-19 5:38 UTC (permalink / raw)
To: linux-mm
Cc: cgroups, linux-kernel, kernel-team, longman, hannes, mhocko,
roman.gushchin, shakeel.butt, muchun.song, tj, mkoutny, akpm
It is possible for a reclaimer to cause demotion of folios on an lruvec
belonging to a cgroup with cpuset.mems set to exclude some nodes. Attempt
to apply this limitation based on the lruvec's memcg and prevent demotion
to the excluded nodes.
Notably, this may still allow demotion of shared libraries or any memory
first instantiated in another cgroup. This means cpusets still cannot
guarantee complete isolation when demotion is enabled, and the
docs have been updated to reflect this.
This is useful for isolating workloads on a multi-tenant system from
certain classes of memory more consistently - with the noted exceptions.
Signed-off-by: Gregory Price <gourry@gourry.net>
---
.../ABI/testing/sysfs-kernel-mm-numa | 14 ++++---
include/linux/cpuset.h | 5 +++
include/linux/memcontrol.h | 6 +++
kernel/cgroup/cpuset.c | 21 ++++++++++
mm/memcontrol.c | 6 +++
mm/vmscan.c | 41 +++++++++++--------
6 files changed, 72 insertions(+), 21 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-numa b/Documentation/ABI/testing/sysfs-kernel-mm-numa
index 77e559d4ed80..27cdcab901f7 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-numa
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-numa
@@ -16,9 +16,13 @@ Description: Enable/disable demoting pages during reclaim
Allowing page migration during reclaim enables these
systems to migrate pages from fast tiers to slow tiers
when the fast tier is under pressure. This migration
- is performed before swap. It may move data to a NUMA
- node that does not fall into the cpuset of the
- allocating process which might be construed to violate
- the guarantees of cpusets. This should not be enabled
- on systems which need strict cpuset location
+ is performed before swap if an eligible NUMA node is
+ present in cpuset.mems for the cgroup. If cpuset.mems
+ changes at runtime, it may move data to a NUMA node
+ outside of the new cpuset.mems, which might be construed
+ to violate the guarantees of cpusets. Shared memory, such
+ as libraries, owned by another cgroup may still be demoted
+ and result in memory use on a node not present in
+ cpuset.mems. This should not
+ be enabled on systems which need strict cpuset location
guarantees.
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 893a4c340d48..c64b4a174456 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -171,6 +171,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
task_unlock(current);
}
+extern bool cpuset_node_allowed(struct cgroup *cgroup, int nid);
#else /* !CONFIG_CPUSETS */
static inline bool cpusets_enabled(void) { return false; }
@@ -282,6 +283,10 @@ static inline bool read_mems_allowed_retry(unsigned int seq)
return false;
}
+static inline bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+ return false;
+}
#endif /* !CONFIG_CPUSETS */
#endif /* _LINUX_CPUSET_H */
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 53364526d877..a6c4e3faf721 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1736,6 +1736,8 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
rcu_read_unlock();
}
+bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid);
+
#else
static inline bool mem_cgroup_kmem_disabled(void)
{
@@ -1793,6 +1795,10 @@ static inline void count_objcg_events(struct obj_cgroup *objcg,
{
}
+static inline bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+ return true;
+}
#endif /* CONFIG_MEMCG */
#if defined(CONFIG_MEMCG) && defined(CONFIG_ZSWAP)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index f8e6a9b642cb..8814ca8ec710 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4163,6 +4163,27 @@ bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
return allowed;
}
+bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
+{
+ struct cgroup_subsys_state *css;
+ unsigned long flags;
+ struct cpuset *cs;
+ bool allowed;
+
+ css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
+ if (!css)
+ return true;
+
+ cs = container_of(css, struct cpuset, css);
+ spin_lock_irqsave(&callback_lock, flags);
+ /* On v1 effective_mems may be empty, simply allow */
+ allowed = node_isset(nid, cs->effective_mems) ||
+ nodes_empty(cs->effective_mems);
+ spin_unlock_irqrestore(&callback_lock, flags);
+ css_put(css);
+ return allowed;
+}
+
/**
* cpuset_spread_node() - On which node to begin search for a page
* @rotor: round robin rotor
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 40c07b8699ae..2f61d0060fd1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -29,6 +29,7 @@
#include <linux/page_counter.h>
#include <linux/memcontrol.h>
#include <linux/cgroup.h>
+#include <linux/cpuset.h>
#include <linux/sched/mm.h>
#include <linux/shmem_fs.h>
#include <linux/hugetlb.h>
@@ -5437,3 +5438,8 @@ static int __init mem_cgroup_swap_init(void)
subsys_initcall(mem_cgroup_swap_init);
#endif /* CONFIG_SWAP */
+
+bool mem_cgroup_node_allowed(struct mem_cgroup *memcg, int nid)
+{
+ return memcg ? cpuset_node_allowed(memcg->css.cgroup, nid) : true;
+}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2b2ab386cab5..32a7ce421e42 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -342,16 +342,22 @@ static void flush_reclaim_state(struct scan_control *sc)
}
}
-static bool can_demote(int nid, struct scan_control *sc)
+static bool can_demote(int nid, struct scan_control *sc,
+ struct mem_cgroup *memcg)
{
+ int demotion_nid;
+
if (!numa_demotion_enabled)
return false;
if (sc && sc->no_demotion)
return false;
- if (next_demotion_node(nid) == NUMA_NO_NODE)
+
+ demotion_nid = next_demotion_node(nid);
+ if (demotion_nid == NUMA_NO_NODE)
return false;
- return true;
+ /* If demotion node isn't in the cgroup's mems_allowed, fall back */
+ return mem_cgroup_node_allowed(memcg, demotion_nid);
}
static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
@@ -376,7 +382,7 @@ static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
*
* Can it be reclaimed from this node via demotion?
*/
- return can_demote(nid, sc);
+ return can_demote(nid, sc, memcg);
}
/*
@@ -1096,7 +1102,8 @@ static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
*/
static unsigned int shrink_folio_list(struct list_head *folio_list,
struct pglist_data *pgdat, struct scan_control *sc,
- struct reclaim_stat *stat, bool ignore_references)
+ struct reclaim_stat *stat, bool ignore_references,
+ struct mem_cgroup *memcg)
{
struct folio_batch free_folios;
LIST_HEAD(ret_folios);
@@ -1109,7 +1116,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
folio_batch_init(&free_folios);
memset(stat, 0, sizeof(*stat));
cond_resched();
- do_demote_pass = can_demote(pgdat->node_id, sc);
+ do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
retry:
while (!list_empty(folio_list)) {
@@ -1658,7 +1665,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
*/
noreclaim_flag = memalloc_noreclaim_save();
nr_reclaimed = shrink_folio_list(&clean_folios, zone->zone_pgdat, &sc,
- &stat, true);
+ &stat, true, NULL);
memalloc_noreclaim_restore(noreclaim_flag);
list_splice(&clean_folios, folio_list);
@@ -2031,7 +2038,8 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
if (nr_taken == 0)
return 0;
- nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
+ nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false,
+ lruvec_memcg(lruvec));
spin_lock_irq(&lruvec->lru_lock);
move_folios_to_lru(lruvec, &folio_list);
@@ -2214,7 +2222,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
.no_demotion = 1,
};
- nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true);
+ nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true, NULL);
while (!list_empty(folio_list)) {
folio = lru_to_folio(folio_list);
list_del(&folio->lru);
@@ -2646,7 +2654,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
* Anonymous LRU management is a waste if there is
* ultimately no way to reclaim the memory.
*/
-static bool can_age_anon_pages(struct pglist_data *pgdat,
+static bool can_age_anon_pages(struct lruvec *lruvec,
struct scan_control *sc)
{
/* Aging the anon LRU is valuable if swap is present: */
@@ -2654,7 +2662,8 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
return true;
/* Also valuable if anon pages can be demoted: */
- return can_demote(pgdat->node_id, sc);
+ return can_demote(lruvec_pgdat(lruvec)->node_id, sc,
+ lruvec_memcg(lruvec));
}
#ifdef CONFIG_LRU_GEN
@@ -2732,7 +2741,7 @@ static int get_swappiness(struct lruvec *lruvec, struct scan_control *sc)
if (!sc->may_swap)
return 0;
- if (!can_demote(pgdat->node_id, sc) &&
+ if (!can_demote(pgdat->node_id, sc, memcg) &&
mem_cgroup_get_nr_swap_pages(memcg) < MIN_LRU_BATCH)
return 0;
@@ -4695,7 +4704,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
if (list_empty(&list))
return scanned;
retry:
- reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
+ reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false, memcg);
sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
sc->nr_reclaimed += reclaimed;
trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
@@ -5850,7 +5859,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
* Even if we did not try to evict anon pages at all, we want to
* rebalance the anon lru active/inactive ratio.
*/
- if (can_age_anon_pages(lruvec_pgdat(lruvec), sc) &&
+ if (can_age_anon_pages(lruvec, sc) &&
inactive_is_low(lruvec, LRU_INACTIVE_ANON))
shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
sc, LRU_ACTIVE_ANON);
@@ -6681,10 +6690,10 @@ static void kswapd_age_node(struct pglist_data *pgdat, struct scan_control *sc)
return;
}
- if (!can_age_anon_pages(pgdat, sc))
+ lruvec = mem_cgroup_lruvec(NULL, pgdat);
+ if (!can_age_anon_pages(lruvec, sc))
return;
- lruvec = mem_cgroup_lruvec(NULL, pgdat);
if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON))
return;
--
2.49.0
* Re: [PATCH v3 0/2] vmscan: enforce mems_effective during demotion
2025-04-19 5:38 [PATCH v3 0/2] vmscan: enforce mems_effective during demotion Gregory Price
2025-04-19 5:38 ` [PATCH v3 1/2] cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed Gregory Price
2025-04-19 5:38 ` [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim Gregory Price
@ 2025-04-19 17:21 ` Tejun Heo
2 siblings, 0 replies; 20+ messages in thread
From: Tejun Heo @ 2025-04-19 17:21 UTC (permalink / raw)
To: Gregory Price
Cc: linux-mm, cgroups, linux-kernel, kernel-team, longman, hannes,
mhocko, roman.gushchin, shakeel.butt, muchun.song, mkoutny, akpm
On Sat, Apr 19, 2025 at 01:38:22AM -0400, Gregory Price wrote:
> Change reclaim to respect cpuset.mems_effective during demotion when
> possible. Presently, reclaim explicitly ignores cpuset.mems_effective
> when demoting, which may cause the cpuset settings to be violated.
>
> Implement cpuset_node_allowed() to check the cpuset.mems_effective
> associated with the mem_cgroup of the lruvec being scanned.
>
> This requires renaming the existing cpuset_node_allowed() to be
> cpuset_current_node_allowed() - which is more descriptive anyway - to
> implement the new cpuset_node_allowed() which takes a target cgroup.
>
> v3:
> - remove cgroup indirection, call cpuset directly from memcontrol
> - put mem_cgroup_node_allowed in memcontrol.c to reduce cpuset.h
> include scope
> - return true if mems_effective is empty, and don't walk the parents
> as recommended by Waiman Long.
>
> Gregory Price (2):
> cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed
> vmscan,cgroup: apply mems_effective to reclaim
From cgroup POV:
Acked-by: Tejun Heo <tj@kernel.org>
Given that the operative changes are mostly in mm, it'd probably be best to
route through -mm, but please let me know if you wanna go through the cgroup
tree.
Thanks.
--
tejun
* Re: [PATCH v3 1/2] cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed
2025-04-19 5:38 ` [PATCH v3 1/2] cpuset: rename cpuset_node_allowed to cpuset_current_node_allowed Gregory Price
@ 2025-04-19 18:37 ` Shakeel Butt
0 siblings, 0 replies; 20+ messages in thread
From: Shakeel Butt @ 2025-04-19 18:37 UTC (permalink / raw)
To: Gregory Price
Cc: linux-mm, cgroups, linux-kernel, kernel-team, longman, hannes,
mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Sat, Apr 19, 2025 at 01:38:23AM -0400, Gregory Price wrote:
> Rename cpuset_node_allowed to reflect that the function checks the
> current task's cpuset.mems. This allows us to make a new
> cpuset_node_allowed function that checks a target cgroup's cpuset.mems.
>
> Acked-by: Waiman Long <longman@redhat.com>
> Signed-off-by: Gregory Price <gourry@gourry.net>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-19 5:38 ` [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim Gregory Price
@ 2025-04-19 18:48 ` Shakeel Butt
2025-04-20 0:14 ` Waiman Long
2025-04-20 0:31 ` Waiman Long
2025-04-21 23:37 ` Shakeel Butt
2 siblings, 1 reply; 20+ messages in thread
From: Shakeel Butt @ 2025-04-19 18:48 UTC (permalink / raw)
To: Gregory Price
Cc: linux-mm, cgroups, linux-kernel, kernel-team, longman, hannes,
mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
>
> +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> +{
> + struct cgroup_subsys_state *css;
> + unsigned long flags;
> + struct cpuset *cs;
> + bool allowed;
> +
> + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
> + if (!css)
> + return true;
> +
> + cs = container_of(css, struct cpuset, css);
> + spin_lock_irqsave(&callback_lock, flags);
Do we really need callback_lock here? We are not modifying and I am
wondering if simple rcu read lock is enough here (similar to
update_nodemasks_hier() where parent's effective_mems is accessed within
rcu read lock).
> + /* On v1 effective_mems may be empty, simply allow */
> + allowed = node_isset(nid, cs->effective_mems) ||
> + nodes_empty(cs->effective_mems);
> + spin_unlock_irqrestore(&callback_lock, flags);
> + css_put(css);
> + return allowed;
> +}
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-19 18:48 ` Shakeel Butt
@ 2025-04-20 0:14 ` Waiman Long
2025-04-21 17:39 ` Shakeel Butt
0 siblings, 1 reply; 20+ messages in thread
From: Waiman Long @ 2025-04-20 0:14 UTC (permalink / raw)
To: Shakeel Butt, Gregory Price
Cc: linux-mm, cgroups, linux-kernel, kernel-team, hannes, mhocko,
roman.gushchin, muchun.song, tj, mkoutny, akpm
On 4/19/25 2:48 PM, Shakeel Butt wrote:
> On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
>>
>> +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
>> +{
>> + struct cgroup_subsys_state *css;
>> + unsigned long flags;
>> + struct cpuset *cs;
>> + bool allowed;
>> +
>> + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
>> + if (!css)
>> + return true;
>> +
>> + cs = container_of(css, struct cpuset, css);
>> + spin_lock_irqsave(&callback_lock, flags);
> Do we really need callback_lock here? We are not modifying and I am
> wondering if simple rcu read lock is enough here (similar to
> update_nodemasks_hier() where parent's effective_mems is accessed within
> rcu read lock).
The callback_lock is required to ensure the stability of the
effective_mems which may be in the process of being changed if not taken.
Cheers,
Longman
>
>> + /* On v1 effective_mems may be empty, simply allow */
>> + allowed = node_isset(nid, cs->effective_mems) ||
>> + nodes_empty(cs->effective_mems);
>> + spin_unlock_irqrestore(&callback_lock, flags);
>> + css_put(css);
>> + return allowed;
>> +}
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-19 5:38 ` [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim Gregory Price
2025-04-19 18:48 ` Shakeel Butt
@ 2025-04-20 0:31 ` Waiman Long
2025-04-20 23:59 ` Gregory Price
2025-04-21 23:37 ` Shakeel Butt
2 siblings, 1 reply; 20+ messages in thread
From: Waiman Long @ 2025-04-20 0:31 UTC (permalink / raw)
To: Gregory Price, linux-mm
Cc: cgroups, linux-kernel, kernel-team, hannes, mhocko,
roman.gushchin, shakeel.butt, muchun.song, tj, mkoutny, akpm
On 4/19/25 1:38 AM, Gregory Price wrote:
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 893a4c340d48..c64b4a174456 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -171,6 +171,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
> task_unlock(current);
> }
>
> +extern bool cpuset_node_allowed(struct cgroup *cgroup, int nid);
> #else /* !CONFIG_CPUSETS */
>
> static inline bool cpusets_enabled(void) { return false; }
> @@ -282,6 +283,10 @@ static inline bool read_mems_allowed_retry(unsigned int seq)
> return false;
> }
>
> +static inline bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> +{
> + return false;
> +}
> #endif /* !CONFIG_CPUSETS */
I suppose we should return true in the !CONFIG_CPUSETS case.
Other than that, the patch looks good to me.
Cheers,
Longman
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-20 0:31 ` Waiman Long
@ 2025-04-20 23:59 ` Gregory Price
0 siblings, 0 replies; 20+ messages in thread
From: Gregory Price @ 2025-04-20 23:59 UTC (permalink / raw)
To: Waiman Long
Cc: linux-mm, cgroups, linux-kernel, kernel-team, hannes, mhocko,
roman.gushchin, shakeel.butt, muchun.song, tj, mkoutny, akpm
On Sat, Apr 19, 2025 at 08:31:41PM -0400, Waiman Long wrote:
> On 4/19/25 1:38 AM, Gregory Price wrote:
> > diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> > index 893a4c340d48..c64b4a174456 100644
> > --- a/include/linux/cpuset.h
> > +++ b/include/linux/cpuset.h
> > @@ -171,6 +171,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
> > task_unlock(current);
> > }
> > +extern bool cpuset_node_allowed(struct cgroup *cgroup, int nid);
> > #else /* !CONFIG_CPUSETS */
> > static inline bool cpusets_enabled(void) { return false; }
> > @@ -282,6 +283,10 @@ static inline bool read_mems_allowed_retry(unsigned int seq)
> > return false;
> > }
> > +static inline bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> > +{
> > + return false;
> > +}
> > #endif /* !CONFIG_CPUSETS */
>
> I suppose we should return true in the !CONFIG_CPUSETS case.
>
> Other than that, the patch looks good to me.
>
Woop, quite right, thanks.
I'll v4 and hopefully get some -mm feedback
~Gregory
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-20 0:14 ` Waiman Long
@ 2025-04-21 17:39 ` Shakeel Butt
2025-04-21 22:59 ` Gregory Price
0 siblings, 1 reply; 20+ messages in thread
From: Shakeel Butt @ 2025-04-21 17:39 UTC (permalink / raw)
To: Waiman Long
Cc: Gregory Price, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
>
> On 4/19/25 2:48 PM, Shakeel Butt wrote:
> > On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
> > > +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> > > +{
> > > + struct cgroup_subsys_state *css;
> > > + unsigned long flags;
> > > + struct cpuset *cs;
> > > + bool allowed;
> > > +
> > > + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
> > > + if (!css)
> > > + return true;
> > > +
> > > + cs = container_of(css, struct cpuset, css);
> > > + spin_lock_irqsave(&callback_lock, flags);
> > Do we really need callback_lock here? We are not modifying and I am
> > wondering if simple rcu read lock is enough here (similar to
> > update_nodemasks_hier() where parent's effective_mems is accessed within
> > rcu read lock).
>
> The callback_lock is required to ensure the stability of the effective_mems
> which may be in the process of being changed if not taken.
Stability in what sense? effective_mems will not get freed under us
here or is there a chance for corrupted read here? node_isset() and
nodes_empty() seems atomic. What's the worst that can happen without
callback_lock?
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-21 17:39 ` Shakeel Butt
@ 2025-04-21 22:59 ` Gregory Price
2025-04-21 23:15 ` Shakeel Butt
0 siblings, 1 reply; 20+ messages in thread
From: Gregory Price @ 2025-04-21 22:59 UTC (permalink / raw)
To: Shakeel Butt
Cc: Waiman Long, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
> On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
> >
> > On 4/19/25 2:48 PM, Shakeel Butt wrote:
> > > On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
> > > > +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> > > > +{
> > > > + struct cgroup_subsys_state *css;
> > > > + unsigned long flags;
> > > > + struct cpuset *cs;
> > > > + bool allowed;
> > > > +
> > > > + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
> > > > + if (!css)
> > > > + return true;
> > > > +
> > > > + cs = container_of(css, struct cpuset, css);
> > > > + spin_lock_irqsave(&callback_lock, flags);
> > > Do we really need callback_lock here? We are not modifying and I am
> > > wondering if simple rcu read lock is enough here (similar to
> > > update_nodemasks_hier() where parent's effective_mems is accessed within
> > > rcu read lock).
> >
> > The callback_lock is required to ensure the stability of the effective_mems
> > which may be in the process of being changed if not taken.
>
> Stability in what sense? effective_mems will not get freed under us
> here or is there a chance for corrupted read here? node_isset() and
> nodes_empty() seems atomic. What's the worst that can happen without
> callback_lock?
Fairly sure nodes_empty is not atomic, it's a bitmap search.
~Gregory
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-21 22:59 ` Gregory Price
@ 2025-04-21 23:15 ` Shakeel Butt
2025-04-21 23:58 ` Gregory Price
2025-04-22 0:10 ` Waiman Long
0 siblings, 2 replies; 20+ messages in thread
From: Shakeel Butt @ 2025-04-21 23:15 UTC (permalink / raw)
To: Gregory Price
Cc: Waiman Long, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Mon, Apr 21, 2025 at 06:59:20PM -0400, Gregory Price wrote:
> On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
> > On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
> > >
> > > On 4/19/25 2:48 PM, Shakeel Butt wrote:
> > > > On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
> > > > > +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> > > > > +{
> > > > > + struct cgroup_subsys_state *css;
> > > > > + unsigned long flags;
> > > > > + struct cpuset *cs;
> > > > > + bool allowed;
> > > > > +
> > > > > + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
> > > > > + if (!css)
> > > > > + return true;
> > > > > +
> > > > > + cs = container_of(css, struct cpuset, css);
> > > > > + spin_lock_irqsave(&callback_lock, flags);
> > > > Do we really need callback_lock here? We are not modifying and I am
> > > > wondering if simple rcu read lock is enough here (similar to
> > > > update_nodemasks_hier() where parent's effective_mems is accessed within
> > > > rcu read lock).
> > >
> > > The callback_lock is required to ensure the stability of the effective_mems
> > > which may be in the process of being changed if not taken.
> >
> > Stability in what sense? effective_mems will not get freed under us
> > here or is there a chance for corrupted read here? node_isset() and
> > nodes_empty() seems atomic. What's the worst that can happen without
> > callback_lock?
>
> Fairly sure nodes_empty is not atomic, it's a bitmap search.
For bitmaps smaller than 64 bits, it seems atomic and MAX_NUMNODES seems
smaller than 64 in all the archs.
Anyways I am hoping that we can avoid taking a global lock in reclaim
path which will become a source of contention for memory pressure
situations.
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-19 5:38 ` [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim Gregory Price
2025-04-19 18:48 ` Shakeel Butt
2025-04-20 0:31 ` Waiman Long
@ 2025-04-21 23:37 ` Shakeel Butt
2 siblings, 0 replies; 20+ messages in thread
From: Shakeel Butt @ 2025-04-21 23:37 UTC (permalink / raw)
To: Gregory Price
Cc: linux-mm, cgroups, linux-kernel, kernel-team, longman, hannes,
mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
> static unsigned int shrink_folio_list(struct list_head *folio_list,
> struct pglist_data *pgdat, struct scan_control *sc,
> - struct reclaim_stat *stat, bool ignore_references)
> + struct reclaim_stat *stat, bool ignore_references,
> + struct mem_cgroup *memcg)
> {
> struct folio_batch free_folios;
> LIST_HEAD(ret_folios);
> @@ -1109,7 +1116,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> folio_batch_init(&free_folios);
> memset(stat, 0, sizeof(*stat));
> cond_resched();
> - do_demote_pass = can_demote(pgdat->node_id, sc);
> + do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
For code paths where we might have folios of different memcgs in
folio_list, should we look at individual folios for their memcgs or is
it ok to ignore such cases? If we are ignoring then let's add a comment
about that.
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-21 23:15 ` Shakeel Butt
@ 2025-04-21 23:58 ` Gregory Price
2025-04-22 0:10 ` Shakeel Butt
2025-04-22 0:35 ` Waiman Long
2025-04-22 0:10 ` Waiman Long
1 sibling, 2 replies; 20+ messages in thread
From: Gregory Price @ 2025-04-21 23:58 UTC (permalink / raw)
To: Shakeel Butt
Cc: Waiman Long, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Mon, Apr 21, 2025 at 04:15:49PM -0700, Shakeel Butt wrote:
> On Mon, Apr 21, 2025 at 06:59:20PM -0400, Gregory Price wrote:
> > On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
> > > On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
> > > >
> > > > On 4/19/25 2:48 PM, Shakeel Butt wrote:
> > > > > On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
> > > > > > +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> > > > > > +{
> > > > > > + struct cgroup_subsys_state *css;
> > > > > > + unsigned long flags;
> > > > > > + struct cpuset *cs;
> > > > > > + bool allowed;
> > > > > > +
> > > > > > + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
> > > > > > + if (!css)
> > > > > > + return true;
> > > > > > +
> > > > > > + cs = container_of(css, struct cpuset, css);
> > > > > > + spin_lock_irqsave(&callback_lock, flags);
> > > > > Do we really need callback_lock here? We are not modifying and I am
> > > > > wondering if simple rcu read lock is enough here (similar to
> > > > > update_nodemasks_hier() where parent's effective_mems is accessed within
> > > > > rcu read lock).
> > > >
> > > > The callback_lock is required to ensure the stability of the effective_mems
> > > > which may be in the process of being changed if not taken.
> > >
> > > Stability in what sense? effective_mems will not get freed under us
> > > here or is there a chance for corrupted read here? node_isset() and
> > > nodes_empty() seems atomic. What's the worst that can happen without
> > > callback_lock?
> >
> > Fairly sure nodes_empty is not atomic, it's a bitmap search.
>
> For bitmaps smaller than 64 bits, it seems atomic and MAX_NUMNODES seems
> smaller than 64 in all the archs.
Unfortunately, it's config-defined on (NODES_SHIFT) and the max is 1024.
Is there an argument here for ignoring v1 and just doing the bit-check
without the lock? Is there an easy ifdef way for us to just return true
if it's v1?
~Gregory
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-21 23:15 ` Shakeel Butt
2025-04-21 23:58 ` Gregory Price
@ 2025-04-22 0:10 ` Waiman Long
2025-04-22 0:16 ` Shakeel Butt
1 sibling, 1 reply; 20+ messages in thread
From: Waiman Long @ 2025-04-22 0:10 UTC (permalink / raw)
To: Shakeel Butt, Gregory Price
Cc: Waiman Long, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On 4/21/25 7:15 PM, Shakeel Butt wrote:
> On Mon, Apr 21, 2025 at 06:59:20PM -0400, Gregory Price wrote:
>> On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
>>> On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
>>>> On 4/19/25 2:48 PM, Shakeel Butt wrote:
>>>>> On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
>>>>>> +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
>>>>>> +{
>>>>>> + struct cgroup_subsys_state *css;
>>>>>> + unsigned long flags;
>>>>>> + struct cpuset *cs;
>>>>>> + bool allowed;
>>>>>> +
>>>>>> + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
>>>>>> + if (!css)
>>>>>> + return true;
>>>>>> +
>>>>>> + cs = container_of(css, struct cpuset, css);
>>>>>> + spin_lock_irqsave(&callback_lock, flags);
>>>>> Do we really need callback_lock here? We are not modifying and I am
>>>>> wondering if simple rcu read lock is enough here (similar to
>>>>> update_nodemasks_hier() where parent's effective_mems is accessed within
>>>>> rcu read lock).
>>>> The callback_lock is required to ensure the stability of the effective_mems
>>>> which may be in the process of being changed if not taken.
>>> Stability in what sense? effective_mems will not get freed under us
>>> here or is there a chance for corrupted read here? node_isset() and
>>> nodes_empty() seems atomic. What's the worst that can happen without
>>> callback_lock?
>> Fairly sure nodes_empty is not atomic, it's a bitmap search.
> For bitmaps smaller than 64 bits, it seems atomic and MAX_NUMNODES seems
> smaller than 64 in all the archs.
RHEL sets MAX_NUMNODES to 1024 for x86_64. So it is not really atomic
for some distros. In reality, it is rare to have a system with more than
64 nodes (nr_node_ids <= 64). So it can be considered atomic in most cases.
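For reference, nodes_empty() bottoms out in bitmap_empty(), which is a
single-word read only for small constant sizes (paraphrased from
include/linux/bitmap.h):

static inline bool bitmap_empty(const unsigned long *src, unsigned int nbits)
{
        if (small_const_nbits(nbits))
                /* one load: effectively atomic */
                return !(*src & BITMAP_LAST_WORD_MASK(nbits));

        /* multi-word scan: not a single atomic read */
        return find_first_bit(src, nbits) == nbits;
}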
>
> Anyways I am hoping that we can avoid taking a global lock in reclaim
> path which will become a source of contention for memory pressure
> situations.
It is a valid concern. I will not oppose checking effective_mems
without taking the callback_lock, but we will have to take rcu_read_lock
to make sure that the cpuset structure won't go away, and clearly
document that this is an exceptional case, as it is against our usual
rule and the check may be incorrect in some rare cases.
Cheers,
Longman
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-21 23:58 ` Gregory Price
@ 2025-04-22 0:10 ` Shakeel Butt
2025-04-22 0:39 ` Waiman Long
2025-04-22 0:35 ` Waiman Long
1 sibling, 1 reply; 20+ messages in thread
From: Shakeel Butt @ 2025-04-22 0:10 UTC (permalink / raw)
To: Gregory Price
Cc: Waiman Long, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Mon, Apr 21, 2025 at 07:58:44PM -0400, Gregory Price wrote:
> On Mon, Apr 21, 2025 at 04:15:49PM -0700, Shakeel Butt wrote:
> > On Mon, Apr 21, 2025 at 06:59:20PM -0400, Gregory Price wrote:
> > > On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
> > > > On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
> > > > >
> > > > > On 4/19/25 2:48 PM, Shakeel Butt wrote:
> > > > > > On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
> > > > > > > +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> > > > > > > +{
> > > > > > > + struct cgroup_subsys_state *css;
> > > > > > > + unsigned long flags;
> > > > > > > + struct cpuset *cs;
> > > > > > > + bool allowed;
> > > > > > > +
> > > > > > > + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
> > > > > > > + if (!css)
> > > > > > > + return true;
> > > > > > > +
> > > > > > > + cs = container_of(css, struct cpuset, css);
> > > > > > > + spin_lock_irqsave(&callback_lock, flags);
> > > > > > Do we really need callback_lock here? We are not modifying and I am
> > > > > > wondering if simple rcu read lock is enough here (similar to
> > > > > > update_nodemasks_hier() where parent's effective_mems is accessed within
> > > > > > rcu read lock).
> > > > >
> > > > > The callback_lock is required to ensure the stability of the effective_mems
> > > > > which may be in the process of being changed if not taken.
> > > >
> > > > Stability in what sense? effective_mems will not get freed under us
> > > > here or is there a chance for corrupted read here? node_isset() and
> > > > nodes_empty() seems atomic. What's the worst that can happen without
> > > > callback_lock?
> > >
> > > Fairly sure nodes_empty is not atomic, it's a bitmap search.
> >
> > For bitmaps smaller than 64 bits, it seems atomic and MAX_NUMNODES seems
> > smaller than 64 in all the archs.
>
> Unfortunately, it's config-defined on (NODES_SHIFT) and the max is 1024.
>
> Is there an argument here for ignoring v1 and just doing the bit-check
> without the lock? Is there an easy ifdef way for us to just return true
> if it's v1?
>
It is !(cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) and I see cpuset_v2()
and is_in_v2_mode() in kernel/cgroup/cpuset.c.
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-22 0:10 ` Waiman Long
@ 2025-04-22 0:16 ` Shakeel Butt
0 siblings, 0 replies; 20+ messages in thread
From: Shakeel Butt @ 2025-04-22 0:16 UTC (permalink / raw)
To: Waiman Long
Cc: Gregory Price, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Mon, Apr 21, 2025 at 08:10:41PM -0400, Waiman Long wrote:
> On 4/21/25 7:15 PM, Shakeel Butt wrote:
> > On Mon, Apr 21, 2025 at 06:59:20PM -0400, Gregory Price wrote:
> > > On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
> > > > On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
> > > > > On 4/19/25 2:48 PM, Shakeel Butt wrote:
> > > > > > On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
> > > > > > > +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
> > > > > > > +{
> > > > > > > + struct cgroup_subsys_state *css;
> > > > > > > + unsigned long flags;
> > > > > > > + struct cpuset *cs;
> > > > > > > + bool allowed;
> > > > > > > +
> > > > > > > + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
> > > > > > > + if (!css)
> > > > > > > + return true;
> > > > > > > +
> > > > > > > + cs = container_of(css, struct cpuset, css);
> > > > > > > + spin_lock_irqsave(&callback_lock, flags);
> > > > > > Do we really need callback_lock here? We are not modifying and I am
> > > > > > wondering if simple rcu read lock is enough here (similar to
> > > > > > update_nodemasks_hier() where parent's effective_mems is accessed within
> > > > > > rcu read lock).
> > > > > The callback_lock is required to ensure the stability of the effective_mems
> > > > > which may be in the process of being changed if not taken.
> > > > Stability in what sense? effective_mems will not get freed under us
> > > > here or is there a chance for corrupted read here? node_isset() and
> > > > nodes_empty() seems atomic. What's the worst that can happen without
> > > > callback_lock?
> > > Fairly sure nodes_empty is not atomic, it's a bitmap search.
> > For bitmaps smaller than 64 bits, it seems atomic and MAX_NUMNODES seems
> > smaller than 64 in all the archs.
>
> RHEL sets MAX_NUMNODES to 1024 for x86_64. So it is not really atomic for
> some distros. In reality, it is rare to have a system with more than 64
> nodes (nr_node_ids <= 64). So it can be considered atomic in most cases.
Thanks for the explanation.
>
>
> >
> > Anyways I am hoping that we can avoid taking a global lock in reclaim
> > path which will become a source of contention for memory pressure
> > situations.
>
> It is a valid concern. I will not oppose checking effective_mems without
> taking the callback_lock, but we will have to take rcu_read_lock to make
> sure that the cpuset structure won't go away, and clearly document that this
> is an exceptional case, as it is against our usual rule and the check may be
> incorrect in some rare cases.
Oh this function is doing cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys)
i.e. taking a reference on cpuset, so with rcu_read_lock, we can avoid
that as well.
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-21 23:58 ` Gregory Price
2025-04-22 0:10 ` Shakeel Butt
@ 2025-04-22 0:35 ` Waiman Long
2025-04-22 1:00 ` Gregory Price
1 sibling, 1 reply; 20+ messages in thread
From: Waiman Long @ 2025-04-22 0:35 UTC (permalink / raw)
To: Gregory Price, Shakeel Butt
Cc: Waiman Long, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On 4/21/25 7:58 PM, Gregory Price wrote:
> On Mon, Apr 21, 2025 at 04:15:49PM -0700, Shakeel Butt wrote:
>> On Mon, Apr 21, 2025 at 06:59:20PM -0400, Gregory Price wrote:
>>> On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
>>>> On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
>>>>> On 4/19/25 2:48 PM, Shakeel Butt wrote:
>>>>>> On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
>>>>>>> +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
>>>>>>> +{
>>>>>>> + struct cgroup_subsys_state *css;
>>>>>>> + unsigned long flags;
>>>>>>> + struct cpuset *cs;
>>>>>>> + bool allowed;
>>>>>>> +
>>>>>>> + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
>>>>>>> + if (!css)
>>>>>>> + return true;
>>>>>>> +
>>>>>>> + cs = container_of(css, struct cpuset, css);
>>>>>>> + spin_lock_irqsave(&callback_lock, flags);
>>>>>> Do we really need callback_lock here? We are not modifying and I am
>>>>>> wondering if simple rcu read lock is enough here (similar to
>>>>>> update_nodemasks_hier() where parent's effective_mems is accessed within
>>>>>> rcu read lock).
>>>>> The callback_lock is required to ensure the stability of the effective_mems
>>>>> which may be in the process of being changed if not taken.
>>>> Stability in what sense? effective_mems will not get freed under us
>>>> here or is there a chance for corrupted read here? node_isset() and
>>>> nodes_empty() seems atomic. What's the worst that can happen without
>>>> callback_lock?
>>> Fairly sure nodes_empty is not atomic, it's a bitmap search.
>> For bitmaps smaller than 64 bits, it seems atomic and MAX_NUMNODES seems
>> smaller than 64 in all the archs.
> Unfortunately, it's config-defined on (NODES_SHIFT) and the max is 1024.
Actually, it is nr_node_ids that controls how many reads are needed to
scan the complete node mask, not CONFIG_NODES_SHIFT.
>
> Is there an argument here for ignoring v1 and just doing the bit-check
> without the lock? Is there an easy ifdef way for us to just return true
> if it's v1?
Your current patch is ignoring v1 as css will be NULL. It only works for
v2 with a unified hierarchy unless some users explicitly force cpuset
and memcg v1 to be in the same hierarchy. You can certainly ignore v1 by
using cpuset_v2() check.
Cheers,
Longman
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-22 0:10 ` Shakeel Butt
@ 2025-04-22 0:39 ` Waiman Long
0 siblings, 0 replies; 20+ messages in thread
From: Waiman Long @ 2025-04-22 0:39 UTC (permalink / raw)
To: Shakeel Butt, Gregory Price
Cc: Waiman Long, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On 4/21/25 8:10 PM, Shakeel Butt wrote:
> On Mon, Apr 21, 2025 at 07:58:44PM -0400, Gregory Price wrote:
>> On Mon, Apr 21, 2025 at 04:15:49PM -0700, Shakeel Butt wrote:
>>> On Mon, Apr 21, 2025 at 06:59:20PM -0400, Gregory Price wrote:
>>>> On Mon, Apr 21, 2025 at 10:39:58AM -0700, Shakeel Butt wrote:
>>>>> On Sat, Apr 19, 2025 at 08:14:29PM -0400, Waiman Long wrote:
>>>>>> On 4/19/25 2:48 PM, Shakeel Butt wrote:
>>>>>>> On Sat, Apr 19, 2025 at 01:38:24AM -0400, Gregory Price wrote:
>>>>>>>> +bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
>>>>>>>> +{
>>>>>>>> + struct cgroup_subsys_state *css;
>>>>>>>> + unsigned long flags;
>>>>>>>> + struct cpuset *cs;
>>>>>>>> + bool allowed;
>>>>>>>> +
>>>>>>>> + css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
>>>>>>>> + if (!css)
>>>>>>>> + return true;
>>>>>>>> +
>>>>>>>> + cs = container_of(css, struct cpuset, css);
>>>>>>>> + spin_lock_irqsave(&callback_lock, flags);
>>>>>>> Do we really need callback_lock here? We are not modifying and I am
>>>>>>> wondering if simple rcu read lock is enough here (similar to
>>>>>>> update_nodemasks_hier() where parent's effective_mems is accessed within
>>>>>>> rcu read lock).
>>>>>> The callback_lock is required to ensure the stability of the effective_mems
>>>>>> which may be in the process of being changed if not taken.
>>>>> Stability in what sense? effective_mems will not get freed under us
>>>>> here or is there a chance for corrupted read here? node_isset() and
>>>>> nodes_empty() seems atomic. What's the worst that can happen without
>>>>> callback_lock?
>>>> Fairly sure nodes_empty is not atomic, it's a bitmap search.
>>> For bitmaps smaller than 64 bits, it seems atomic and MAX_NUMNODES seems
>>> smaller than 64 in all the archs.
>> Unfortunately, it's config-defined on (NODES_SHIFT) and the max is 1024.
>>
>> Is there an argument here for ignoring v1 and just doing the bit-check
>> without the lock? Is there an easy ifdef way for us to just return true
>> if it's v1?
>>
> It is !(cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) and I see cpuset_v2()
> and is_in_v2_mode() in kernel/cgroup/cpuset.c.
The is_in_v2_mode() function covers cpuset v2 and cpuset v1 with the
cpuset_v2_mode mount option, while cpuset_v2() will only be true for
cpuset v2, allowing the code to be compiled out when CPUSETS_V1 isn't set.
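So the new helper could bail out early, roughly (sketch of the v4
direction discussed above; on v2, effective_mems should never be empty,
so a plain lockless bit test would remain):

bool cpuset_node_allowed(struct cgroup *cgroup, int nid)
{
        struct cgroup_subsys_state *css;
        struct cpuset *cs;
        bool allowed;

        /*
         * Ignore cpuset v1; cpuset_v2() is constant true when
         * CONFIG_CPUSETS_V1 is not set, so this branch compiles out.
         */
        if (!cpuset_v2())
                return true;

        css = cgroup_get_e_css(cgroup, &cpuset_cgrp_subsys);
        if (!css)
                return true;

        cs = container_of(css, struct cpuset, css);
        allowed = node_isset(nid, cs->effective_mems);
        css_put(css);
        return allowed;
}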
Cheers,
Longman
>
* Re: [PATCH v3 2/2] vmscan,cgroup: apply mems_effective to reclaim
2025-04-22 0:35 ` Waiman Long
@ 2025-04-22 1:00 ` Gregory Price
0 siblings, 0 replies; 20+ messages in thread
From: Gregory Price @ 2025-04-22 1:00 UTC (permalink / raw)
To: Waiman Long
Cc: Shakeel Butt, linux-mm, cgroups, linux-kernel, kernel-team,
hannes, mhocko, roman.gushchin, muchun.song, tj, mkoutny, akpm
On Mon, Apr 21, 2025 at 08:35:15PM -0400, Waiman Long wrote:
>
> Your current patch is ignoring v1 as css will be NULL. It only works for v2
> with a unified hierarchy unless some users explicitly force cpuset and memcg
> v1 to be in the same hierarchy. You can certainly ignore v1 by using
> cpuset_v2() check.
>
I'll whip this up and get it out this evening.
~Gregory