linux-mm.kvack.org archive mirror
* [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds
@ 2023-09-21  8:10 Yosry Ahmed
  2023-09-21  8:10 ` [PATCH 1/5] mm: memcg: change flush_next_time to flush_last_time Yosry Ahmed
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21  8:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel, Yosry Ahmed

This series attempts to address shortcomings in the current approach to
memcg stats flushing, namely occasionally stale or expensive stat reads. The
series does so by changing the threshold that we use to decide whether
to trigger a flush to be per memcg instead of global (patch 3), and then
changing flushing to be per memcg (i.e. subtree flushes) instead of
global (patch 5).

Patches 3 & 5 are the core of the series, and they include more details
and testing results. The rest are either cleanups or prep work.

This series replaces the "memcg: more sophisticated stats flushing"
series [1], which itself replaced another series, in a long list of
attempts to improve memcg stats flushing. It is not a v2 because it
takes a completely different approach, based on feedback collected from
lkml discussions of all the previous attempts. Hopefully, this is the
final attempt.

[1]https://lore.kernel.org/lkml/20230913073846.1528938-1-yosryahmed@google.com/

Yosry Ahmed (5):
  mm: memcg: change flush_next_time to flush_last_time
  mm: memcg: move vmstats structs definition above flushing code
  mm: memcg: make stats flushing threshold per-memcg
  mm: workingset: move the stats flush into workingset_test_recent()
  mm: memcg: restore subtree stats flushing

 include/linux/memcontrol.h |   8 +-
 mm/memcontrol.c            | 269 +++++++++++++++++++++----------------
 mm/vmscan.c                |   2 +-
 mm/workingset.c            |  37 +++--
 4 files changed, 181 insertions(+), 135 deletions(-)

-- 
2.42.0.459.ge4e396fd5e-goog



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/5] mm: memcg: change flush_next_time to flush_last_time
  2023-09-21  8:10 [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
@ 2023-09-21  8:10 ` Yosry Ahmed
  2023-09-21  8:10 ` [PATCH 2/5] mm: memcg: move vmstats structs definition above flushing code Yosry Ahmed
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21  8:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel, Yosry Ahmed

flush_next_time is an inaccurate name. It is not the next time that
periodic flushing will happen; rather, it is the next time that
ratelimited flushing can happen if the periodic flusher is late.

Simplify its semantics by just storing the timestamp of the last flush
instead, flush_last_time. Move the 2*FLUSH_TIME addition to
mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
This way, all the ratelimiting semantics live in one place.
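
For a quick worked example of the resulting semantics (using the values
from the code below, where FLUSH_TIME is 2*HZ, i.e. 2s):

	periodic flush period:            FLUSH_TIME                            = 2s
	ratelimited flush fires only if:  now - flush_last_time > 2*FLUSH_TIME  = 4s

so a ratelimited flush only happens once the periodic flusher has missed
at least one full cycle.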

No functional change intended.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/memcontrol.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 927c64d3cbcb..49562dfdeab2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -590,7 +590,7 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
 static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
 static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
-static u64 flush_next_time;
+static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
 
@@ -650,7 +650,7 @@ static void do_flush_stats(void)
 	    atomic_xchg(&stats_flush_ongoing, 1))
 		return;
 
-	WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
+	WRITE_ONCE(flush_last_time, jiffies_64);
 
 	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
 
@@ -666,7 +666,8 @@ void mem_cgroup_flush_stats(void)
 
 void mem_cgroup_flush_stats_ratelimited(void)
 {
-	if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
+	/* Only flush if the periodic flusher is one full cycle late */
+	if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
 		mem_cgroup_flush_stats();
 }
 
-- 
2.42.0.459.ge4e396fd5e-goog



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 2/5] mm: memcg: move vmstats structs definition above flushing code
  2023-09-21  8:10 [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
  2023-09-21  8:10 ` [PATCH 1/5] mm: memcg: change flush_next_time to flush_last_time Yosry Ahmed
@ 2023-09-21  8:10 ` Yosry Ahmed
  2023-09-21  8:10 ` [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg Yosry Ahmed
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21  8:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel, Yosry Ahmed

A following patch will make use of these structs in the flushing code,
so move their definitions (and a few other dependencies) up a little to
reduce the diff noise in that patch.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/memcontrol.c | 146 ++++++++++++++++++++++++------------------------
 1 file changed, 73 insertions(+), 73 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 49562dfdeab2..ef7ad66a9e4c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -570,6 +570,79 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
 	return mz;
 }
 
+/* Subset of vm_event_item to report for memcg event stats */
+static const unsigned int memcg_vm_event_stat[] = {
+	PGPGIN,
+	PGPGOUT,
+	PGSCAN_KSWAPD,
+	PGSCAN_DIRECT,
+	PGSCAN_KHUGEPAGED,
+	PGSTEAL_KSWAPD,
+	PGSTEAL_DIRECT,
+	PGSTEAL_KHUGEPAGED,
+	PGFAULT,
+	PGMAJFAULT,
+	PGREFILL,
+	PGACTIVATE,
+	PGDEACTIVATE,
+	PGLAZYFREE,
+	PGLAZYFREED,
+#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+	ZSWPIN,
+	ZSWPOUT,
+#endif
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	THP_FAULT_ALLOC,
+	THP_COLLAPSE_ALLOC,
+	THP_SWPOUT,
+	THP_SWPOUT_FALLBACK,
+#endif
+};
+
+#define NR_MEMCG_EVENTS ARRAY_SIZE(memcg_vm_event_stat)
+static int mem_cgroup_events_index[NR_VM_EVENT_ITEMS] __read_mostly;
+
+static void init_memcg_events(void)
+{
+	int i;
+
+	for (i = 0; i < NR_MEMCG_EVENTS; ++i)
+		mem_cgroup_events_index[memcg_vm_event_stat[i]] = i + 1;
+}
+
+static inline int memcg_events_index(enum vm_event_item idx)
+{
+	return mem_cgroup_events_index[idx] - 1;
+}
+
+struct memcg_vmstats_percpu {
+	/* Local (CPU and cgroup) page state & events */
+	long			state[MEMCG_NR_STAT];
+	unsigned long		events[NR_MEMCG_EVENTS];
+
+	/* Delta calculation for lockless upward propagation */
+	long			state_prev[MEMCG_NR_STAT];
+	unsigned long		events_prev[NR_MEMCG_EVENTS];
+
+	/* Cgroup1: threshold notifications & softlimit tree updates */
+	unsigned long		nr_page_events;
+	unsigned long		targets[MEM_CGROUP_NTARGETS];
+};
+
+struct memcg_vmstats {
+	/* Aggregated (CPU and subtree) page state & events */
+	long			state[MEMCG_NR_STAT];
+	unsigned long		events[NR_MEMCG_EVENTS];
+
+	/* Non-hierarchical (CPU aggregated) page state & events */
+	long			state_local[MEMCG_NR_STAT];
+	unsigned long		events_local[NR_MEMCG_EVENTS];
+
+	/* Pending child counts during tree propagation */
+	long			state_pending[MEMCG_NR_STAT];
+	unsigned long		events_pending[NR_MEMCG_EVENTS];
+};
+
 /*
  * memcg and lruvec stats flushing
  *
@@ -681,79 +754,6 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
-/* Subset of vm_event_item to report for memcg event stats */
-static const unsigned int memcg_vm_event_stat[] = {
-	PGPGIN,
-	PGPGOUT,
-	PGSCAN_KSWAPD,
-	PGSCAN_DIRECT,
-	PGSCAN_KHUGEPAGED,
-	PGSTEAL_KSWAPD,
-	PGSTEAL_DIRECT,
-	PGSTEAL_KHUGEPAGED,
-	PGFAULT,
-	PGMAJFAULT,
-	PGREFILL,
-	PGACTIVATE,
-	PGDEACTIVATE,
-	PGLAZYFREE,
-	PGLAZYFREED,
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
-	ZSWPIN,
-	ZSWPOUT,
-#endif
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	THP_FAULT_ALLOC,
-	THP_COLLAPSE_ALLOC,
-	THP_SWPOUT,
-	THP_SWPOUT_FALLBACK,
-#endif
-};
-
-#define NR_MEMCG_EVENTS ARRAY_SIZE(memcg_vm_event_stat)
-static int mem_cgroup_events_index[NR_VM_EVENT_ITEMS] __read_mostly;
-
-static void init_memcg_events(void)
-{
-	int i;
-
-	for (i = 0; i < NR_MEMCG_EVENTS; ++i)
-		mem_cgroup_events_index[memcg_vm_event_stat[i]] = i + 1;
-}
-
-static inline int memcg_events_index(enum vm_event_item idx)
-{
-	return mem_cgroup_events_index[idx] - 1;
-}
-
-struct memcg_vmstats_percpu {
-	/* Local (CPU and cgroup) page state & events */
-	long			state[MEMCG_NR_STAT];
-	unsigned long		events[NR_MEMCG_EVENTS];
-
-	/* Delta calculation for lockless upward propagation */
-	long			state_prev[MEMCG_NR_STAT];
-	unsigned long		events_prev[NR_MEMCG_EVENTS];
-
-	/* Cgroup1: threshold notifications & softlimit tree updates */
-	unsigned long		nr_page_events;
-	unsigned long		targets[MEM_CGROUP_NTARGETS];
-};
-
-struct memcg_vmstats {
-	/* Aggregated (CPU and subtree) page state & events */
-	long			state[MEMCG_NR_STAT];
-	unsigned long		events[NR_MEMCG_EVENTS];
-
-	/* Non-hierarchical (CPU aggregated) page state & events */
-	long			state_local[MEMCG_NR_STAT];
-	unsigned long		events_local[NR_MEMCG_EVENTS];
-
-	/* Pending child counts during tree propagation */
-	long			state_pending[MEMCG_NR_STAT];
-	unsigned long		events_pending[NR_MEMCG_EVENTS];
-};
-
 unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
 {
 	long x = READ_ONCE(memcg->vmstats->state[idx]);
-- 
2.42.0.459.ge4e396fd5e-goog



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg
  2023-09-21  8:10 [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
  2023-09-21  8:10 ` [PATCH 1/5] mm: memcg: change flush_next_time to flush_last_time Yosry Ahmed
  2023-09-21  8:10 ` [PATCH 2/5] mm: memcg: move vmstats structs definition above flushing code Yosry Ahmed
@ 2023-09-21  8:10 ` Yosry Ahmed
  2023-09-21  8:13   ` Yosry Ahmed
  2023-09-29 19:57   ` Yosry Ahmed
  2023-09-21  8:10 ` [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent() Yosry Ahmed
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21  8:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel, Yosry Ahmed

A global counter for the magnitude of memcg stats update is maintained
on the memcg side to avoid invoking rstat flushes when the pending
updates are not significant. This avoids unnecessary flushes, which are
not very cheap even if there isn't a lot of stats to flush. It also
avoids unnecessary lock contention on the underlying global rstat lock.

Make this threshold per-memcg. The same scheme is used where percpu (now
also per-memcg) counters are incremented in the update path, and only
propagated to per-memcg atomics when they exceed a certain threshold.
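
In simplified C, the update path then becomes roughly the following
sketch (mirroring the memcg_rstat_updated() hunk in the diff below, with
the cgroup_rstat_updated() call and the flush-ability check omitted for
brevity):

	static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
	{
		unsigned int x;

		/* Walk up the hierarchy; each level has its own percpu counter. */
		for (; memcg; memcg = parent_mem_cgroup(memcg)) {
			x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
						  abs(val));
			if (x < MEMCG_CHARGE_BATCH)
				continue;

			/* Fold the percpu delta into the per-memcg atomic ... */
			atomic64_add(x, &memcg->vmstats->stats_updates);
			/* ... and restart the percpu accumulation. */
			__this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
		}
	}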

This provides two benefits:
(a) On large machines with a lot of memcgs, the global threshold can be
reached relatively fast, so guarding the underlying lock becomes less
effective. Making the threshold per-memcg avoids this.

(b) Having a global threshold makes it hard to do subtree flushes, as we
cannot reset the global counter except for a full flush. Per-memcg
counters removes this as a blocker from doing subtree flushes, which
helps avoid unnecessary work when the stats of a small subtree are
needed.

Nothing is free, of course. This comes at a cost:
(a) A new per-cpu counter per memcg, consuming NR_CPUS * NR_MEMCGS * 4
bytes.

(b) More work on the update side, although in the common case it will
only be percpu counter updates. The amount of work scales with the
number of ancestors (i.e. tree depth). This is not a new concept; adding
a cgroup to the rstat tree involves a parent loop, and so does charging.
Testing in a later patch shows this doesn't introduce significant
regressions.

(c) The error margin in the stats for the system as a whole increases
from NR_CPUS * MEMCG_CHARGE_BATCH to NR_CPUS * MEMCG_CHARGE_BATCH *
NR_MEMCGS. This is probably fine because we have a similar per-memcg
error in charges coming from percpu stocks, and we have a periodic
flusher that makes sure we always flush all the stats every 2s anyway.
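
To put rough numbers on (c) (an illustrative calculation only; it
assumes MEMCG_CHARGE_BATCH is 64 and treats every update as one 4KiB
page, which is approximate since some counters are tracked in bytes):

	per-cpu, per-memcg threshold: MEMCG_CHARGE_BATCH         = 64 updates
	old system-wide error bound:  64 * 384 cpus             ~= 24.5k updates (~96 MiB)
	new system-wide error bound:  64 * 384 cpus * NR_MEMCGS

i.e. the worst-case error now scales with the number of memcgs, but the
periodic flusher bounds how long it can persist.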

This patch was tested to make sure no significant regressions are
introduced on the update path as follows. In a cgroup that is 4 levels
deep (/sys/fs/cgroup/a/b/c/d), the following benchmarks were run:

(a) neper [1] with 1000 flows and 100 threads (single machine). The
values in the table are the average of server and client throughputs in
mbps after 30 iterations, each running for 30s:

				tcp_rr		tcp_stream
Base				9504218.56	357366.84
Patched				9656205.68	356978.39
Delta				+1.6%		-0.1%
Standard Deviation		0.95%		1.03%

An increase in the performance of tcp_rr doesn't really make sense, but
it's probably in the noise. The same tests were run with 1 flow and 1
thread, but the throughput was too noisy to draw any conclusions (the
averages did not show regressions nonetheless).

Looking at perf for one iteration of the above test, __mod_memcg_state()
(which is where memcg_rstat_updated() is called) does not show up at all
without this patch, but it shows up with this patch as 1.06% for tcp_rr
and 0.36% for tcp_stream.

(b) Running "stress-ng --vm 0 -t 1m --times --perf". I don't understand
stress-ng very well, so I am not sure that's the best way to test this,
but it spawns 384 workers and spits a lot of metrics which looks nice :)
I picked a few that seem relevant to the stats update path. I also
included cache misses, as this patch introduces more atomics that may
bounce between CPU caches:

Metric			Base		Patched		Delta
Cache Misses		3.394 B/sec 	3.433 B/sec	+1.14%
Cache L1D Read		0.148 T/sec	0.154 T/sec	+4.05%
Cache L1D Read Miss	20.430 B/sec	21.820 B/sec	+6.8%
Page Faults Total	4.304 M/sec	4.535 M/sec	+5.4%
Page Faults Minor	4.304 M/sec	4.535 M/sec	+5.4%
Page Faults Major	18.794 /sec	0.000 /sec
Kmalloc			0.153 M/sec	0.152 M/sec	-0.65%
Kfree			0.152 M/sec	0.153 M/sec	+0.65%
MM Page Alloc		4.640 M/sec	4.898 M/sec	+5.56%
MM Page Free		4.639 M/sec	4.897 M/sec	+5.56%
Lock Contention Begin	0.362 M/sec	0.479 M/sec	+32.32%
Lock Contention End	0.362 M/sec	0.479 M/sec	+32.32%
page-cache add		238.057 /sec	0.000 /sec
page-cache del		6.265 /sec	6.267 /sec	-0.03%

This is only using a single run in each case. I am not sure what to
make out of most of these numbers, but they mostly seem in the noise
(some better, some worse). The lock contention numbers are interesting.
I am not sure if higher is better or worse here. No new locks or lock
sections are introduced by this patch either way.

Looking at perf, __mod_memcg_state() shows up as 0.00% with and without
this patch. This is suspicious, but I verified while stress-ng is
running that all the threads are in the right cgroup.

[1]https://github.com/google/neper

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/memcontrol.c | 49 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 16 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ef7ad66a9e4c..c273c65bb642 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -627,6 +627,9 @@ struct memcg_vmstats_percpu {
 	/* Cgroup1: threshold notifications & softlimit tree updates */
 	unsigned long		nr_page_events;
 	unsigned long		targets[MEM_CGROUP_NTARGETS];
+
+	/* Stats updates since the last flush */
+	unsigned int		stats_updates;
 };
 
 struct memcg_vmstats {
@@ -641,6 +644,9 @@ struct memcg_vmstats {
 	/* Pending child counts during tree propagation */
 	long			state_pending[MEMCG_NR_STAT];
 	unsigned long		events_pending[NR_MEMCG_EVENTS];
+
+	/* Stats updates since the last flush */
+	atomic64_t		stats_updates;
 };
 
 /*
@@ -660,9 +666,7 @@ struct memcg_vmstats {
  */
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static DEFINE_PER_CPU(unsigned int, stats_updates);
 static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
-static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
 static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
@@ -689,26 +693,37 @@ static void memcg_stats_unlock(void)
 	preempt_enable_nested();
 }
 
+
+static bool memcg_should_flush_stats(struct mem_cgroup *memcg)
+{
+	return atomic64_read(&memcg->vmstats->stats_updates) >
+		MEMCG_CHARGE_BATCH * num_online_cpus();
+}
+
 static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 {
+	int cpu = smp_processor_id();
 	unsigned int x;
 
 	if (!val)
 		return;
 
-	cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
+	cgroup_rstat_updated(memcg->css.cgroup, cpu);
+
+	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+		x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
+					  abs(val));
+
+		if (x < MEMCG_CHARGE_BATCH)
+			continue;
 
-	x = __this_cpu_add_return(stats_updates, abs(val));
-	if (x > MEMCG_CHARGE_BATCH) {
 		/*
-		 * If stats_flush_threshold exceeds the threshold
-		 * (>num_online_cpus()), cgroup stats update will be triggered
-		 * in __mem_cgroup_flush_stats(). Increasing this var further
-		 * is redundant and simply adds overhead in atomic update.
+		 * If @memcg is already flush-able, increasing stats_updates is
+		 * redundant. Avoid the overhead of the atomic update.
 		 */
-		if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
-			atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
-		__this_cpu_write(stats_updates, 0);
+		if (!memcg_should_flush_stats(memcg))
+			atomic64_add(x, &memcg->vmstats->stats_updates);
+		__this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
 	}
 }
 
@@ -727,13 +742,12 @@ static void do_flush_stats(void)
 
 	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
 
-	atomic_set(&stats_flush_threshold, 0);
 	atomic_set(&stats_flush_ongoing, 0);
 }
 
 void mem_cgroup_flush_stats(void)
 {
-	if (atomic_read(&stats_flush_threshold) > num_online_cpus())
+	if (memcg_should_flush_stats(root_mem_cgroup))
 		do_flush_stats();
 }
 
@@ -747,8 +761,8 @@ void mem_cgroup_flush_stats_ratelimited(void)
 static void flush_memcg_stats_dwork(struct work_struct *w)
 {
 	/*
-	 * Always flush here so that flushing in latency-sensitive paths is
-	 * as cheap as possible.
+	 * Deliberately ignore memcg_should_flush_stats() here so that flushing
+	 * in latency-sensitive paths is as cheap as possible.
 	 */
 	do_flush_stats();
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
@@ -5622,6 +5636,9 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 			}
 		}
 	}
+	/* We are in a per-cpu loop here, only do the atomic write once */
+	if (atomic64_read(&memcg->vmstats->stats_updates))
+		atomic64_set(&memcg->vmstats->stats_updates, 0);
 }
 
 #ifdef CONFIG_MMU
-- 
2.42.0.459.ge4e396fd5e-goog



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent()
  2023-09-21  8:10 [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
                   ` (2 preceding siblings ...)
  2023-09-21  8:10 ` [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg Yosry Ahmed
@ 2023-09-21  8:10 ` Yosry Ahmed
  2023-09-21 11:05   ` kernel test robot
  2023-09-21 23:29   ` kernel test robot
  2023-09-21  8:10 ` [PATCH 5/5] mm: memcg: restore subtree stats flushing Yosry Ahmed
  2023-10-02 21:46 ` [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
  5 siblings, 2 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21  8:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel, Yosry Ahmed

The workingset code flushes the stats in workingset_refault() to get
accurate stats of the eviction memcg. In preparation for more scoped
flushing and for passing the eviction memcg to the flush call, move the call
to workingset_test_recent() where we have a pointer to the eviction
memcg.

The flush call is sleepable, and cannot be made in an rcu read section.
Hence, minimize the rcu read section by also moving it into
workingset_test_recent(). Furthermore, instead of holding the rcu read
lock throughout workingset_test_recent(), only hold it briefly to get a
ref on the eviction memcg. This allows us to make the flush call after
we get the eviction memcg.

As for workingset_refault(), nothing else there appears to be protected
by rcu. The memcg of the faulted folio (which is not necessarily the
same as the eviction memcg) is protected by the folio lock, which is
held from all callsites. Add a VM_BUG_ON() to make sure this doesn't
change from under us.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/workingset.c | 31 ++++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index b192e44a0e7c..79b338996088 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -425,8 +425,15 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 	struct pglist_data *pgdat;
 	unsigned long eviction;
 
-	if (lru_gen_enabled())
-		return lru_gen_test_recent(shadow, file, &eviction_lruvec, &eviction, workingset);
+	rcu_read_lock();
+
+	if (lru_gen_enabled()) {
+		bool recent = lru_gen_test_recent(shadow, file,
+						  &eviction_lruvec, &eviction,
+						  workingset);
+		rcu_read_unlock();
+		return recent;
+	}
 
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
 	eviction <<= bucket_order;
@@ -451,6 +458,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 	if (!mem_cgroup_disabled() && !eviction_memcg)
 		return false;
 
+	css_get(&eviction_memcg->css);
+	rcu_read_unlock();
+
+	/* Flush stats (and potentially sleep) outside the RCU read section */
+	mem_cgroup_flush_stats_ratelimited();
+
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
 	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
 
@@ -493,6 +506,7 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 		}
 	}
 
+	mem_cgroup_put(eviction_memcg);
 	return refault_distance <= workingset_size;
 }
 
@@ -519,19 +533,16 @@ void workingset_refault(struct folio *folio, void *shadow)
 		return;
 	}
 
-	/* Flush stats (and potentially sleep) before holding RCU read lock */
-	mem_cgroup_flush_stats_ratelimited();
-
-	rcu_read_lock();
-
 	/*
 	 * The activation decision for this folio is made at the level
 	 * where the eviction occurred, as that is where the LRU order
 	 * during folio reclaim is being determined.
 	 *
 	 * However, the cgroup that will own the folio is the one that
-	 * is actually experiencing the refault event.
+	 * is actually experiencing the refault event. Make sure the folio is
+	 * locked to guarantee folio_memcg() stability throughout.
 	 */
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	nr = folio_nr_pages(folio);
 	memcg = folio_memcg(folio);
 	pgdat = folio_pgdat(folio);
@@ -540,7 +551,7 @@ void workingset_refault(struct folio *folio, void *shadow)
 	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
 
 	if (!workingset_test_recent(shadow, file, &workingset))
-		goto out;
+		return;
 
 	folio_set_active(folio);
 	workingset_age_nonresident(lruvec, nr);
@@ -556,8 +567,6 @@ void workingset_refault(struct folio *folio, void *shadow)
 		lru_note_cost_refault(folio);
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
 	}
-out:
-	rcu_read_unlock();
 }
 
 /**
-- 
2.42.0.459.ge4e396fd5e-goog



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 5/5] mm: memcg: restore subtree stats flushing
  2023-09-21  8:10 [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
                   ` (3 preceding siblings ...)
  2023-09-21  8:10 ` [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent() Yosry Ahmed
@ 2023-09-21  8:10 ` Yosry Ahmed
  2023-10-02 21:46 ` [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
  5 siblings, 0 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21  8:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel, Yosry Ahmed

Stats flushing for memcg currently follows the following rules:
- Always flush the entire memcg hierarchy (i.e. flush the root).
- Only one flusher is allowed at a time. If someone else tries to flush
  concurrently, they skip and return immediately.
- A periodic flusher flushes all the stats every 2 seconds.

The reason this approach is followed is because all flushes are
serialized by a global rstat spinlock. On the memcg side, flushing is
invoked from userspace reads as well as in-kernel flushers (e.g.
reclaim, refault, etc). This approach aims to avoid serializing all
flushers on the global lock, which can cause a significant performance
hit under high concurrency.

This approach has the following problems:
- Occasionally a userspace read of the stats of a non-root cgroup will
  be too expensive as it has to flush the entire hierarchy [1].
- Sometimes stats accuracy is compromised if there is an ongoing
  flush, and we skip and return before the subtree of interest is
  actually flushed, yielding stale stats (by up to 2s due to periodic
  flushing). This is more visible when reading stats from userspace,
  but can also affect in-kernel flushers.

The latter problem is particularly a concern when userspace reads stats
after an event occurs, but gets stats from before the event. Examples:
- When memory usage / pressure spikes, a userspace OOM handler may look
  at the stats of different memcgs to select a victim based on various
  heuristics (e.g. how much private memory will be freed by killing
  this). Reading stale stats from before the usage spike in this case
  may cause a wrongful OOM kill.
- A proactive reclaimer may read the stats after writing to
  memory.reclaim to measure the success of the reclaim operation. Stale
  stats from before reclaim may give a false negative.
- Reading the stats of a parent and a child memcg may be inconsistent
  (child larger than parent), if the flush doesn't happen when the
  parent is read, but happens when the child is read.

As for in-kernel flushers, they will occasionally get stale stats. No
regressions are currently known from this, but if there are regressions,
they would be very difficult to debug and link to the source of the
problem.

This patch aims to fix these problems by restoring subtree flushing,
and removing the unified/coalesced flushing logic that skips flushing if
there is an ongoing flush. On its own, this change would introduce a
significant regression with a global stats flushing threshold. With the
per-memcg thresholds introduced earlier in the series, it performs
really well: the thresholds protect the underlying lock from
unnecessary contention.

Add a mutex to protect the underlying rstat lock from excessive memcg
flushing. The thresholds are re-checked after the mutex is grabbed to
make sure that a concurrent flush did not already get the subtree we are
trying to flush. A call to cgroup_rstat_flush() is not cheap, even if
there are no pending updates.

This patch was tested in two ways to ensure the latency of flushing is
up to par, on a machine with 384 cpus:
- A synthetic test with 5000 concurrent workers in 500 cgroups doing
  allocations and reclaim, as well as 1000 readers for memory.stat
  (variation of [2]). No regressions were noticed in the total runtime.
  Note that significant regressions in this test are observed with
  global stats thresholds, but not with per-memcg thresholds.

- A synthetic stress test for concurrently reading memcg stats while
  memory allocation/freeing workers are running in the background,
  provided by Wei Xu [3]. With 250k threads reading the stats every
  100ms in 50k cgroups, 99.9% of reads take <= 50us. Less than 0.01%
  of reads take more than 1ms, and no reads take more than 100ms.

[1] https://lore.kernel.org/lkml/CABWYdi0c6__rh-K7dcM_pkf9BJdTRtAU08M43KO9ME4-dsgfoQ@mail.gmail.com/
[2] https://lore.kernel.org/lkml/CAJD7tka13M-zVZTyQJYL1iUAYvuQ1fcHbCjcOBZcz6POYTV-4g@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CAAPL-u9D2b=iF5Lf_cRnKxUfkiEe0AMDTu6yhrUAzX0b6a6rDg@mail.gmail.com/

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 include/linux/memcontrol.h |  8 ++---
 mm/memcontrol.c            | 73 +++++++++++++++++++++++---------------
 mm/vmscan.c                |  2 +-
 mm/workingset.c            | 10 ++++--
 4 files changed, 56 insertions(+), 37 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 45d0c10e86cc..1b61a2707307 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1030,8 +1030,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return x;
 }
 
-void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_ratelimited(void);
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg);
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			      int val);
@@ -1520,11 +1520,11 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 	return node_page_state(lruvec_pgdat(lruvec), idx);
 }
 
-static inline void mem_cgroup_flush_stats(void)
+static inline void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
 {
 }
 
-static inline void mem_cgroup_flush_stats_ratelimited(void)
+static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 {
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c273c65bb642..99cfba81684f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -666,7 +666,6 @@ struct memcg_vmstats {
  */
 static void flush_memcg_stats_dwork(struct work_struct *w);
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
 static u64 flush_last_time;
 
 #define FLUSH_TIME (2UL*HZ)
@@ -727,35 +726,45 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	}
 }
 
-static void do_flush_stats(void)
+static void do_flush_stats(struct mem_cgroup *memcg)
 {
-	/*
-	 * We always flush the entire tree, so concurrent flushers can just
-	 * skip. This avoids a thundering herd problem on the rstat global lock
-	 * from memcg flushers (e.g. reclaim, refault, etc).
-	 */
-	if (atomic_read(&stats_flush_ongoing) ||
-	    atomic_xchg(&stats_flush_ongoing, 1))
-		return;
-
-	WRITE_ONCE(flush_last_time, jiffies_64);
-
-	cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+	if (mem_cgroup_is_root(memcg))
+		WRITE_ONCE(flush_last_time, jiffies_64);
 
-	atomic_set(&stats_flush_ongoing, 0);
+	cgroup_rstat_flush(memcg->css.cgroup);
 }
 
-void mem_cgroup_flush_stats(void)
+/*
+ * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree
+ * @memcg: root of the subtree to flush
+ *
+ * Flushing is serialized by the underlying global rstat lock. There is also a
+ * minimum amount of work to be done even if there are no stat updates to flush.
+ * Hence, we only flush the stats if the updates delta exceeds a threshold. This
+ * avoids unnecessary work and contention on the underlying lock.
+ */
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
 {
-	if (memcg_should_flush_stats(root_mem_cgroup))
-		do_flush_stats();
+	static DEFINE_MUTEX(memcg_stats_flush_mutex);
+
+	if (!memcg)
+		memcg = root_mem_cgroup;
+
+	if (!memcg_should_flush_stats(memcg))
+		return;
+
+	mutex_lock(&memcg_stats_flush_mutex);
+	/* An overlapping flush may have occurred, check again after locking */
+	if (memcg_should_flush_stats(memcg))
+		do_flush_stats(memcg);
+	mutex_unlock(&memcg_stats_flush_mutex);
 }
 
-void mem_cgroup_flush_stats_ratelimited(void)
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 {
 	/* Only flush if the periodic flusher is one full cycle late */
 	if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
-		mem_cgroup_flush_stats();
+		mem_cgroup_flush_stats(memcg);
 }
 
 static void flush_memcg_stats_dwork(struct work_struct *w)
@@ -764,7 +773,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
 	 * Deliberately ignore memcg_should_flush_stats() here so that flushing
 	 * in latency-sensitive paths is as cheap as possible.
 	 */
-	do_flush_stats();
+	do_flush_stats(root_mem_cgroup);
 	queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
 
@@ -1593,7 +1602,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 	 *
 	 * Current memory state:
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;
@@ -4035,7 +4044,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
 	int nid;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
@@ -4116,7 +4125,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
 
 	BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
 		unsigned long nr;
@@ -4613,7 +4622,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
 	struct mem_cgroup *parent;
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
 	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
@@ -6640,7 +6649,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 	int i;
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
 
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);
 
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		int nid;
@@ -7806,7 +7815,11 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 			break;
 		}
 
-		cgroup_rstat_flush(memcg->css.cgroup);
+		/*
+		 * mem_cgroup_flush_stats() ignores small changes. Use
+		 * do_flush_stats() directly to get accurate stats for charging.
+		 */
+		do_flush_stats(memcg);
 		pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
 		if (pages < max)
 			continue;
@@ -7871,8 +7884,10 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
 			      struct cftype *cft)
 {
-	cgroup_rstat_flush(css->cgroup);
-	return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B);
+	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+	mem_cgroup_flush_stats(memcg);
+	return memcg_page_state(memcg, MEMCG_ZSWAP_B);
 }
 
 static int zswap_max_show(struct seq_file *m, void *v)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a4e44f1c97c1..60bead17b1f7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2246,7 +2246,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 	 * Flush the memory cgroup stats, so that we read accurate per-memcg
 	 * lruvec stats for heuristics.
 	 */
-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(sc->target_mem_cgroup);
 
 	/*
 	 * Determine the scan balance between anon and file LRUs.
diff --git a/mm/workingset.c b/mm/workingset.c
index 79b338996088..141defbe3da2 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -461,8 +461,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 	css_get(&eviction_memcg->css);
 	rcu_read_unlock();
 
-	/* Flush stats (and potentially sleep) outside the RCU read section */
-	mem_cgroup_flush_stats_ratelimited();
+	/*
+	 * Flush stats (and potentially sleep) outside the RCU read section.
+	 * XXX: With per-memcg flushing and thresholding, is ratelimiting
+	 * still needed here?
+	 */
+	mem_cgroup_flush_stats_ratelimited(eviction_memcg);
 
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
 	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
@@ -673,7 +677,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 		struct lruvec *lruvec;
 		int i;
 
-		mem_cgroup_flush_stats();
+		mem_cgroup_flush_stats(sc->memcg);
 		lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
 		for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
 			pages += lruvec_page_state_local(lruvec,
-- 
2.42.0.459.ge4e396fd5e-goog



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg
  2023-09-21  8:10 ` [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg Yosry Ahmed
@ 2023-09-21  8:13   ` Yosry Ahmed
  2023-09-29 19:57   ` Yosry Ahmed
  1 sibling, 0 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21  8:13 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel

On Thu, Sep 21, 2023 at 1:11 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> A global counter for the magnitude of memcg stats update is maintained
> on the memcg side to avoid invoking rstat flushes when the pending
> updates are not significant. This avoids unnecessary flushes, which are
> not very cheap even if there isn't a lot of stats to flush. It also
> avoids unnecessary lock contention on the underlying global rstat lock.
>
> Make this threshold per-memcg. The same scheme is used where percpu (now
> also per-memcg) counters are incremented in the update path, and only
> propagated to per-memcg atomics when they exceed a certain threshold.
>
> This provides two benefits:
> (a) On large machines with a lot of memcgs, the global threshold can be
> reached relatively fast, so guarding the underlying lock becomes less
> effective. Making the threshold per-memcg avoids this.
>
> (b) Having a global threshold makes it hard to do subtree flushes, as we
> cannot reset the global counter except for a full flush. Per-memcg
> counters removes this as a blocker from doing subtree flushes, which
> helps avoid unnecessary work when the stats of a small subtree are
> needed.
>
> Nothing is free, of course. This comes at a cost:
> (a) A new per-cpu counter per memcg, consuming NR_CPUS * NR_MEMCGS * 4
> bytes.
>
> (b) More work on the update side, although in the common case it will
> only be percpu counter updates. The amount of work scales with the
> number of ancestors (i.e. tree depth). This is not a new concept; adding
> a cgroup to the rstat tree involves a parent loop, and so does charging.
> Testing in a later patch shows this doesn't introduce significant
> regressions.
>
> (c) The error margin in the stats for the system as a whole increases
> from NR_CPUS * MEMCG_CHARGE_BATCH to NR_CPUS * MEMCG_CHARGE_BATCH *
> NR_MEMCGS. This is probably fine because we have a similar per-memcg
> error in charges coming from percpu stocks, and we have a periodic
> flusher that makes sure we always flush all the stats every 2s anyway.
>
> This patch was tested to make sure no significant regressions are
> introduced on the update path as follows. In a cgroup that is 4 levels
> deep (/sys/fs/cgroup/a/b/c/d), the following benchmarks were run:
>
> (a) neper [1] with 1000 flows and 100 threads (single machine). The
> values in the table are the average of server and client throughputs in
> mbps after 30 iterations, each running for 30s:
>
>                                 tcp_rr          tcp_stream
> Base                            9504218.56      357366.84
> Patched                         9656205.68      356978.39
> Delta                           +1.6%           -0.1%
> Standard Deviation              0.95%           1.03%
>
> An increase in the performance of tcp_rr doesn't really make sense, but
> it's probably in the noise. The same tests were run with 1 flow and 1
> thread, but the throughput was too noisy to draw any conclusions (the
> averages did not show regressions nonetheless).
>
> Looking at perf for one iteration of the above test, __mod_memcg_state()
> (which is where memcg_rstat_updated() is called) does not show up at all
> without this patch, but it shows up with this patch as 1.06% for tcp_rr
> and 0.36% for tcp_stream.
>
> (b) Running "stress-ng --vm 0 -t 1m --times --perf". I don't understand
> stress-ng very well, so I am not sure that's the best way to test this,
> but it spawns 384 workers and spits a lot of metrics which looks nice :)
> I picked a few that seem relevant to the stats update path. I also
> included cache misses, as this patch introduces more atomics that may
> bounce between CPU caches:
>
> Metric                  Base            Patched         Delta
> Cache Misses            3.394 B/sec     3.433 B/sec     +1.14%
> Cache L1D Read          0.148 T/sec     0.154 T/sec     +4.05%
> Cache L1D Read Miss     20.430 B/sec    21.820 B/sec    +6.8%
> Page Faults Total       4.304 M/sec     4.535 M/sec     +5.4%
> Page Faults Minor       4.304 M/sec     4.535 M/sec     +5.4%
> Page Faults Major       18.794 /sec     0.000 /sec
> Kmalloc                 0.153 M/sec     0.152 M/sec     -0.65%
> Kfree                   0.152 M/sec     0.153 M/sec     +0.65%
> MM Page Alloc           4.640 M/sec     4.898 M/sec     +5.56%
> MM Page Free            4.639 M/sec     4.897 M/sec     +5.56%
> Lock Contention Begin   0.362 M/sec     0.479 M/sec     +32.32%
> Lock Contention End     0.362 M/sec     0.479 M/sec     +32.32%
> page-cache add          238.057 /sec    0.000 /sec
> page-cache del          6.265 /sec      6.267 /sec      -0.03%
>
> This is only using a single run in each case. I am not sure what to
> make out of most of these numbers, but they mostly seem in the noise
> (some better, some worse). The lock contention numbers are interesting.
> I am not sure if higher is better or worse here. No new locks or lock
> sections are introduced by this patch either way.
>
> Looking at perf, __mod_memcg_state() shows up as 0.00% with and without
> this patch. This is suspicious, but I verified while stress-ng is
> running that all the threads are in the right cgroup.
>
> [1]https://github.com/google/neper
>
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>


Johannes, I think this might be what you have suggested in our
previous discussion, but I am not sure this is what you meant for the
update path, so I did not add a Suggested-by. Please let me know if
this is what you meant and I can amend the tag as such.

>
> ---
>  mm/memcontrol.c | 49 +++++++++++++++++++++++++++++++++----------------
>  1 file changed, 33 insertions(+), 16 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ef7ad66a9e4c..c273c65bb642 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -627,6 +627,9 @@ struct memcg_vmstats_percpu {
>         /* Cgroup1: threshold notifications & softlimit tree updates */
>         unsigned long           nr_page_events;
>         unsigned long           targets[MEM_CGROUP_NTARGETS];
> +
> +       /* Stats updates since the last flush */
> +       unsigned int            stats_updates;
>  };
>
>  struct memcg_vmstats {
> @@ -641,6 +644,9 @@ struct memcg_vmstats {
>         /* Pending child counts during tree propagation */
>         long                    state_pending[MEMCG_NR_STAT];
>         unsigned long           events_pending[NR_MEMCG_EVENTS];
> +
> +       /* Stats updates since the last flush */
> +       atomic64_t              stats_updates;
>  };
>
>  /*
> @@ -660,9 +666,7 @@ struct memcg_vmstats {
>   */
>  static void flush_memcg_stats_dwork(struct work_struct *w);
>  static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
> -static DEFINE_PER_CPU(unsigned int, stats_updates);
>  static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
> -static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
>  static u64 flush_last_time;
>
>  #define FLUSH_TIME (2UL*HZ)
> @@ -689,26 +693,37 @@ static void memcg_stats_unlock(void)
>         preempt_enable_nested();
>  }
>
> +
> +static bool memcg_should_flush_stats(struct mem_cgroup *memcg)
> +{
> +       return atomic64_read(&memcg->vmstats->stats_updates) >
> +               MEMCG_CHARGE_BATCH * num_online_cpus();
> +}
> +
>  static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
>  {
> +       int cpu = smp_processor_id();
>         unsigned int x;
>
>         if (!val)
>                 return;
>
> -       cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
> +       cgroup_rstat_updated(memcg->css.cgroup, cpu);
> +
> +       for (; memcg; memcg = parent_mem_cgroup(memcg)) {
> +               x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
> +                                         abs(val));
> +
> +               if (x < MEMCG_CHARGE_BATCH)
> +                       continue;
>
> -       x = __this_cpu_add_return(stats_updates, abs(val));
> -       if (x > MEMCG_CHARGE_BATCH) {
>                 /*
> -                * If stats_flush_threshold exceeds the threshold
> -                * (>num_online_cpus()), cgroup stats update will be triggered
> -                * in __mem_cgroup_flush_stats(). Increasing this var further
> -                * is redundant and simply adds overhead in atomic update.
> +                * If @memcg is already flush-able, increasing stats_updates is
> +                * redundant. Avoid the overhead of the atomic update.
>                  */
> -               if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
> -                       atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
> -               __this_cpu_write(stats_updates, 0);
> +               if (!memcg_should_flush_stats(memcg))
> +                       atomic64_add(x, &memcg->vmstats->stats_updates);
> +               __this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
>         }
>  }
>
> @@ -727,13 +742,12 @@ static void do_flush_stats(void)
>
>         cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
>
> -       atomic_set(&stats_flush_threshold, 0);
>         atomic_set(&stats_flush_ongoing, 0);
>  }
>
>  void mem_cgroup_flush_stats(void)
>  {
> -       if (atomic_read(&stats_flush_threshold) > num_online_cpus())
> +       if (memcg_should_flush_stats(root_mem_cgroup))
>                 do_flush_stats();
>  }
>
> @@ -747,8 +761,8 @@ void mem_cgroup_flush_stats_ratelimited(void)
>  static void flush_memcg_stats_dwork(struct work_struct *w)
>  {
>         /*
> -        * Always flush here so that flushing in latency-sensitive paths is
> -        * as cheap as possible.
> +        * Deliberately ignore memcg_should_flush_stats() here so that flushing
> +        * in latency-sensitive paths is as cheap as possible.
>          */
>         do_flush_stats();
>         queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
> @@ -5622,6 +5636,9 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
>                         }
>                 }
>         }
> +       /* We are in a per-cpu loop here, only do the atomic write once */
> +       if (atomic64_read(&memcg->vmstats->stats_updates))
> +               atomic64_set(&memcg->vmstats->stats_updates, 0);
>  }
>
>  #ifdef CONFIG_MMU
> --
> 2.42.0.459.ge4e396fd5e-goog
>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent()
  2023-09-21  8:10 ` [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent() Yosry Ahmed
@ 2023-09-21 11:05   ` kernel test robot
  2023-09-21 21:00     ` Yosry Ahmed
  2023-09-21 23:29   ` kernel test robot
  1 sibling, 1 reply; 12+ messages in thread
From: kernel test robot @ 2023-09-21 11:05 UTC (permalink / raw)
  To: Yosry Ahmed, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, Johannes Weiner,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song,
	Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, cgroups,
	linux-kernel, Yosry Ahmed

Hi Yosry,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20230921-161246
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230921081057.3440885-5-yosryahmed%40google.com
patch subject: [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent()
config: powerpc-allnoconfig (https://download.01.org/0day-ci/archive/20230921/202309211829.Efuqg8NE-lkp@intel.com/config)
compiler: powerpc-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230921/202309211829.Efuqg8NE-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202309211829.Efuqg8NE-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/workingset.c: In function 'workingset_test_recent':
>> mm/workingset.c:461:32: error: invalid use of undefined type 'struct mem_cgroup'
     461 |         css_get(&eviction_memcg->css);
         |                                ^~


vim +461 mm/workingset.c

   405	
   406	/**
   407	 * workingset_test_recent - tests if the shadow entry is for a folio that was
   408	 * recently evicted. Also fills in @workingset with the value unpacked from
   409	 * shadow.
   410	 * @shadow: the shadow entry to be tested.
   411	 * @file: whether the corresponding folio is from the file lru.
   412	 * @workingset: where the workingset value unpacked from shadow should
   413	 * be stored.
   414	 *
   415	 * Return: true if the shadow is for a recently evicted folio; false otherwise.
   416	 */
   417	bool workingset_test_recent(void *shadow, bool file, bool *workingset)
   418	{
   419		struct mem_cgroup *eviction_memcg;
   420		struct lruvec *eviction_lruvec;
   421		unsigned long refault_distance;
   422		unsigned long workingset_size;
   423		unsigned long refault;
   424		int memcgid;
   425		struct pglist_data *pgdat;
   426		unsigned long eviction;
   427	
   428		rcu_read_lock();
   429	
   430		if (lru_gen_enabled()) {
   431			bool recent = lru_gen_test_recent(shadow, file,
   432							  &eviction_lruvec, &eviction,
   433							  workingset);
   434			rcu_read_unlock();
   435			return recent;
   436		}
   437	
   438		unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
   439		eviction <<= bucket_order;
   440	
   441		/*
   442		 * Look up the memcg associated with the stored ID. It might
   443		 * have been deleted since the folio's eviction.
   444		 *
   445		 * Note that in rare events the ID could have been recycled
   446		 * for a new cgroup that refaults a shared folio. This is
   447		 * impossible to tell from the available data. However, this
   448		 * should be a rare and limited disturbance, and activations
   449		 * are always speculative anyway. Ultimately, it's the aging
   450		 * algorithm's job to shake out the minimum access frequency
   451		 * for the active cache.
   452		 *
   453		 * XXX: On !CONFIG_MEMCG, this will always return NULL; it
   454		 * would be better if the root_mem_cgroup existed in all
   455		 * configurations instead.
   456		 */
   457		eviction_memcg = mem_cgroup_from_id(memcgid);
   458		if (!mem_cgroup_disabled() && !eviction_memcg)
   459			return false;
   460	
 > 461		css_get(&eviction_memcg->css);
   462		rcu_read_unlock();
   463	
   464		/* Flush stats (and potentially sleep) outside the RCU read section */
   465		mem_cgroup_flush_stats_ratelimited();
   466	
   467		eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
   468		refault = atomic_long_read(&eviction_lruvec->nonresident_age);
   469	
   470		/*
   471		 * Calculate the refault distance
   472		 *
   473		 * The unsigned subtraction here gives an accurate distance
   474		 * across nonresident_age overflows in most cases. There is a
   475		 * special case: usually, shadow entries have a short lifetime
   476		 * and are either refaulted or reclaimed along with the inode
   477		 * before they get too old.  But it is not impossible for the
   478		 * nonresident_age to lap a shadow entry in the field, which
   479		 * can then result in a false small refault distance, leading
   480		 * to a false activation should this old entry actually
   481		 * refault again.  However, earlier kernels used to deactivate
   482		 * unconditionally with *every* reclaim invocation for the
   483		 * longest time, so the occasional inappropriate activation
   484		 * leading to pressure on the active list is not a problem.
   485		 */
   486		refault_distance = (refault - eviction) & EVICTION_MASK;
   487	
   488		/*
   489		 * Compare the distance to the existing workingset size. We
   490		 * don't activate pages that couldn't stay resident even if
   491		 * all the memory was available to the workingset. Whether
   492		 * workingset competition needs to consider anon or not depends
   493		 * on having free swap space.
   494		 */
   495		workingset_size = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
   496		if (!file) {
   497			workingset_size += lruvec_page_state(eviction_lruvec,
   498							     NR_INACTIVE_FILE);
   499		}
   500		if (mem_cgroup_get_nr_swap_pages(eviction_memcg) > 0) {
   501			workingset_size += lruvec_page_state(eviction_lruvec,
   502							     NR_ACTIVE_ANON);
   503			if (file) {
   504				workingset_size += lruvec_page_state(eviction_lruvec,
   505							     NR_INACTIVE_ANON);
   506			}
   507		}
   508	
   509		mem_cgroup_put(eviction_memcg);
   510		return refault_distance <= workingset_size;
   511	}
   512	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent()
  2023-09-21 11:05   ` kernel test robot
@ 2023-09-21 21:00     ` Yosry Ahmed
  0 siblings, 0 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-21 21:00 UTC (permalink / raw)
  To: kernel test robot
  Cc: Andrew Morton, oe-kbuild-all, Linux Memory Management List,
	Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, cgroups,
	linux-kernel

On Thu, Sep 21, 2023 at 4:06 AM kernel test robot <lkp@intel.com> wrote:
>
> Hi Yosry,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on akpm-mm/mm-everything]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20230921-161246
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link:    https://lore.kernel.org/r/20230921081057.3440885-5-yosryahmed%40google.com
> patch subject: [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent()
> config: powerpc-allnoconfig (https://download.01.org/0day-ci/archive/20230921/202309211829.Efuqg8NE-lkp@intel.com/config)
> compiler: powerpc-linux-gcc (GCC) 13.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230921/202309211829.Efuqg8NE-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202309211829.Efuqg8NE-lkp@intel.com/
>
> All errors (new ones prefixed by >>):
>
>    mm/workingset.c: In function 'workingset_test_recent':
> >> mm/workingset.c:461:32: error: invalid use of undefined type 'struct mem_cgroup'
>      461 |         css_get(&eviction_memcg->css);
>          |                                ^~
>

Ah yes, I cannot dereference the memcg pointer here. I think we want
mem_cgroup_get_from_id (a getter version of mem_cgroup_from_id) or
mem_cgroup_get (similar to mem_cgroup_put) to have stubs when
!CONFIG_MEMCG. I will do this change in the next version, but I'll
wait for feedback on this version first.
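
Roughly something like this (a sketch only; the exact helper and where
it lives are TBD until the next version):

/* Sketch: mirror mem_cgroup_put(), with a no-op stub for !CONFIG_MEMCG */
#ifdef CONFIG_MEMCG
static inline void mem_cgroup_get(struct mem_cgroup *memcg)
{
	if (memcg)
		css_get(&memcg->css);
}
#else
static inline void mem_cgroup_get(struct mem_cgroup *memcg)
{
}
#endif

workingset_test_recent() would then call mem_cgroup_get(eviction_memcg)
instead of dereferencing eviction_memcg->css directly.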

>
> vim +461 mm/workingset.c
>
>    405
>    406  /**
>    407   * workingset_test_recent - tests if the shadow entry is for a folio that was
>    408   * recently evicted. Also fills in @workingset with the value unpacked from
>    409   * shadow.
>    410   * @shadow: the shadow entry to be tested.
>    411   * @file: whether the corresponding folio is from the file lru.
>    412   * @workingset: where the workingset value unpacked from shadow should
>    413   * be stored.
>    414   *
>    415   * Return: true if the shadow is for a recently evicted folio; false otherwise.
>    416   */
>    417  bool workingset_test_recent(void *shadow, bool file, bool *workingset)
>    418  {
>    419          struct mem_cgroup *eviction_memcg;
>    420          struct lruvec *eviction_lruvec;
>    421          unsigned long refault_distance;
>    422          unsigned long workingset_size;
>    423          unsigned long refault;
>    424          int memcgid;
>    425          struct pglist_data *pgdat;
>    426          unsigned long eviction;
>    427
>    428          rcu_read_lock();
>    429
>    430          if (lru_gen_enabled()) {
>    431                  bool recent = lru_gen_test_recent(shadow, file,
>    432                                                    &eviction_lruvec, &eviction,
>    433                                                    workingset);
>    434                  rcu_read_unlock();
>    435                  return recent;
>    436          }
>    437
>    438          unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
>    439          eviction <<= bucket_order;
>    440
>    441          /*
>    442           * Look up the memcg associated with the stored ID. It might
>    443           * have been deleted since the folio's eviction.
>    444           *
>    445           * Note that in rare events the ID could have been recycled
>    446           * for a new cgroup that refaults a shared folio. This is
>    447           * impossible to tell from the available data. However, this
>    448           * should be a rare and limited disturbance, and activations
>    449           * are always speculative anyway. Ultimately, it's the aging
>    450           * algorithm's job to shake out the minimum access frequency
>    451           * for the active cache.
>    452           *
>    453           * XXX: On !CONFIG_MEMCG, this will always return NULL; it
>    454           * would be better if the root_mem_cgroup existed in all
>    455           * configurations instead.
>    456           */
>    457          eviction_memcg = mem_cgroup_from_id(memcgid);
>    458          if (!mem_cgroup_disabled() && !eviction_memcg)
>    459                  return false;
>    460
>  > 461          css_get(&eviction_memcg->css);
>    462          rcu_read_unlock();
>    463
>    464          /* Flush stats (and potentially sleep) outside the RCU read section */
>    465          mem_cgroup_flush_stats_ratelimited();
>    466
>    467          eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
>    468          refault = atomic_long_read(&eviction_lruvec->nonresident_age);
>    469
>    470          /*
>    471           * Calculate the refault distance
>    472           *
>    473           * The unsigned subtraction here gives an accurate distance
>    474           * across nonresident_age overflows in most cases. There is a
>    475           * special case: usually, shadow entries have a short lifetime
>    476           * and are either refaulted or reclaimed along with the inode
>    477           * before they get too old.  But it is not impossible for the
>    478           * nonresident_age to lap a shadow entry in the field, which
>    479           * can then result in a false small refault distance, leading
>    480           * to a false activation should this old entry actually
>    481           * refault again.  However, earlier kernels used to deactivate
>    482           * unconditionally with *every* reclaim invocation for the
>    483           * longest time, so the occasional inappropriate activation
>    484           * leading to pressure on the active list is not a problem.
>    485           */
>    486          refault_distance = (refault - eviction) & EVICTION_MASK;
>    487
>    488          /*
>    489           * Compare the distance to the existing workingset size. We
>    490           * don't activate pages that couldn't stay resident even if
>    491           * all the memory was available to the workingset. Whether
>    492           * workingset competition needs to consider anon or not depends
>    493           * on having free swap space.
>    494           */
>    495          workingset_size = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
>    496          if (!file) {
>    497                  workingset_size += lruvec_page_state(eviction_lruvec,
>    498                                                       NR_INACTIVE_FILE);
>    499          }
>    500          if (mem_cgroup_get_nr_swap_pages(eviction_memcg) > 0) {
>    501                  workingset_size += lruvec_page_state(eviction_lruvec,
>    502                                                       NR_ACTIVE_ANON);
>    503                  if (file) {
>    504                          workingset_size += lruvec_page_state(eviction_lruvec,
>    505                                                       NR_INACTIVE_ANON);
>    506                  }
>    507          }
>    508
>    509          mem_cgroup_put(eviction_memcg);
>    510          return refault_distance <= workingset_size;
>    511  }
>    512
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent()
  2023-09-21  8:10 ` [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent() Yosry Ahmed
  2023-09-21 11:05   ` kernel test robot
@ 2023-09-21 23:29   ` kernel test robot
  1 sibling, 0 replies; 12+ messages in thread
From: kernel test robot @ 2023-09-21 23:29 UTC (permalink / raw)
  To: Yosry Ahmed, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, Johannes Weiner,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song,
	Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, cgroups,
	linux-kernel, Yosry Ahmed

Hi Yosry,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20230921-161246
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230921081057.3440885-5-yosryahmed%40google.com
patch subject: [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent()
config: um-i386_defconfig (https://download.01.org/0day-ci/archive/20230922/202309220704.NF0GCRP2-lkp@intel.com/config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230922/202309220704.NF0GCRP2-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202309220704.NF0GCRP2-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/workingset.c: In function 'workingset_test_recent':
>> mm/workingset.c:461:25: error: dereferencing pointer to incomplete type 'struct mem_cgroup'
     css_get(&eviction_memcg->css);
                            ^~


vim +461 mm/workingset.c

   405	
   406	/**
   407	 * workingset_test_recent - tests if the shadow entry is for a folio that was
   408	 * recently evicted. Also fills in @workingset with the value unpacked from
   409	 * shadow.
   410	 * @shadow: the shadow entry to be tested.
   411	 * @file: whether the corresponding folio is from the file lru.
   412	 * @workingset: where the workingset value unpacked from shadow should
   413	 * be stored.
   414	 *
   415	 * Return: true if the shadow is for a recently evicted folio; false otherwise.
   416	 */
   417	bool workingset_test_recent(void *shadow, bool file, bool *workingset)
   418	{
   419		struct mem_cgroup *eviction_memcg;
   420		struct lruvec *eviction_lruvec;
   421		unsigned long refault_distance;
   422		unsigned long workingset_size;
   423		unsigned long refault;
   424		int memcgid;
   425		struct pglist_data *pgdat;
   426		unsigned long eviction;
   427	
   428		rcu_read_lock();
   429	
   430		if (lru_gen_enabled()) {
   431			bool recent = lru_gen_test_recent(shadow, file,
   432							  &eviction_lruvec, &eviction,
   433							  workingset);
   434			rcu_read_unlock();
   435			return recent;
   436		}
   437	
   438		unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
   439		eviction <<= bucket_order;
   440	
   441		/*
   442		 * Look up the memcg associated with the stored ID. It might
   443		 * have been deleted since the folio's eviction.
   444		 *
   445		 * Note that in rare events the ID could have been recycled
   446		 * for a new cgroup that refaults a shared folio. This is
   447		 * impossible to tell from the available data. However, this
   448		 * should be a rare and limited disturbance, and activations
   449		 * are always speculative anyway. Ultimately, it's the aging
   450		 * algorithm's job to shake out the minimum access frequency
   451		 * for the active cache.
   452		 *
   453		 * XXX: On !CONFIG_MEMCG, this will always return NULL; it
   454		 * would be better if the root_mem_cgroup existed in all
   455		 * configurations instead.
   456		 */
   457		eviction_memcg = mem_cgroup_from_id(memcgid);
   458		if (!mem_cgroup_disabled() && !eviction_memcg)
   459			return false;
   460	
 > 461		css_get(&eviction_memcg->css);
   462		rcu_read_unlock();
   463	
   464		/* Flush stats (and potentially sleep) outside the RCU read section */
   465		mem_cgroup_flush_stats_ratelimited();
   466	
   467		eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
   468		refault = atomic_long_read(&eviction_lruvec->nonresident_age);
   469	
   470		/*
   471		 * Calculate the refault distance
   472		 *
   473		 * The unsigned subtraction here gives an accurate distance
   474		 * across nonresident_age overflows in most cases. There is a
   475		 * special case: usually, shadow entries have a short lifetime
   476		 * and are either refaulted or reclaimed along with the inode
   477		 * before they get too old.  But it is not impossible for the
   478		 * nonresident_age to lap a shadow entry in the field, which
   479		 * can then result in a false small refault distance, leading
   480		 * to a false activation should this old entry actually
   481		 * refault again.  However, earlier kernels used to deactivate
   482		 * unconditionally with *every* reclaim invocation for the
   483		 * longest time, so the occasional inappropriate activation
   484		 * leading to pressure on the active list is not a problem.
   485		 */
   486		refault_distance = (refault - eviction) & EVICTION_MASK;
   487	
   488		/*
   489		 * Compare the distance to the existing workingset size. We
   490		 * don't activate pages that couldn't stay resident even if
   491		 * all the memory was available to the workingset. Whether
   492		 * workingset competition needs to consider anon or not depends
   493		 * on having free swap space.
   494		 */
   495		workingset_size = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
   496		if (!file) {
   497			workingset_size += lruvec_page_state(eviction_lruvec,
   498							     NR_INACTIVE_FILE);
   499		}
   500		if (mem_cgroup_get_nr_swap_pages(eviction_memcg) > 0) {
   501			workingset_size += lruvec_page_state(eviction_lruvec,
   502							     NR_ACTIVE_ANON);
   503			if (file) {
   504				workingset_size += lruvec_page_state(eviction_lruvec,
   505							     NR_INACTIVE_ANON);
   506			}
   507		}
   508	
   509		mem_cgroup_put(eviction_memcg);
   510		return refault_distance <= workingset_size;
   511	}
   512	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg
  2023-09-21  8:10 ` [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg Yosry Ahmed
  2023-09-21  8:13   ` Yosry Ahmed
@ 2023-09-29 19:57   ` Yosry Ahmed
  1 sibling, 0 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-09-29 19:57 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel

On Thu, Sep 21, 2023 at 1:11 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> A global counter for the magnitude of memcg stats update is maintained
> on the memcg side to avoid invoking rstat flushes when the pending
> updates are not significant. This avoids unnecessary flushes, which are
> not very cheap even if there aren't many stats to flush. It also
> avoids unnecessary lock contention on the underlying global rstat lock.
>
> Make this threshold per-memcg. The same scheme is followed: percpu (now
> also per-memcg) counters are incremented in the update path, and only
> propagated to per-memcg atomics when they exceed a certain threshold.
>
> This provides two benefits:
> (a) On large machines with a lot of memcgs, the global threshold can be
> reached relatively fast, so guarding the underlying lock becomes less
> effective. Making the threshold per-memcg avoids this.
>
> (b) Having a global threshold makes it hard to do subtree flushes, as we
> cannot reset the global counter except for a full flush. Per-memcg
> counters remove this blocker for subtree flushes, which helps avoid
> unnecessary work when the stats of a small subtree are needed.
>
> Nothing is free, of course. This comes at a cost:
> (a) A new per-cpu counter per memcg, consuming NR_CPUS * NR_MEMCGS * 4
> bytes.
>
> (b) More work on the update side, although in the common case it will
> only be percpu counter updates. The amount of work scales with the
> number of ancestors (i.e. tree depth). This is not a new concept;
> adding a cgroup to the rstat tree involves a parent loop, as does
> charging. Testing in a later patch shows this doesn't introduce
> significant regressions.
>
> (c) The error margin in the stats for the system as a whole increases
> from NR_CPUS * MEMCG_CHARGE_BATCH to NR_CPUS * MEMCG_CHARGE_BATCH *
> NR_MEMCGS. This is probably fine because we have a similar per-memcg
> error in charges coming from percpu stocks, and we have a periodic
> flusher that makes sure we always flush all the stats every 2s anyway.
>
> This patch was tested to make sure no significant regressions are
> introduced on the update path as follows. In a cgroup that is 4 levels
> deep (/sys/fs/cgroup/a/b/c/d), the following benchmarks were ran:
>
> (a) neper [1] with 1000 flows and 100 threads (single machine). The
> values in the table are the average of server and client throughputs in
> mbps after 30 iterations, each running for 30s:
>
>                                 tcp_rr          tcp_stream
> Base                            9504218.56      357366.84
> Patched                         9656205.68      356978.39
> Delta                           +1.6%           -0.1%
> Standard Deviation              0.95%           1.03%
>
> An increase in the performance of tcp_rr doesn't really make sense, but
> it's probably in the noise. The same tests were run with 1 flow and 1
> thread, but the throughput was too noisy to draw any conclusions (the
> averages nonetheless did not show regressions).
>
> Looking at perf for one iteration of the above test, __mod_memcg_state()
> (which is where memcg_rstat_updated() is called) does not show up at all
> without this patch, but it shows up with this patch as 1.06% for tcp_rr
> and 0.36% for tcp_stream.
>
> (b) Running "stress-ng --vm 0 -t 1m --times --perf". I don't understand
> stress-ng very well, so I am not sure that's the best way to test this,
> but it spawns 384 workers and spits out a lot of metrics, which looks nice :)
> I picked a few that seem relevant to the stats update path. I also
> included cache misses, as this patch introduces more atomics that may
> bounce between cpu caches:
>
> Metric                  Base            Patched         Delta
> Cache Misses            3.394 B/sec     3.433 B/sec     +1.14%
> Cache L1D Read          0.148 T/sec     0.154 T/sec     +4.05%
> Cache L1D Read Miss     20.430 B/sec    21.820 B/sec    +6.8%
> Page Faults Total       4.304 M/sec     4.535 M/sec     +5.4%
> Page Faults Minor       4.304 M/sec     4.535 M/sec     +5.4%
> Page Faults Major       18.794 /sec     0.000 /sec
> Kmalloc                 0.153 M/sec     0.152 M/sec     -0.65%
> Kfree                   0.152 M/sec     0.153 M/sec     +0.65%
> MM Page Alloc           4.640 M/sec     4.898 M/sec     +5.56%
> MM Page Free            4.639 M/sec     4.897 M/sec     +5.56%
> Lock Contention Begin   0.362 M/sec     0.479 M/sec     +32.32%
> Lock Contention End     0.362 M/sec     0.479 M/sec     +32.32%
> page-cache add          238.057 /sec    0.000 /sec
> page-cache del          6.265 /sec      6.267 /sec      -0.03%
>
> This is only using a single run in each case. I am not sure what to
> make out of most of these numbers, but they mostly seem in the noise
> (some better, some worse). The lock contention numbers are interesting.
> I am not sure if higher is better or worse here. No new locks or lock
> sections are introduced by this patch either way.
>
> Looking at perf, __mod_memcg_state() shows up as 0.00% with and without
> this patch. This is suspicious, but I verified while stress-ng is
> running that all the threads are in the right cgroup.

Here is some additional testing. will-it-scale (specifically
per_process_ops in the page_fault3 test) previously detected a 25.9%
regression for a change in the stats update path, so I thought it would
be a good idea to run the numbers with this change. I ran all page_fault
tests in will-it-scale in a cgroup nested at the 4th level
($ROOT/a/b/c/d), as the work here scales with nesting (see the sketch
below the table), and 4 levels under root seemed like a worst-ish case
scenario. These are the numbers from 30 runs (+ is good):

             LABEL            |     MIN     |     MAX     |    MEAN     |   MEDIAN    |   STDDEV   |
------------------------------+-------------+-------------+-------------+-------------+------------+
  page_fault1_per_process_ops |             |             |             |             |            |
  (A) base                    | 250644.000  | 298351.000  | 265207.738  | 262941.000  | 12112.379  |
  (B) patched                 | 235885.000  | 276680.000  | 249249.191  | 248781.000  | 8767.457   |
                              | -5.89%      | -7.26%      | -6.02%      | -5.39%      |            |
  page_fault1_per_thread_ops  |             |             |             |             |            |
  (A) base                    | 227214.000  | 271539.000  | 241618.484  | 240209.000  | 10162.207  |
  (B) patched                 | 220156.000  | 248552.000  | 229820.671  | 229108.000  | 7506.582   |
                              | -3.11%      | -8.47%      | -4.88%      | -4.62%      |            |
  page_fault1_scalability     |             |             |             |             |            |
  (A) base                    | 0.031175    | 0.038742    | 0.03545     | 0.035705    | 0.0015837  |
  (B) patched                 | 0.026511    | 0.032529    | 0.029952    | 0.029957    | 0.0013551  |
                              | -9.65%      | -9.21%      | -9.29%      | -9.35%      |            |
  page_fault2_per_process_ops |             |             |             |             |            |
  (A) base                    | 197948.000  | 209020.000  | 203916.148  | 203496.000  | 2908.331   |
  (B) patched                 | 183825.000  | 192870.000  | 186975.419  | 187023.000  | 1991.100   |
                              | -6.67%      | -6.80%      | -6.85%      | -6.90%      |            |
  page_fault2_per_thread_ops  |             |             |             |             |            |
  (A) base                    | 166563.000  | 174843.000  | 170604.972  | 170532.000  | 1624.834   |
  (B) patched                 | 161051.000  | 165887.000  | 163100.260  | 163263.000  | 1517.967   |
                              | -3.31%      | -5.12%      | -4.40%      | -4.26%      |            |
  page_fault2_scalability     |             |             |             |             |            |
  (A) base                    | 0.052992    | 0.056427    | 0.054603    | 0.054693    | 0.00080196 |
  (B) patched                 | 0.042582    | 0.046753    | 0.044882    | 0.044957    | 0.0011766  |
                              | +2.74%      | -0.44%      | -0.05%      | +0.33%      |            |
  page_fault3_per_process_ops |             |             |             |             |            |
  (A) base                    | 1282706.000 | 1323143.000 | 1299821.099 | 1297918.000 | 9882.872   |
  (B) patched                 | 1232512.000 | 1269816.000 | 1248700.839 | 1247168.000 | 8454.891   |
                              | -3.91%      | -4.03%      | -3.93%      | -3.91%      |            |
  page_fault3_per_thread_ops  |             |             |             |             |            |
  (A) base                    | 383583.000  | 390303.000  | 387216.963  | 387115.000  | 1605.760   |
  (B) patched                 | 363791.000  | 373960.000  | 368538.213  | 368826.000  | 1852.594   |
                              | -5.16%      | -4.19%      | -4.82%      | -4.72%      |            |
  page_fault3_scalability     |             |             |             |             |            |
  (A) base                    | 0.58206     | 0.62882     | 0.59909     | 0.59367     | 0.01256    |
  (B) patched                 | 0.57967     | 0.62165     | 0.59995     | 0.59769     | 0.010088   |
                              | -0.41%      | -1.14%      | +0.14%      | +0.68%      |            |

Most numbers are within the normal variation range of this benchmark's
results or within the standard deviation (especially given that the fix
for [1] assumes 3% is noise -- and there were no further practical
complaints). These numbers are also from 4-level nesting, which is
deeper than most standard setups in my experience.


[1]https://lore.kernel.org/all/20190520063534.GB19312@shao2-debian/
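
For context on the "scales with nesting" point above, here is a
simplified sketch of the update-path loop this patch adds (see
memcg_rstat_updated() in patch 3 below for the real code). In the
$ROOT/a/b/c/d setup, each stat update runs this loop five times
(d, c, b, a, root), doing one percpu add per level and only rarely
falling back to the per-memcg atomic:

	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
		/* common case: one percpu add per ancestor level */
		x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
					  abs(val));
		if (x < MEMCG_CHARGE_BATCH)
			continue;

		/* uncommon case: fold the batch into the per-memcg atomic */
		if (!memcg_should_flush_stats(memcg))
			atomic64_add(x, &memcg->vmstats->stats_updates);
		__this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
	}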

>
> [1]https://github.com/google/neper
>
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> ---
>  mm/memcontrol.c | 49 +++++++++++++++++++++++++++++++++----------------
>  1 file changed, 33 insertions(+), 16 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ef7ad66a9e4c..c273c65bb642 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -627,6 +627,9 @@ struct memcg_vmstats_percpu {
>         /* Cgroup1: threshold notifications & softlimit tree updates */
>         unsigned long           nr_page_events;
>         unsigned long           targets[MEM_CGROUP_NTARGETS];
> +
> +       /* Stats updates since the last flush */
> +       unsigned int            stats_updates;
>  };
>
>  struct memcg_vmstats {
> @@ -641,6 +644,9 @@ struct memcg_vmstats {
>         /* Pending child counts during tree propagation */
>         long                    state_pending[MEMCG_NR_STAT];
>         unsigned long           events_pending[NR_MEMCG_EVENTS];
> +
> +       /* Stats updates since the last flush */
> +       atomic64_t              stats_updates;
>  };
>
>  /*
> @@ -660,9 +666,7 @@ struct memcg_vmstats {
>   */
>  static void flush_memcg_stats_dwork(struct work_struct *w);
>  static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
> -static DEFINE_PER_CPU(unsigned int, stats_updates);
>  static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
> -static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
>  static u64 flush_last_time;
>
>  #define FLUSH_TIME (2UL*HZ)
> @@ -689,26 +693,37 @@ static void memcg_stats_unlock(void)
>         preempt_enable_nested();
>  }
>
> +
> +static bool memcg_should_flush_stats(struct mem_cgroup *memcg)
> +{
> +       return atomic64_read(&memcg->vmstats->stats_updates) >
> +               MEMCG_CHARGE_BATCH * num_online_cpus();
> +}
> +
>  static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
>  {
> +       int cpu = smp_processor_id();
>         unsigned int x;
>
>         if (!val)
>                 return;
>
> -       cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
> +       cgroup_rstat_updated(memcg->css.cgroup, cpu);
> +
> +       for (; memcg; memcg = parent_mem_cgroup(memcg)) {
> +               x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
> +                                         abs(val));
> +
> +               if (x < MEMCG_CHARGE_BATCH)
> +                       continue;
>
> -       x = __this_cpu_add_return(stats_updates, abs(val));
> -       if (x > MEMCG_CHARGE_BATCH) {
>                 /*
> -                * If stats_flush_threshold exceeds the threshold
> -                * (>num_online_cpus()), cgroup stats update will be triggered
> -                * in __mem_cgroup_flush_stats(). Increasing this var further
> -                * is redundant and simply adds overhead in atomic update.
> +                * If @memcg is already flush-able, increasing stats_updates is
> +                * redundant. Avoid the overhead of the atomic update.
>                  */
> -               if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
> -                       atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
> -               __this_cpu_write(stats_updates, 0);
> +               if (!memcg_should_flush_stats(memcg))
> +                       atomic64_add(x, &memcg->vmstats->stats_updates);
> +               __this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
>         }
>  }
>
> @@ -727,13 +742,12 @@ static void do_flush_stats(void)
>
>         cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
>
> -       atomic_set(&stats_flush_threshold, 0);
>         atomic_set(&stats_flush_ongoing, 0);
>  }
>
>  void mem_cgroup_flush_stats(void)
>  {
> -       if (atomic_read(&stats_flush_threshold) > num_online_cpus())
> +       if (memcg_should_flush_stats(root_mem_cgroup))
>                 do_flush_stats();
>  }
>
> @@ -747,8 +761,8 @@ void mem_cgroup_flush_stats_ratelimited(void)
>  static void flush_memcg_stats_dwork(struct work_struct *w)
>  {
>         /*
> -        * Always flush here so that flushing in latency-sensitive paths is
> -        * as cheap as possible.
> +        * Deliberately ignore memcg_should_flush_stats() here so that flushing
> +        * in latency-sensitive paths is as cheap as possible.
>          */
>         do_flush_stats();
>         queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
> @@ -5622,6 +5636,9 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
>                         }
>                 }
>         }
> +       /* We are in a per-cpu loop here, only do the atomic write once */
> +       if (atomic64_read(&memcg->vmstats->stats_updates))
> +               atomic64_set(&memcg->vmstats->stats_updates, 0);
>  }
>
>  #ifdef CONFIG_MMU
> --
> 2.42.0.459.ge4e396fd5e-goog
>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds
  2023-09-21  8:10 [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
                   ` (4 preceding siblings ...)
  2023-09-21  8:10 ` [PATCH 5/5] mm: memcg: restore subtree stats flushing Yosry Ahmed
@ 2023-10-02 21:46 ` Yosry Ahmed
  5 siblings, 0 replies; 12+ messages in thread
From: Yosry Ahmed @ 2023-10-02 21:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný,
	Waiman Long, kernel-team, Wei Xu, Greg Thelen, linux-mm, cgroups,
	linux-kernel

On Thu, Sep 21, 2023 at 1:11 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> This series attempts to address shortages in today's approach for memcg
> stats flushing, namely occasionally stale or expensive stat reads. The
> series does so by changing the threshold that we use to decide whether
> to trigger a flush to be per memcg instead of global (patch 3), and then
> changing flushing to be per memcg (i.e. subtree flushes) instead of
> global (patch 5).
>
> Patch 3 & 5 are the core of the series, and they include more details
> and testing results. The rest are either cleanups or prep work.
>
> This series replaces the "memcg: more sophisticated stats flushing"
> series [1], which also replaces another series, in a long list of
> attempts to improve memcg stats flushing. It is not a v2 as it is a
> completely different approach. This is based on collected feedback from
> discussions on lkml in all previous attempts. Hopefully, this is the
> final attempt.
>
> [1]https://lore.kernel.org/lkml/20230913073846.1528938-1-yosryahmed@google.com/
>
> Yosry Ahmed (5):
>   mm: memcg: change flush_next_time to flush_last_time
>   mm: memcg: move vmstats structs definition above flushing code
>   mm: memcg: make stats flushing threshold per-memcg
>   mm: workingset: move the stats flush into workingset_test_recent()
>   mm: memcg: restore subtree stats flushing
>
>  include/linux/memcontrol.h |   8 +-
>  mm/memcontrol.c            | 269 +++++++++++++++++++++----------------
>  mm/vmscan.c                |   2 +-
>  mm/workingset.c            |  37 +++--
>  4 files changed, 181 insertions(+), 135 deletions(-)
>
> --
> 2.42.0.459.ge4e396fd5e-goog
>

Friendly ping for feedback on this approach :)


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-10-02 21:47 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-21  8:10 [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
2023-09-21  8:10 ` [PATCH 1/5] mm: memcg: change flush_next_time to flush_last_time Yosry Ahmed
2023-09-21  8:10 ` [PATCH 2/5] mm: memcg: move vmstats structs definition above flushing code Yosry Ahmed
2023-09-21  8:10 ` [PATCH 3/5] mm: memcg: make stats flushing threshold per-memcg Yosry Ahmed
2023-09-21  8:13   ` Yosry Ahmed
2023-09-29 19:57   ` Yosry Ahmed
2023-09-21  8:10 ` [PATCH 4/5] mm: workingset: move the stats flush into workingset_test_recent() Yosry Ahmed
2023-09-21 11:05   ` kernel test robot
2023-09-21 21:00     ` Yosry Ahmed
2023-09-21 23:29   ` kernel test robot
2023-09-21  8:10 ` [PATCH 5/5] mm: memcg: restore subtree stats flushing Yosry Ahmed
2023-10-02 21:46 ` [PATCH 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox