* [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
@ 2023-11-13 23:34 Marcelo Tosatti
2023-11-13 23:34 ` [patch 1/2] mm: vmstat: introduce node_page_state_pages_snapshot Marcelo Tosatti
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Marcelo Tosatti @ 2023-11-13 23:34 UTC (permalink / raw)
To: linux-kernel, linux-mm
Cc: Michal Hocko, Vlastimil Babka, Andrew Morton, David Hildenbrand,
Peter Xu
A customer reported processes hung at too_many_isolated;
analysis indicated that the problem occurred due to
out-of-sync per-CPU stats (see below).
The fix is to use node_page_state_snapshot to avoid acting on stale values.
2136 static unsigned long
2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
2138 struct scan_control *sc, enum lru_list lru)
2139 {
:
2145 bool file = is_file_lru(lru);
:
2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
:
2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
2151 if (stalled)
2152 return 0;
2153
2154 /* wait a bit for the reclaimer. */
2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
2156 stalled = true;
2157
2158 /* We are about to die and free our memory. Return now. */
2159 if (fatal_signal_pending(current))
2160 return SWAP_CLUSTER_MAX;
2161 }
msleep() must be called only when there are too many isolated pages:
2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
2020 struct scan_control *sc)
2021 {
:
2030 if (file) {
2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
2033 } else {
:
2046 return isolated > inactive;
The return value was true since:
crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
$8 = {
counter = 1
}
crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
$9 = {
counter = 2
while per_cpu stats had:
crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
$85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
$86 = 0xffff00917fcc32e0
crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
$87 = -1 '\377'
crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
$89 = 0xffff00917fe032e0
crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
$91 = -1 '\377'
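Folding the per-CPU deltas into the global counters (which is what a
snapshot read does) gives the values the check should have seen:

    NR_ISOLATED_FILE = 2 + (-1) + (-1) = 0
    NR_INACTIVE_FILE = 1 (no pending deltas)

so "isolated > inactive" is actually false, and the stall loop should
never have been entered.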
It seems that processes were trapped in a direct reclaim/compaction loop
because these nodes had free page counts at or below the min watermark.
crash> kmem -z | grep -A 3 Normal
:
NODE: 4 ZONE: 1 ADDR: ffff00817fffec40 NAME: "Normal"
SIZE: 8454144 PRESENT: 98304 MIN/LOW/HIGH: 68/166/264
VM_STAT:
NR_FREE_PAGES: 68
--
NODE: 5 ZONE: 1 ADDR: ffff00897fffec40 NAME: "Normal"
SIZE: 118784 MIN/LOW/HIGH: 82/200/318
VM_STAT:
NR_FREE_PAGES: 45
--
NODE: 6 ZONE: 1 ADDR: ffff00917fffec40 NAME: "Normal"
SIZE: 118784 MIN/LOW/HIGH: 82/200/318
VM_STAT:
NR_FREE_PAGES: 53
--
NODE: 7 ZONE: 1 ADDR: ffff00997fbbec40 NAME: "Normal"
SIZE: 118784 MIN/LOW/HIGH: 82/200/318
VM_STAT:
NR_FREE_PAGES: 52
---
include/linux/vmstat.h | 4 ++++
mm/compaction.c | 6 +++---
mm/vmscan.c | 8 ++++----
mm/vmstat.c | 28 ++++++++++++++++++++++++++++
4 files changed, 39 insertions(+), 7 deletions(-)
* [patch 1/2] mm: vmstat: introduce node_page_state_pages_snapshot
2023-11-13 23:34 [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters Marcelo Tosatti
@ 2023-11-13 23:34 ` Marcelo Tosatti
2023-11-13 23:34 ` [patch 2/2] mm: vmstat: use node_page_state_snapshot in too_many_isolated Marcelo Tosatti
2023-11-14 8:20 ` [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters Michal Hocko
2 siblings, 0 replies; 10+ messages in thread
From: Marcelo Tosatti @ 2023-11-13 23:34 UTC (permalink / raw)
To: linux-kernel, linux-mm
Cc: Michal Hocko, Vlastimil Babka, Andrew Morton, David Hildenbrand,
Peter Xu, Marcelo Tosatti
Introduce _snapshot variants of node_page_state and
node_page_state_pages, similar to zone_page_state_snapshot.
To be used by the next patch.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
---
include/linux/vmstat.h | 4 ++++
mm/vmstat.c | 28 ++++++++++++++++++++++++++++
2 files changed, 32 insertions(+)
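For reference, the new helper deliberately mirrors the existing
zone-level snapshot helper, which looks roughly like this in a
contemporary tree (include/linux/vmstat.h; exact struct and field names
vary between kernel versions):

static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);

#ifdef CONFIG_SMP
	int cpu;

	/* Fold in the per-CPU deltas that have not been synced yet. */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}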
Index: linux/mm/vmstat.c
===================================================================
--- linux.orig/mm/vmstat.c
+++ linux/mm/vmstat.c
@@ -1031,6 +1031,34 @@ unsigned long node_page_state(struct pgl
return node_page_state_pages(pgdat, item);
}
+
+/*
+ * Determine the per-node value of a stat item, snapshot version
+ * (see the comment on top of zone_page_state_snapshot).
+ */
+unsigned long node_page_state_pages_snapshot(struct pglist_data *pgdat,
+ enum node_stat_item item)
+{
+ long x = atomic_long_read(&pgdat->vm_stat[item]);
+#ifdef CONFIG_SMP
+ int cpu;
+
+ for_each_online_cpu(cpu)
+ x += per_cpu_ptr(pgdat->per_cpu_nodestats, cpu)->vm_node_stat_diff[item];
+
+ if (x < 0)
+ x = 0;
+#endif
+ return x;
+}
+
+unsigned long node_page_state_snapshot(struct pglist_data *pgdat,
+ enum node_stat_item item)
+{
+ VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));
+
+ return node_page_state_pages_snapshot(pgdat, item);
+}
#endif
#ifdef CONFIG_COMPACTION
Index: linux/include/linux/vmstat.h
===================================================================
--- linux.orig/include/linux/vmstat.h
+++ linux/include/linux/vmstat.h
@@ -262,6 +262,10 @@ extern unsigned long node_page_state(str
enum node_stat_item item);
extern unsigned long node_page_state_pages(struct pglist_data *pgdat,
enum node_stat_item item);
+extern unsigned long node_page_state_snapshot(struct pglist_data *pgdat,
+ enum node_stat_item item);
+extern unsigned long node_page_state_pages_snapshot(struct pglist_data *pgdat,
+ enum node_stat_item item);
extern void fold_vm_numa_events(void);
#else
#define sum_zone_node_page_state(node, item) global_zone_page_state(item)
* [patch 2/2] mm: vmstat: use node_page_state_snapshot in too_many_isolated
2023-11-13 23:34 [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters Marcelo Tosatti
2023-11-13 23:34 ` [patch 1/2] mm: vmstat: introduce node_page_state_pages_snapshot Marcelo Tosatti
@ 2023-11-13 23:34 ` Marcelo Tosatti
2023-11-14 8:20 ` [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters Michal Hocko
2 siblings, 0 replies; 10+ messages in thread
From: Marcelo Tosatti @ 2023-11-13 23:34 UTC (permalink / raw)
To: linux-kernel, linux-mm
Cc: Michal Hocko, Vlastimil Babka, Andrew Morton, David Hildenbrand,
Peter Xu, Marcelo Tosatti
A customer reported processes hung at too_many_isolated;
analysis indicated that the problem occurred due to
out-of-sync per-CPU stats (see below).
The fix is to use node_page_state_snapshot to avoid acting on stale values.
2136 static unsigned long
2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
2138 struct scan_control *sc, enum lru_list lru)
2139 {
:
2145 bool file = is_file_lru(lru);
:
2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
:
2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
2151 if (stalled)
2152 return 0;
2153
2154 /* wait a bit for the reclaimer. */
2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
2156 stalled = true;
2157
2158 /* We are about to die and free our memory. Return now. */
2159 if (fatal_signal_pending(current))
2160 return SWAP_CLUSTER_MAX;
2161 }
msleep() must be called only when there are too many isolated pages:
2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
2020 struct scan_control *sc)
2021 {
:
2030 if (file) {
2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
2033 } else {
:
2046 return isolated > inactive;
The return value was true since:
crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
$8 = {
counter = 1
}
crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
$9 = {
counter = 2
while per_cpu stats had:
crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
$85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
$86 = 0xffff00917fcc32e0
crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
$87 = -1 '\377'
crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
$89 = 0xffff00917fe032e0
crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
$91 = -1 '\377'
It seems that processes were trapped in a direct reclaim/compaction loop
because these nodes had free page counts at or below the min watermark.
crash> kmem -z | grep -A 3 Normal
:
NODE: 4 ZONE: 1 ADDR: ffff00817fffec40 NAME: "Normal"
SIZE: 8454144 PRESENT: 98304 MIN/LOW/HIGH: 68/166/264
VM_STAT:
NR_FREE_PAGES: 68
--
NODE: 5 ZONE: 1 ADDR: ffff00897fffec40 NAME: "Normal"
SIZE: 118784 MIN/LOW/HIGH: 82/200/318
VM_STAT:
NR_FREE_PAGES: 45
--
NODE: 6 ZONE: 1 ADDR: ffff00917fffec40 NAME: "Normal"
SIZE: 118784 MIN/LOW/HIGH: 82/200/318
VM_STAT:
NR_FREE_PAGES: 53
--
NODE: 7 ZONE: 1 ADDR: ffff00997fbbec40 NAME: "Normal"
SIZE: 118784 MIN/LOW/HIGH: 82/200/318
VM_STAT:
NR_FREE_PAGES: 52
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
---
mm/compaction.c | 6 +++---
mm/vmscan.c | 8 ++++----
2 files changed, 7 insertions(+), 7 deletions(-)
Index: linux/mm/compaction.c
===================================================================
--- linux.orig/mm/compaction.c
+++ linux/mm/compaction.c
@@ -791,11 +791,11 @@ static bool too_many_isolated(struct com
unsigned long active, inactive, isolated;
- inactive = node_page_state(pgdat, NR_INACTIVE_FILE) +
+ inactive = node_page_state_snapshot(pgdat, NR_INACTIVE_FILE) +
node_page_state(pgdat, NR_INACTIVE_ANON);
- active = node_page_state(pgdat, NR_ACTIVE_FILE) +
+ active = node_page_state_snapshot(pgdat, NR_ACTIVE_FILE) +
node_page_state(pgdat, NR_ACTIVE_ANON);
- isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
+ isolated = node_page_state_snapshot(pgdat, NR_ISOLATED_FILE) +
node_page_state(pgdat, NR_ISOLATED_ANON);
/*
Index: linux/mm/vmscan.c
===================================================================
--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1756,11 +1756,11 @@ static int too_many_isolated(struct pgli
return 0;
if (file) {
- inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
- isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
+ inactive = node_page_state_snapshot(pgdat, NR_INACTIVE_FILE);
+ isolated = node_page_state_snapshot(pgdat, NR_ISOLATED_FILE);
} else {
- inactive = node_page_state(pgdat, NR_INACTIVE_ANON);
- isolated = node_page_state(pgdat, NR_ISOLATED_ANON);
+ inactive = node_page_state_snapshot(pgdat, NR_INACTIVE_ANON);
+ isolated = node_page_state_snapshot(pgdat, NR_ISOLATED_ANON);
}
/*
* Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
2023-11-13 23:34 [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters Marcelo Tosatti
2023-11-13 23:34 ` [patch 1/2] mm: vmstat: introduce node_page_state_pages_snapshot Marcelo Tosatti
2023-11-13 23:34 ` [patch 2/2] mm: vmstat: use node_page_state_snapshot in too_many_isolated Marcelo Tosatti
@ 2023-11-14 8:20 ` Michal Hocko
2023-11-14 12:26 ` Marcelo Tosatti
2 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2023-11-14 8:20 UTC (permalink / raw)
To: Marcelo Tosatti
Cc: linux-kernel, linux-mm, Vlastimil Babka, Andrew Morton,
David Hildenbrand, Peter Xu
On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> A customer reported processes hung at too_many_isolated;
> analysis indicated that the problem occurred due to
> out-of-sync per-CPU stats (see below).
>
> The fix is to use node_page_state_snapshot to avoid acting on stale values.
>
> 2136 static unsigned long
> 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> 2138 struct scan_control *sc, enum lru_list lru)
> 2139 {
> :
> 2145 bool file = is_file_lru(lru);
> :
> 2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> :
> 2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
> 2151 if (stalled)
> 2152 return 0;
> 2153
> 2154 /* wait a bit for the reclaimer. */
> 2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
> 2156 stalled = true;
> 2157
> 2158 /* We are about to die and free our memory. Return now. */
> 2159 if (fatal_signal_pending(current))
> 2160 return SWAP_CLUSTER_MAX;
> 2161 }
>
> msleep() must be called only when there are too many isolated pages:
What do you mean here?
> 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> 2020 struct scan_control *sc)
> 2021 {
> :
> 2030 if (file) {
> 2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> 2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> 2033 } else {
> :
> 2046 return isolated > inactive;
>
> The return value was true since:
>
> crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> $8 = {
> counter = 1
> }
> crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> $9 = {
> counter = 2
>
> while per_cpu stats had:
>
> crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> $86 = 0xffff00917fcc32e0
> crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> $87 = -1 '\377'
>
> crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> $89 = 0xffff00917fe032e0
> crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> $91 = -1 '\377'
This doesn't really tell much. How much out of sync are they really,
cumulatively over all CPUs?
> It seems that processes were trapped in a direct reclaim/compaction loop
> because these nodes had free page counts at or below the min watermark.
>
> crash> kmem -z | grep -A 3 Normal
> :
> NODE: 4 ZONE: 1 ADDR: ffff00817fffec40 NAME: "Normal"
> SIZE: 8454144 PRESENT: 98304 MIN/LOW/HIGH: 68/166/264
> VM_STAT:
> NR_FREE_PAGES: 68
> --
> NODE: 5 ZONE: 1 ADDR: ffff00897fffec40 NAME: "Normal"
> SIZE: 118784 MIN/LOW/HIGH: 82/200/318
> VM_STAT:
> NR_FREE_PAGES: 45
> --
> NODE: 6 ZONE: 1 ADDR: ffff00917fffec40 NAME: "Normal"
> SIZE: 118784 MIN/LOW/HIGH: 82/200/318
> VM_STAT:
> NR_FREE_PAGES: 53
> --
> NODE: 7 ZONE: 1 ADDR: ffff00997fbbec40 NAME: "Normal"
> SIZE: 118784 MIN/LOW/HIGH: 82/200/318
> VM_STAT:
> NR_FREE_PAGES: 52
How have you concluded that too_many_isolated is at the root of this issue?
With a very low NR_FREE_PAGES and many contending allocations the system
could easily be stuck in reclaim. What are the other reclaim
characteristics? Is the direct reclaim successful?
--
Michal Hocko
SUSE Labs
* Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
2023-11-14 8:20 ` [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters Michal Hocko
@ 2023-11-14 12:26 ` Marcelo Tosatti
2023-11-14 12:46 ` Michal Hocko
0 siblings, 1 reply; 10+ messages in thread
From: Marcelo Tosatti @ 2023-11-14 12:26 UTC (permalink / raw)
To: Michal Hocko
Cc: linux-kernel, linux-mm, Vlastimil Babka, Andrew Morton,
David Hildenbrand, Peter Xu
Hi Michal,
On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > A customer reported processes hung at too_many_isolated;
> > analysis indicated that the problem occurred due to
> > out-of-sync per-CPU stats (see below).
> >
> > The fix is to use node_page_state_snapshot to avoid acting on stale values.
> >
> > 2136 static unsigned long
> > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > 2138 struct scan_control *sc, enum lru_list lru)
> > 2139 {
> > :
> > 2145 bool file = is_file_lru(lru);
> > :
> > 2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > :
> > 2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > 2151 if (stalled)
> > 2152 return 0;
> > 2153
> > 2154 /* wait a bit for the reclaimer. */
> > 2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
> > 2156 stalled = true;
> > 2157
> > 2158 /* We are about to die and free our memory. Return now. */
> > 2159 if (fatal_signal_pending(current))
> > 2160 return SWAP_CLUSTER_MAX;
> > 2161 }
> >
> > msleep() must be called only when there are too many isolated pages:
>
> What do you mean here?
That msleep() must not be called when
isolated > inactive
is false.
> > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > 2020 struct scan_control *sc)
> > 2021 {
> > :
> > 2030 if (file) {
> > 2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > 2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > 2033 } else {
> > :
> > 2046 return isolated > inactive;
> >
> > The return value was true since:
> >
> > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > $8 = {
> > counter = 1
> > }
> > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > $9 = {
> > counter = 2
> >
> > while per_cpu stats had:
> >
> > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > $86 = 0xffff00917fcc32e0
> > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > $87 = -1 '\377'
> >
> > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > $89 = 0xffff00917fe032e0
> > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > $91 = -1 '\377'
>
> This doesn't really tell much. How much out of sync are they really,
> cumulatively over all CPUs?
This is the cumulative value over all CPUs (offsets for other CPUs
have been omitted since they are zero).
> > It seems that processes were trapped in a direct reclaim/compaction loop
> > because these nodes had free page counts at or below the min watermark.
> >
> > crash> kmem -z | grep -A 3 Normal
> > :
> > NODE: 4 ZONE: 1 ADDR: ffff00817fffec40 NAME: "Normal"
> > SIZE: 8454144 PRESENT: 98304 MIN/LOW/HIGH: 68/166/264
> > VM_STAT:
> > NR_FREE_PAGES: 68
> > --
> > NODE: 5 ZONE: 1 ADDR: ffff00897fffec40 NAME: "Normal"
> > SIZE: 118784 MIN/LOW/HIGH: 82/200/318
> > VM_STAT:
> > NR_FREE_PAGES: 45
> > --
> > NODE: 6 ZONE: 1 ADDR: ffff00917fffec40 NAME: "Normal"
> > SIZE: 118784 MIN/LOW/HIGH: 82/200/318
> > VM_STAT:
> > NR_FREE_PAGES: 53
> > --
> > NODE: 7 ZONE: 1 ADDR: ffff00997fbbec40 NAME: "Normal"
> > SIZE: 118784 MIN/LOW/HIGH: 82/200/318
> > VM_STAT:
> > NR_FREE_PAGES: 52
>
> How have you concluded that too_many_isolated is at the root of this issue?
Because the customer observed the problem and obtained traces:
"If so, I have to mention about an another problem caused by vmstat issue here.
The customer experienced process hang like the issue reported here, but in this case the
process was trapped in compaction route. In shrink_inactive_list(), reclaim_throttle() is called
when too_many_isolated() is true. In fact confirmed from memory dump, there was no isolated
pages but zone's vmstat have 2 counts as isolated pages and percpu vmstats have -2 counts.
too_many = isolated > (inactive + active) / 2;
There was no more inactive and active pages. As the result, the process was throttled in
this point again and again until finish of parallel reclaimers who did not exist there in real."
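(For reference, the compaction-side check referred to above is computed
roughly as follows; abbreviated from mm/compaction.c, with the
throttling logic around it omitted:

static bool too_many_isolated(struct compact_control *cc)
{
	pg_data_t *pgdat = cc->zone->zone_pgdat;
	unsigned long active, inactive, isolated;

	inactive = node_page_state(pgdat, NR_INACTIVE_FILE) +
			node_page_state(pgdat, NR_INACTIVE_ANON);
	active = node_page_state(pgdat, NR_ACTIVE_FILE) +
			node_page_state(pgdat, NR_ACTIVE_ANON);
	isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
			node_page_state(pgdat, NR_ISOLATED_ANON);

	/* Stale global counters can keep this true indefinitely. */
	return isolated > (inactive + active) / 2;
}

With the dump values above, isolated reads as 2 against inactive +
active == 0, so the test stays true until the per-CPU deltas are folded
back in.)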
> With a very low NR_FREE_PAGES and many contending allocations the system
> could easily be stuck in reclaim. What are the other reclaim
> characteristics?
I can ask. What information in particular do you want to know?
> Is the direct reclaim successful?
Processes are stuck in too_many_isolated (unnecessarily). What do you mean when you ask
"Is the direct reclaim successful", precisely?
* Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
2023-11-14 12:26 ` Marcelo Tosatti
@ 2023-11-14 12:46 ` Michal Hocko
2023-11-21 13:35 ` Marcelo Tosatti
2023-11-22 11:23 ` Marcelo Tosatti
0 siblings, 2 replies; 10+ messages in thread
From: Michal Hocko @ 2023-11-14 12:46 UTC (permalink / raw)
To: Marcelo Tosatti
Cc: linux-kernel, linux-mm, Vlastimil Babka, Andrew Morton,
David Hildenbrand, Peter Xu
On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> Hi Michal,
>
> On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > A customer reported processes hung at too_many_isolated;
> > > analysis indicated that the problem occurred due to
> > > out-of-sync per-CPU stats (see below).
> > >
> > > The fix is to use node_page_state_snapshot to avoid acting on stale values.
> > >
> > > 2136 static unsigned long
> > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > 2138 struct scan_control *sc, enum lru_list lru)
> > > 2139 {
> > > :
> > > 2145 bool file = is_file_lru(lru);
> > > :
> > > 2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > :
> > > 2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > 2151 if (stalled)
> > > 2152 return 0;
> > > 2153
> > > 2154 /* wait a bit for the reclaimer. */
> > > 2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
> > > 2156 stalled = true;
> > > 2157
> > > 2158 /* We are about to die and free our memory. Return now. */
> > > 2159 if (fatal_signal_pending(current))
> > > 2160 return SWAP_CLUSTER_MAX;
> > > 2161 }
> > >
> > > msleep() must be called only when there are too many isolated pages:
> >
> > What do you mean here?
>
> That msleep() must not be called when
>
> isolated > inactive
>
> is false.
Well, but the code is structured in a way that this is simply true.
too_many_isolated might be a false positive because it is a very loose
interface and the number of isolated pages can fluctuate depending on
the number of direct reclaimers.
> > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > 2020 struct scan_control *sc)
> > > 2021 {
> > > :
> > > 2030 if (file) {
> > > 2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > 2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > 2033 } else {
> > > :
> > > 2046 return isolated > inactive;
> > >
> > > The return value was true since:
> > >
> > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > $8 = {
> > > counter = 1
> > > }
> > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > $9 = {
> > > counter = 2
> > >
> > > while per_cpu stats had:
> > >
> > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > $86 = 0xffff00917fcc32e0
> > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > $87 = -1 '\377'
> > >
> > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > $89 = 0xffff00917fe032e0
> > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > $91 = -1 '\377'
> >
> > This doesn't really tell much. How much out of sync are they really,
> > cumulatively over all CPUs?
>
> This is the cumulative value over all CPUs (offsets for other CPUs
> have been omitted since they are zero).
OK, so that means the NR_ISOLATED_FILE is 0 while NR_INACTIVE_FILE is 1,
correct? If that is the case then the value is indeed outdated but it
also means that the NR_INACTIVE_FILE is so small that all but 1 (resp. 2
as kswapd is never throttled) reclaimers will be stalled anyway. So does
the exact snapshot really help? Do you have any means to reproduce this
behavior and see that the patch actually changed the behavior?
[...]
> > With a very low NR_FREE_PAGES and many contending allocations the system
> > could easily be stuck in reclaim. What are the other reclaim
> > characteristics?
>
> I can ask. What information in particular do you want to know?
When I am dealing with issues like this I heavily rely on /proc/vmstat
counters and pgscan, pgsteal counters to see whether there is any
progress over time.
> > Is the direct reclaim successful?
>
> Processes are stuck in too_many_isolated (unnecessarily). What do you mean when you ask
> "Is the direct reclaim successful", precisely?
With such a small LRU list it is quite likely that many processes will
be competing over the last pages on the list while the rest will be throttled
because there is nothing to reclaim. It is quite possible that all
reclaimers will be waiting for a single reclaimer (either kswapd or
another direct reclaimer). I would like to understand whether the system
is stuck in an unproductive state where everybody just waits until the
counter is synced or everything just progresses very slowly because of the
small LRU.
--
Michal Hocko
SUSE Labs
* Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
2023-11-14 12:46 ` Michal Hocko
@ 2023-11-21 13:35 ` Marcelo Tosatti
2023-11-22 11:23 ` Marcelo Tosatti
1 sibling, 0 replies; 10+ messages in thread
From: Marcelo Tosatti @ 2023-11-21 13:35 UTC (permalink / raw)
To: Michal Hocko
Cc: linux-kernel, linux-mm, Vlastimil Babka, Andrew Morton,
David Hildenbrand, Peter Xu
On Tue, Nov 14, 2023 at 01:46:41PM +0100, Michal Hocko wrote:
> On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> > Hi Michal,
> >
> > On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > > A customer reported processes hung at too_many_isolated;
> > > > analysis indicated that the problem occurred due to
> > > > out-of-sync per-CPU stats (see below).
> > > >
> > > > The fix is to use node_page_state_snapshot to avoid acting on stale values.
> > > >
> > > > 2136 static unsigned long
> > > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > > 2138 struct scan_control *sc, enum lru_list lru)
> > > > 2139 {
> > > > :
> > > > 2145 bool file = is_file_lru(lru);
> > > > :
> > > > 2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > > :
> > > > 2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > > 2151 if (stalled)
> > > > 2152 return 0;
> > > > 2153
> > > > 2154 /* wait a bit for the reclaimer. */
> > > > 2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
> > > > 2156 stalled = true;
> > > > 2157
> > > > 2158 /* We are about to die and free our memory. Return now. */
> > > > 2159 if (fatal_signal_pending(current))
> > > > 2160 return SWAP_CLUSTER_MAX;
> > > > 2161 }
> > > >
> > > > msleep() must be called only when there are too many isolated pages:
> > >
> > > What do you mean here?
> >
> > That msleep() must not be called when
> >
> > isolated > inactive
> >
> > is false.
>
> Well, but the code is structured in a way that this is simply true.
> too_many_isolated might be a false positive because it is a very loose
> interface and the number of isolated pages can fluctuate depending on
> the number of direct reclaimers.
OK
>
> > > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > > 2020 struct scan_control *sc)
> > > > 2021 {
> > > > :
> > > > 2030 if (file) {
> > > > 2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > > 2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > > 2033 } else {
> > > > :
> > > > 2046 return isolated > inactive;
> > > >
> > > > The return value was true since:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > > $8 = {
> > > > counter = 1
> > > > }
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > > $9 = {
> > > > counter = 2
> > > >
> > > > while per_cpu stats had:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > > $86 = 0xffff00917fcc32e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $87 = -1 '\377'
> > > >
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > > $89 = 0xffff00917fe032e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $91 = -1 '\377'
> > >
> > > This doesn't really tell much. How much out of sync are they really,
> > > cumulatively over all CPUs?
> >
> > This is the cumulative value over all CPUs (offsets for other CPUs
> > have been omitted since they are zero).
>
> OK, so that means the NR_ISOLATED_FILE is 0 while NR_INACTIVE_FILE is 1,
> correct? If that is the case then the value is indeed outdated but it
> also means that the NR_INACTIVE_FILE is so small that all but 1 (resp. 2
> as kswapd is never throttled) reclaimers will be stalled anyway. So does
> the exact snapshot really help?
By looking at the data:
> crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> $8 = {
> counter = 1
> }
> crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> $9 = {
> counter = 2
>
> while per_cpu stats had:
>
> crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> $86 = 0xffff00917fcc32e0
> crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> $87 = -1 '\377'
>
> crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> $89 = 0xffff00917fe032e0
> crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> $91 = -1 '\377'
Actual-Value = Global-Counter + CPU0.delta + CPU1.delta + ... + CPUn.delta
Nr-Isolated-File = Nr-Isolated-Global + CPU0.delta-isolated + CPU1.delta-isolated + ... + CPUn.delta-isolated
Nr-Inactive-File = Nr-Inactive-Global + CPU0.delta-inactive + CPU1.delta-inactive + ... + CPUn.delta-inactive
With outdated values:
====================
Nr-Isolated-File = 2
Nr-Inactive-File = 1
Therefore isolated > inactive, since 2 > 1.
Without outdated values (snapshot):
==================================
Nr-Isolated-File = 2 - 1 - 1 = 0
Nr-Inactive-File = 1
> Do you have any means to reproduce this
> behavior and see that the patch actually changed the behavior?
No, because it's not easy to test patches on the system where this was
reproduced.
However, the calculations above seem pretty unambiguous, showing that
the snapshot would fix the problem.
> [...]
>
> > > With a very low NR_FREE_PAGES and many contending allocations the system
> > > could easily be stuck in reclaim. What are the other reclaim
> > > characteristics?
> >
> > I can ask. What information in particular do you want to know?
>
> When I am dealing with issues like this I heavily rely on /proc/vmstat
> counters and pgscan, pgsteal counters to see whether there is any
> progress over time.
I understand your desire for additional data, and can try to grab it
(or create a synthetic configuration where this problem is
reproducible).
However, given the calculations above, it is clear that one problem is the
out-of-sync counters. Don't you agree?
> > > Is the direct reclaim successful?
> >
> > Processes are stuck in too_many_isolated (unnecessarily). What do you mean when you ask
> > "Is the direct reclaim successful", precisely?
>
> With such a small LRU list it is quite likely that many processes will
> be competing over the last pages on the list while the rest will be throttled
> because there is nothing to reclaim. It is quite possible that all
> reclaimers will be waiting for a single reclaimer (either kswapd or
> another direct reclaimer).
Sure, but again, the calculations above show that processes are stuck
on too_many_isolated (and the proposed fix will address that situation).
> I would like to understand whether the system
> is stuck in an unproductive state where everybody just waits until the
> counter is synced or everything just progresses very slowly because of the
> small LRU.
OK.
* Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
2023-11-14 12:46 ` Michal Hocko
2023-11-21 13:35 ` Marcelo Tosatti
@ 2023-11-22 11:23 ` Marcelo Tosatti
2023-11-22 11:26 ` Marcelo Tosatti
1 sibling, 1 reply; 10+ messages in thread
From: Marcelo Tosatti @ 2023-11-22 11:23 UTC (permalink / raw)
To: Michal Hocko
Cc: linux-kernel, linux-mm, Vlastimil Babka, Andrew Morton,
David Hildenbrand, Peter Xu
On Tue, Nov 14, 2023 at 01:46:41PM +0100, Michal Hocko wrote:
> On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> > Hi Michal,
> >
> > On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > > A customer reported processes hung at too_many_isolated;
> > > > analysis indicated that the problem occurred due to
> > > > out-of-sync per-CPU stats (see below).
> > > >
> > > > The fix is to use node_page_state_snapshot to avoid acting on stale values.
> > > >
> > > > 2136 static unsigned long
> > > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > > 2138 struct scan_control *sc, enum lru_list lru)
> > > > 2139 {
> > > > :
> > > > 2145 bool file = is_file_lru(lru);
> > > > :
> > > > 2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > > :
> > > > 2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > > 2151 if (stalled)
> > > > 2152 return 0;
> > > > 2153
> > > > 2154 /* wait a bit for the reclaimer. */
> > > > 2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
> > > > 2156 stalled = true;
> > > > 2157
> > > > 2158 /* We are about to die and free our memory. Return now. */
> > > > 2159 if (fatal_signal_pending(current))
> > > > 2160 return SWAP_CLUSTER_MAX;
> > > > 2161 }
> > > >
> > > > msleep() must be called only when there are too many isolated pages:
> > >
> > > What do you mean here?
> >
> > That msleep() must not be called when
> >
> > isolated > inactive
> >
> > is false.
>
> Well, but the code is structured in a way that this is simply true.
> too_many_isolated might be a false positive because it is a very loose
> interface and the number of isolated pages can fluctuate depending on
> the number of direct reclaimers.
>
> > > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > > 2020 struct scan_control *sc)
> > > > 2021 {
> > > > :
> > > > 2030 if (file) {
> > > > 2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > > 2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > > 2033 } else {
> > > > :
> > > > 2046 return isolated > inactive;
> > > >
> > > > The return value was true since:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > > $8 = {
> > > > counter = 1
> > > > }
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > > $9 = {
> > > > counter = 2
> > > >
> > > > while per_cpu stats had:
> > > >
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > > $86 = 0xffff00917fcc32e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $87 = -1 '\377'
> > > >
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > > $89 = 0xffff00917fe032e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $91 = -1 '\377'
> > >
> > > This doesn't really tell much. How much out of sync are they really,
> > > cumulatively over all CPUs?
> >
> > This is the cumulative value over all CPUs (offsets for other CPUs
> > have been omitted since they are zero).
>
> OK, so that means the NR_ISOLATED_FILE is 0 while NR_INACTIVE_FILE is 1,
> correct? If that is the case then the value is indeed outdated but it
> also means that the NR_INACTIVE_FILE is so small that all but 1 (resp. 2
> as kswapd is never throttled) reclaimers will be stalled anyway. So does
> the exact snapshot really help? Do you have any means to reproduce this
> behavior and see that the patch actually changed the behavior?
>
> [...]
>
> > > With a very low NR_FREE_PAGES and many contending allocations the system
> > > could easily be stuck in reclaim. What are the other reclaim
> > > characteristics?
> >
> > I can ask. What information in particular do you want to know?
>
> When I am dealing with issues like this I heavily rely on /proc/vmstat
> counters and pgscan, pgsteal counters to see whether there is any
> progress over time.
>
> > > Is the direct reclaim successful?
> >
> > Processes are stuck in too_many_isolated (unnecessarily). What do you mean when you ask
> > "Is the direct reclaim successful", precisely?
>
> With such a small LRU list it is quite likely that many processes will
> be competing over the last pages on the list while the rest will be throttled
> because there is nothing to reclaim. It is quite possible that all
> reclaimers will be waiting for a single reclaimer (either kswapd or
> another direct reclaimer). I would like to understand whether the system
> is stuck in an unproductive state where everybody just waits until the
> counter is synced or everything just progresses very slowly because of the
> small LRU.
> --
> Michal Hocko
> SUSE Labs
Michal,
I think this provides the data you are looking for:
It seems that the situation was invoking memory-consuming user programs
in parallel, expecting that the system would kick the oom-killer at the end.
Nodes 0-3 are small, containing system data and almost all files.
Nodes 4-7 are large, prepared to contain user data only.
The issue described above was observed on nodes 4-7, which
had very little memory for files.
Nodes 4-7 have more CPUs than nodes 0-3.
Only CPUs on nodes 4-7 are configured to be nohz_full.
So we often found unflushed percpu vmstat counters on the CPUs of nodes 4-7.
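(As an aside, on kernels that provide it, writing to the vm.stat_refresh
sysctl folds the pending per-CPU deltas into the global counters, which
can be used to confirm this kind of skew from userspace:

	# echo 1 > /proc/sys/vm/stat_refresh

after which /proc/vmstat reflects the synced values.)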
* Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
2023-11-22 11:23 ` Marcelo Tosatti
@ 2023-11-22 11:26 ` Marcelo Tosatti
2023-11-22 13:56 ` Michal Hocko
0 siblings, 1 reply; 10+ messages in thread
From: Marcelo Tosatti @ 2023-11-22 11:26 UTC (permalink / raw)
To: Michal Hocko
Cc: linux-kernel, linux-mm, Vlastimil Babka, Andrew Morton,
David Hildenbrand, Peter Xu
On Wed, Nov 22, 2023 at 08:23:51AM -0300, Marcelo Tosatti wrote:
> On Tue, Nov 14, 2023 at 01:46:41PM +0100, Michal Hocko wrote:
> > On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> > > Hi Michal,
> > >
> > > On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > > > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > > > A customer reported processes hung at too_many_isolated;
> > > > > analysis indicated that the problem occurred due to
> > > > > out-of-sync per-CPU stats (see below).
> > > > >
> > > > > The fix is to use node_page_state_snapshot to avoid acting on stale values.
> > > > >
> > > > > 2136 static unsigned long
> > > > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > > > 2138 struct scan_control *sc, enum lru_list lru)
> > > > > 2139 {
> > > > > :
> > > > > 2145 bool file = is_file_lru(lru);
> > > > > :
> > > > > 2147 struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > > > :
> > > > > 2150 while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > > > 2151 if (stalled)
> > > > > 2152 return 0;
> > > > > 2153
> > > > > 2154 /* wait a bit for the reclaimer. */
> > > > > 2155 msleep(100); <--- some processes were sleeping here, with pending SIGKILL.
> > > > > 2156 stalled = true;
> > > > > 2157
> > > > > 2158 /* We are about to die and free our memory. Return now. */
> > > > > 2159 if (fatal_signal_pending(current))
> > > > > 2160 return SWAP_CLUSTER_MAX;
> > > > > 2161 }
> > > > >
> > > > > msleep() must be called only when there are too many isolated pages:
> > > >
> > > > What do you mean here?
> > >
> > > That msleep() must not be called when
> > >
> > > isolated > inactive
> > >
> > > is false.
> >
> > Well, but the code is structured in a way that this is simply true.
> > too_many_isolated might be a false positive because it is a very loose
> > interface and the number of isolated pages can fluctuate depending on
> > the number of direct reclaimers.
> >
> > > > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > > > 2020 struct scan_control *sc)
> > > > > 2021 {
> > > > > :
> > > > > 2030 if (file) {
> > > > > 2031 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > > > 2032 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > > > 2033 } else {
> > > > > :
> > > > > 2046 return isolated > inactive;
> > > > >
> > > > > The return value was true since:
> > > > >
> > > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > > > $8 = {
> > > > > counter = 1
> > > > > }
> > > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > > > $9 = {
> > > > > counter = 2
> > > > >
> > > > > while per_cpu stats had:
> > > > >
> > > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > > > $86 = 0xffff00917fcc32e0
> > > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > > $87 = -1 '\377'
> > > > >
> > > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > > > $89 = 0xffff00917fe032e0
> > > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > > $91 = -1 '\377'
> > > >
> > > > This doesn't really tell much. How much out of sync are they really,
> > > > cumulatively over all CPUs?
> > >
> > > This is the cumulative value over all CPUs (offsets for other CPUs
> > > have been omitted since they are zero).
> >
> > OK, so that means the NR_ISOLATED_FILE is 0 while NR_INACTIVE_FILE is 1,
> > correct? If that is the case then the value is indeed outdated but it
> > also means that the NR_INACTIVE_FILE is so small that all but 1 (resp. 2
> > as kswapd is never throttled) reclaimers will be stalled anyway. So does
> > the exact snapshot really help? Do you have any means to reproduce this
> > behavior and see that the patch actually changed the behavior?
> >
> > [...]
> >
> > > > With a very low NR_FREE_PAGES and many contending allocations the system
> > > > could easily be stuck in reclaim. What are the other reclaim
> > > > characteristics?
> > >
> > > I can ask. What information in particular do you want to know?
> >
> > When I am dealing with issues like this I heavily rely on /proc/vmstat
> > counters and pgscan, pgsteal counters to see whether there is any
> > progress over time.
> >
> > > > Is the direct reclaim successful?
> > >
> > > Processes are stuck in too_many_isolated (unnecessarily). What do you mean when you ask
> > > "Is the direct reclaim successful", precisely?
> >
> > With such a small LRU list it is quite likely that many processes will
> > be competing over the last pages on the list while the rest will be throttled
> > because there is nothing to reclaim. It is quite possible that all
> > reclaimers will be waiting for a single reclaimer (either kswapd or
> > another direct reclaimer). I would like to understand whether the system
> > is stuck in an unproductive state where everybody just waits until the
> > counter is synced or everything just progresses very slowly because of the
> > small LRU.
> > --
> > Michal Hocko
> > SUSE Labs
>
> Michal,
>
> I think this provides the data you are looking for:
>
> It seems that the situation was invoking memory-consuming user programs
> in parallel, expecting that the system would kick the oom-killer at the end.
>
> Nodes 0-3 are small, containing system data and almost all files.
> Nodes 4-7 are large, prepared to contain user data only.
> The issue described above was observed on nodes 4-7, which
> had very little memory for files.
>
> Nodes 4-7 have more CPUs than nodes 0-3.
> Only CPUs on nodes 4-7 are configured to be nohz_full.
> So we often found unflushed percpu vmstat counters on the CPUs of nodes 4-7.
>
>
Michal,
Let me know if you have any objections to the patch, thanks.
* Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
2023-11-22 11:26 ` Marcelo Tosatti
@ 2023-11-22 13:56 ` Michal Hocko
0 siblings, 0 replies; 10+ messages in thread
From: Michal Hocko @ 2023-11-22 13:56 UTC (permalink / raw)
To: Marcelo Tosatti
Cc: linux-kernel, linux-mm, Vlastimil Babka, Andrew Morton,
David Hildenbrand, Peter Xu
On Wed 22-11-23 08:26:02, Marcelo Tosatti wrote:
[...]
> Michal,
>
> Let me know if you have any objections to the patch, thanks.
I do not think you have explained how the patch helps, nor have you
shown that it fixes the described problem. You seem to be very focused on
the specific snapshot, which I do agree shows that the data is out of
sync and that there is throttling happening when, strictly speaking, it
should not. But (let me repeat) those discrepancies are so small that
it is very likely that concurrent reclaimers will be stalled (it takes just
one to isolate those pages) anyway. Maybe this leads to an earlier OOM
killer invocation, as unthrottled reclaimers will be able to conclude
there is no progress rather than being throttled in direct reclaim.
That being said, I am not saying the patch is incorrect. Nevertheless, I
do not think we want to merge this patch without a better understanding
of what is going on in your specific case and what kind of runtime
difference the patch makes in that case. From your previous email it
seems the actual case is mostly a memory stress test that manages to
fill the memory and push out almost all of the file LRU while the anon
LRU is not reclaimable for some reason. That shouldn't be terribly hard
to reproduce.
--
Michal Hocko
SUSE Labs