From: Marcelo Tosatti <mtosatti@redhat.com>
To: Christoph Lameter <cl@linux.com>
Cc: Aaron Tomlin <atomlin@atomlin.com>,
Frederic Weisbecker <frederic@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Russell King <linux@armlinux.org.uk>,
Huacai Chen <chenhuacai@kernel.org>,
Heiko Carstens <hca@linux.ibm.com>,
x86@kernel.org, Vlastimil Babka <vbabka@suse.cz>,
Michal Hocko <mhocko@suse.com>,
Marcelo Tosatti <mtosatti@redhat.com>
Subject: [PATCH v6 10/12] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely
Date: Tue, 14 Mar 2023 15:59:24 -0300 [thread overview]
Message-ID: <20230314185951.779596601@redhat.com> (raw)
In-Reply-To: <20230314185914.836510860@redhat.com>
Now that the counters are modified via cmpxchg both CPU-locally
(via the account functions) and remotely (via cpu_vm_stats_fold),
it is possible to switch vmstat_shepherd to perform the per-CPU
vmstat folding remotely.
This fixes the following two problems:
1. A customer provided evidence indicating that the idle
tick was stopped, yet CPU-specific vmstat counters still
remained populated.
Thus one can only assume quiet_vmstat() was not
invoked on return to the idle loop. If I understand
correctly, I suspect this divergence might erroneously
prevent a reclaim attempt by kswapd. If the number of
zone-specific free pages is below the per-CPU drift
value, then zone_page_state_snapshot() is used to
compute a more accurate view of the aforementioned
statistic. Thus any task blocked on the NUMA-node-specific
pfmemalloc_wait queue will be unable to make
significant progress via direct reclaim unless it is
killed after being woken up by kswapd
(see throttle_direct_reclaim()). The evidence is:
- The process was trapped in throttle_direct_reclaim().
wait_event_killable() was called to wait for the condition
allow_direct_reclaim(pgdat) to become true for the current node.
allow_direct_reclaim(pgdat) examined the number of free pages
on the node via zone_page_state(), which just returns the value in
zone->vm_stat[NR_FREE_PAGES].
- On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
However, the freelist on this node was not empty.
- This inconsistency in the vmstat value was caused by the per-CPU
vmstat counters on nohz_full CPUs. Every increment/decrement of
vmstat is first performed on a per-CPU vmstat counter, and the pooled
diffs are then accumulated into the zone's vmstat counter in a
timely manner. However, on nohz_full CPUs (in this customer's
system, 48 of 52 CPUs) these pooled diffs were never accumulated
once a CPU had no further events and went to sleep indefinitely.
I checked the per-CPU vmstat counters and found a total of 69 counts
not yet accumulated into the zone's vmstat counter.
- In this situation, kswapd did not help the trapped process.
In pgdat_balanced(), zone_watermark_ok_safe() examined the number
of free pages on the node via zone_page_state_snapshot(), which
includes the pending counts in the per-CPU vmstat counters.
Therefore kswapd correctly saw the 69 free pages.
Since zone->_watermark = {8, 20, 32}, kswapd did not run, because
69 was greater than the high watermark of 32.
2. With a SCHED_FIFO task that busy-loops on a given CPU,
and a kworker for that CPU at SCHED_OTHER priority,
queueing work to sync the per-CPU vmstats will either cause
that work to never execute, or stalld (the stall daemon)
will boost the kworker's priority, which causes a latency
violation.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -2043,6 +2043,23 @@ static void vmstat_shepherd(struct work_
static DECLARE_DEFERRABLE_WORK(shepherd, vmstat_shepherd);
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+/* Flush counters remotely if CPU uses cmpxchg to update its per-CPU counters */
+static void vmstat_shepherd(struct work_struct *w)
+{
+ int cpu;
+
+ cpus_read_lock();
+ for_each_online_cpu(cpu) {
+ cpu_vm_stats_fold(cpu);
+ cond_resched();
+ }
+ cpus_read_unlock();
+
+ schedule_delayed_work(&shepherd,
+ round_jiffies_relative(sysctl_stat_interval));
+}
+#else
static void vmstat_shepherd(struct work_struct *w)
{
int cpu;
@@ -2062,6 +2079,7 @@ static void vmstat_shepherd(struct work_
schedule_delayed_work(&shepherd,
round_jiffies_relative(sysctl_stat_interval));
}
+#endif
static void __init start_shepherd_timer(void)
{
2023-03-14 18:59 [PATCH v6 00/12] fold per-CPU vmstats remotely Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 01/12] this_cpu_cmpxchg: ARM64: switch this_cpu_cmpxchg to locked, add _local function Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 02/12] this_cpu_cmpxchg: loongarch: " Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 03/12] this_cpu_cmpxchg: S390: " Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 04/12] this_cpu_cmpxchg: x86: " Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 05/12] add this_cpu_cmpxchg_local and asm-generic definitions Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 06/12] convert this_cpu_cmpxchg users to this_cpu_cmpxchg_local Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 07/12] mm/vmstat: switch counter modification to cmpxchg Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 08/12] vmstat: switch per-cpu vmstat counters to 32-bits Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 09/12] mm/vmstat: use xchg in cpu_vm_stats_fold Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 10/12] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely Marcelo Tosatti [this message]
2023-03-14 18:59 ` [PATCH v6 11/12] mm/vmstat: refresh stats remotely instead of via work item Marcelo Tosatti
2023-03-14 18:59 ` [PATCH v6 12/12] vmstat: add pcp remote node draining via cpu_vm_stats_fold Marcelo Tosatti