linux-mm.kvack.org archive mirror
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Christoph Lameter <cl@linux.com>,
	Aaron Tomlin <atomlin@atomlin.com>,
	Frederic Weisbecker <frederic@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Russell King <linux@armlinux.org.uk>,
	Huacai Chen <chenhuacai@kernel.org>,
	Heiko Carstens <hca@linux.ibm.com>,
	x86@kernel.org, Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH v5 00/12] fold per-CPU vmstats remotely
Date: Tue, 14 Mar 2023 09:59:37 -0300	[thread overview]
Message-ID: <ZBBvuYkWgtVXCV7J@tpad> (raw)
In-Reply-To: <ZBBn0evSQeuiNna4@dhcp22.suse.cz>

On Tue, Mar 14, 2023 at 01:25:53PM +0100, Michal Hocko wrote:
> On Mon 13-03-23 13:25:07, Marcelo Tosatti wrote:
> > This patch series addresses the following two problems:
> > 
> >     1. A customer provided some evidence which indicates that
> >        the idle tick was stopped; albeit, CPU-specific vmstat
> >        counters still remained populated.
> > 
> >        Thus one can only assume quiet_vmstat() was not
> >        invoked on return to the idle loop. If I understand
> >        correctly, I suspect this divergence might erroneously
> >        prevent a reclaim attempt by kswapd. If the number of
> >        zone specific free pages are below their per-cpu drift
> >        value then zone_page_state_snapshot() is used to
> >        compute a more accurate view of the aforementioned
> >        statistic.  Thus any task blocked on the NUMA node
> >        specific pfmemalloc_wait queue will be unable to make
> >        significant progress via direct reclaim unless it is
> >        killed after being woken up by kswapd
> >        (see throttle_direct_reclaim())
> 
> I have a hard time following the actual problem described above. Are you
> suggesting that a lack of pcp vmstat counter updates has led to
> reclaim issues? What is the said "evidence"? Could you share more of the
> story please?


  - The process was trapped in throttle_direct_reclaim().
    wait_event_killable() was called to wait for the condition
    allow_direct_reclaim(pgdat) to become true for the current node.
    allow_direct_reclaim(pgdat) examined the number of free pages
    on the node via zone_page_state(), which simply returns the value
    in zone->vm_stat[NR_FREE_PAGES].

  - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
    However, the freelist on this node was not empty.

  - This inconsistency in the vmstat value was caused by the percpu
    vmstat counters on nohz_full cpus. Every increment/decrement of
    vmstat is performed on the percpu vmstat counter first, and the
    pooled diffs are then accumulated into the zone's vmstat counter
    in a timely manner. However, on nohz_full cpus (in the case of
    this customer's system, 48 of 52 cpus) these pooled diffs were no
    longer accumulated once a cpu had no events on it, because the
    cpu then slept indefinitely.
    I checked the percpu vmstat and found a total of 69 counts not
    yet accumulated into the zone's vmstat counter.

  - In this situation, kswapd did not help the trapped process.
    In pgdat_balanced(), zone_watermark_ok_safe() examined the number
    of free pages on the node via zone_page_state_snapshot(), which
    includes the pending counts in the percpu vmstat.
    Therefore kswapd correctly saw that there were 69 free pages.
    Since zone->_watermark = {8, 20, 32}, kswapd did not perform
    reclaim because 69 was greater than the high watermark of 32.

  - As a result:
    - The process waited for allow_direct_reclaim(pgdat) to become true.
      - allow_direct_reclaim() saw 0 via zone_page_state().
        It woke kswapd since 0 was below the min watermark.
    - kswapd did nothing.
      - kswapd saw 69 via zone_page_state_snapshot().
        It woke the waiters without performing memory reclaim,
        since 69 was greater than the high watermark.
    - The process woken by kswapd soon restarted waiting for kswapd.
      - allow_direct_reclaim() still saw 0 via zone_page_state().
        It woke kswapd since 0 was below the min watermark.

> 
> >     2. With a SCHED_FIFO task that busy loops on a given CPU,
> >        and kworker for that CPU at SCHED_OTHER priority,
> >        queuing work to sync per-vmstats will either cause that
> >        work to never execute, or stalld (i.e. stall daemon)
> >        boosts kworker priority which causes a latency
> >        violation
> 
> Why is that a problem? Out-of-sync stats shouldn't cause major problems.
> Or can they?

Consider a SCHED_FIFO task that is polling the network queue (say
testpmd):

	do {
		if (net_registers->state & DATA_AVAILABLE)
			process_data();
	} while (!stopped);

Since this task runs at SCHED_FIFO priority, the kworker won't
be scheduled to run (and therefore the per-CPU vmstats won't be
flushed to the global vmstats).

Or, if testpmd runs at SCHED_OTHER, then the work item to
flush per-CPU vmstats causes context switches:

	testpmd -> kworker
	kworker: flush per-CPU vmstats
	kworker -> testpmd

And these context switches might cause undesired latencies for the
packets being processed by the testpmd task.



Thread overview: 20+ messages
2023-03-13 16:25 Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 01/12] this_cpu_cmpxchg: ARM64: switch this_cpu_cmpxchg to locked, add _local function Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 02/12] this_cpu_cmpxchg: loongarch: " Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 03/12] this_cpu_cmpxchg: S390: " Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 04/12] this_cpu_cmpxchg: x86: " Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 05/12] add this_cpu_cmpxchg_local and asm-generic definitions Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 06/12] convert this_cpu_cmpxchg users to this_cpu_cmpxchg_local Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 07/12] mm/vmstat: switch counter modification to cmpxchg Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 08/12] vmstat: switch per-cpu vmstat counters to 32-bits Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 09/12] mm/vmstat: use xchg in cpu_vm_stats_fold Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 10/12] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 11/12] mm/vmstat: refresh stats remotely instead of via work item Marcelo Tosatti
2023-03-13 16:25 ` [PATCH v5 12/12] vmstat: add pcp remote node draining via cpu_vm_stats_fold Marcelo Tosatti
2023-03-14 12:25 ` [PATCH v5 00/12] fold per-CPU vmstats remotely Michal Hocko
2023-03-14 12:59   ` Marcelo Tosatti [this message]
2023-03-14 13:00     ` Marcelo Tosatti
2023-03-14 14:31     ` Michal Hocko
2023-03-14 18:49       ` Marcelo Tosatti
2023-03-14 21:01         ` Michal Hocko
2023-03-15  0:29           ` Marcelo Tosatti
