linux-mm.kvack.org archive mirror
From: Andrew Morton <akpm@linux-foundation.org>
To: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Cc: frederic@kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, tglx@linutronix.de, cl@linux.com,
	peterz@infradead.org, juri.lelli@redhat.com, mingo@redhat.com,
	mtosatti@redhat.com, nilal@redhat.com, mgorman@suse.de,
	ppandit@redhat.com, williams@redhat.com, bigeasy@linutronix.de,
	anna-maria@linutronix.de, linux-rt-users@vger.kernel.org
Subject: Re: [PATCH 0/6] mm: Remote LRU per-cpu pagevec cache/per-cpu page list drain support
Date: Tue, 21 Sep 2021 10:51:55 -0700
Message-ID: <20210921105155.73961c904b1f3bb5a40912c6@linux-foundation.org>
In-Reply-To: <20210921161323.607817-1-nsaenzju@redhat.com>

On Tue, 21 Sep 2021 18:13:18 +0200 Nicolas Saenz Julienne <nsaenzju@redhat.com> wrote:

> This series introduces an alternative locking scheme around mm/swap.c's per-cpu
> LRU pagevec caches and mm/page_alloc.c's per-cpu page lists which will allow
> for remote CPUs to drain them. Currently, only a local CPU is permitted to
> change its per-cpu lists, and it's expected to do so on demand whenever a
> process requires it (by queueing a drain task on the local CPU). Most
> systems will handle this promptly, but it'll cause problems for NOHZ_FULL CPUs
> that can't take any sort of interruption without breaking their functional
> guarantees (latency, bandwidth, etc...). Having a way for these processes to
> remotely drain the lists themselves will make co-existing with isolated CPUs
> possible, at the cost of more constraining locks.
> 
> Fortunately for non-NOHZ_FULL users, the alternative locking scheme and remote
> drain code are conditional on a static key which is disabled by default, so
> functional or performance regressions should be minimal. The feature will only
> be enabled if NOHZ_FULL's initialization was successful.
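
If I've read the series right, the locking helpers end up looking
roughly like the below.  This is only an illustrative sketch: the
struct and helper names are mine, and only the
'remote_pcpu_cache_access' static key name comes from the series
(patch 6/6).

/*
 * Sketch only: one per-cpu cache carrying both lock flavours.  The
 * static key is off by default and only flips on once, during boot.
 */
static DEFINE_STATIC_KEY_FALSE(remote_pcpu_cache_access);

struct pcpu_cache {
	local_lock_t	llock;	/* default: strictly CPU-local access */
	spinlock_t	slock;	/* used once remote draining is enabled */
	/* ... the pagevecs / pcp lists themselves ... */
};
static DEFINE_PER_CPU(struct pcpu_cache, pcpu_cache) = {
	.llock = INIT_LOCAL_LOCK(llock),
	/* slock gets spin_lock_init()ed for each CPU early during boot */
};

static struct pcpu_cache *pcpu_cache_lock(void)
{
	struct pcpu_cache *cache;

	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
		/*
		 * Remote CPUs may touch these lists, so take a real
		 * spinlock.  Migration after raw_cpu_ptr() is harmless:
		 * whichever CPU's cache we end up locking is protected.
		 */
		cache = raw_cpu_ptr(&pcpu_cache);
		spin_lock(&cache->slock);
	} else {
		/* Common case: cheap, strictly CPU-local locking. */
		local_lock(&pcpu_cache.llock);
		cache = this_cpu_ptr(&pcpu_cache);
	}
	return cache;
}

static void pcpu_cache_unlock(struct pcpu_cache *cache)
{
	if (static_branch_unlikely(&remote_pcpu_cache_access))
		spin_unlock(&cache->slock);
	else
		local_unlock(&pcpu_cache.llock);
}

The nice property is that static_branch_unlikely() keeps the common
case on the plain local_lock; presumably the series takes care of the
one-time transition when the key is flipped during boot.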

That all looks pretty straightforward.  Obvious problems are:

- Very little test coverage for the spinlocked code paths.  Virtually
  all test setups will be using local_lock() and the code path you care
  about will go untested.

  I hope that whoever does test the spinlock version will be running
  full debug kernels, including lockdep.  Because adding a spinlock
  where the rest of the code expects local_lock might introduce
  problems.

  A fix for all of this would be to enable the spin_lock code paths
  to be tested more widely.  Perhaps you could add a boot-time kernel
  parameter (or, not as good, a Kconfig thing) which forces the use of
  the spinlock code even on non-NOHZ_FULL systems; a rough sketch of
  what I mean follows after these notes.

  Or perhaps this debug/testing mode _should_ be enabled by Kconfig,
  so kernel fuzzers sometimes turn it on.

  Please have a think about all of this?

- Maintainability.  Few other MM developers will think about this new
  spinlocked mode much, and they are unlikely to runtime test the
  spinlock mode.  Adding the force-spinlocks-mode-on knob will help
  with this.
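
Something along these lines, perhaps (completely untested, and the
parameter name below is made up):

/*
 * Debug knob: force the spinlock scheme on even without nohz_full=,
 * so ordinary test setups and fuzzers exercise those code paths.
 */
static int __init force_remote_pcpu_drain(char *unused)
{
	/*
	 * jump_label_init() runs before early parameter parsing these
	 * days, so flipping the key from here should be fine.
	 */
	static_branch_enable(&remote_pcpu_cache_access);
	return 0;
}
early_param("remote_pcpu_drain_force", force_remote_pcpu_drain);

Booting a debug kernel with that and running the usual MM stress tests
under lockdep would then give the spin_lock paths some real coverage.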




Thread overview (20+ messages):
2021-09-21 16:13 Nicolas Saenz Julienne
2021-09-21 16:13 ` [PATCH 1/6] mm/swap: Introduce lru_cpu_needs_drain() Nicolas Saenz Julienne
2021-09-21 16:13 ` [PATCH 2/6] mm/swap: Introduce alternative per-cpu LRU cache locking Nicolas Saenz Julienne
2021-09-21 22:03   ` Peter Zijlstra
2021-09-22  8:47     ` nsaenzju
2021-09-22  9:20       ` Sebastian Andrzej Siewior
2021-09-22  9:50         ` nsaenzju
2021-09-22 11:37       ` Peter Zijlstra
2021-09-22 11:43         ` nsaenzju
2021-09-21 16:13 ` [PATCH 3/6] mm/swap: Allow remote LRU cache draining Nicolas Saenz Julienne
2021-09-21 16:13 ` [PATCH 4/6] mm/page_alloc: Introduce alternative per-cpu list locking Nicolas Saenz Julienne
2021-09-21 16:13 ` [PATCH 5/6] mm/page_alloc: Allow remote per-cpu page list draining Nicolas Saenz Julienne
2021-09-21 16:13 ` [PATCH 6/6] sched/isolation: Enable 'remote_pcpu_cache_access' on NOHZ_FULL systems Nicolas Saenz Julienne
2021-09-21 17:51 ` Andrew Morton [this message]
2021-09-21 17:59 ` [PATCH 0/6] mm: Remote LRU per-cpu pagevec cache/per-cpu page list drain support Vlastimil Babka
2021-09-22 11:28   ` Peter Zijlstra
2021-09-22 22:09     ` Thomas Gleixner
2021-09-23  7:12       ` Vlastimil Babka
2021-09-23 10:36         ` Thomas Gleixner
2021-09-27  9:30       ` nsaenzju
