From: nsaenzju@redhat.com
To: Vlastimil Babka <vbabka@suse.cz>, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	frederic@kernel.org,  tglx@linutronix.de, peterz@infradead.org,
	mtosatti@redhat.com, nilal@redhat.com,  mgorman@suse.de,
	linux-rt-users@vger.kernel.org, cl@linux.com, paulmck@kernel.org,
	 ppandit@redhat.com
Subject: Re: [RFC 0/3] mm/page_alloc: Remote per-cpu lists drain support
Date: Wed, 13 Oct 2021 14:50:08 +0200
Message-ID: <06e96819a964ca4b4ba504d0da71e81d79f3a87b.camel@redhat.com>
In-Reply-To: <38d28332-6b15-b353-5bcb-f691455c6495@suse.cz>

Hi Vlastimil, thanks for spending time on this.
Also, excuse me if I over-explain things.

On Tue, 2021-10-12 at 17:45 +0200, Vlastimil Babka wrote:
> On 10/8/21 18:19, Nicolas Saenz Julienne wrote:
> > This series replaces mm/page_alloc's per-cpu lists drain mechanism in order for
> > it to be able to be run remotely. Currently, only a local CPU is permitted to
> > change its per-cpu lists, and it's expected to do so, on-demand, whenever a
> > process demands it (by means of queueing a drain task on the local CPU). Most
> > systems will handle this promptly, but it'll cause problems for NOHZ_FULL CPUs
> > that can't take any sort of interruption without breaking their functional
> > guarantees (latency, bandwidth, etc...). Having a way for these processes to
> > remotely drain the lists themselves will make co-existing with isolated CPUs
> > possible, and comes with minimal performance[1]/memory cost to other users.
> > 
> > The new algorithm will atomically switch the pointer to the per-cpu lists and
> > use RCU to make sure it's not being used before draining them. 
> > 
> > I'm interested in any sort of feedback, but especially validating that the
> > approach is acceptable, and any tests/benchmarks you'd like to see run against
> 
> So let's consider the added alloc/free fast paths overhead:
> - Patch 1 - __alloc_pages_bulk() used to determine pcp_list once, now it's
> determined for each allocated page in __rmqueue_pcplist().

This one I can avoid; I missed the performance aspect of it. I was aiming at
making the code bearable.

> - Patch 2 - adds indirection from pcp->$foo to pcp->lp->$foo in each operation
> - Patch 3
>   - extra irqsave/irqrestore in free_pcppages_bulk (amortized)
>   - rcu_dereference_check() in free_unref_page_commit() and __rmqueue_pcplist()

Yes.

> BTW - I'm not sure if the RCU usage is valid here.
>
> The "read side" (normal operations) is using:
> rcu_dereference_check(pcp->lp,
> 		lockdep_is_held(this_cpu_ptr(&pagesets.lock)));
> 
> where the lockdep parameter according to the comments for
> rcu_dereference_check() means
> 
> "indicate to lockdep that foo->bar may only be dereferenced if either
> rcu_read_lock() is held, or that the lock required to replace the bar struct
> at foo->bar is held."

You missed the "Could be used to" at the beginning of the sentence :). That
said, I believe this is similar to what I'm doing, only the situation here is
more complex.

> but you are not taking rcu_read_lock() 

I am taking the rcu_read_lock() implicitly; it's explained in 'struct
per_cpu_pages', and in more depth below.

> and the "write side" (remote draining) actually doesn't take pagesets.lock,
> so it's not true that the "lock required to replace ... is held"? The write
> side uses rcu_replace_pointer(..., mutex_is_locked(&pcpu_drain_mutex))
> which is a different lock.

'pagesets.lock' protects against concurrent access to pcp->lp's contents, as
opposed to its address. pcp->lp itself is dereferenced atomically, so no
locking is needed for that operation.

The drain side never accesses pcp->lp's contents concurrently; it changes
pcp->lp's address and makes sure all CPUs are in sync with the new address
before clearing the stale data.

Just for the record, I think a better representation of what the 'check' in
rcu_dereference_check() means is:

 * Do an rcu_dereference(), but check that the conditions under which the
 * dereference will take place are correct.  Typically the conditions
 * indicate the various locking conditions that should be held at that
 * point. The check should return true if the conditions are satisfied.
 * An implicit check for being in an RCU read-side critical section
 * (rcu_read_lock()) is included.

So for the read side, that is, code reading pcp->lp's address and its contents,
the conditions to be met are: being in an RCU critical section, to make sure
RCU is keeping track of it, and holding 'pagesets.lock', to avoid concurrently
accessing pcp->lp's contents. The latter is achieved either by disabling local
irqs, or by disabling migration and taking a per-cpu rt_spinlock. Conveniently,
these are actions that implicitly delimit an RCU critical section (see [1] and
[2]). So the 'pagesets.lock' check fully covers the read-side locking/RCU
concerns.
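
To make that concrete, here's a minimal sketch of the read-side pattern. The
helper name and the lp->lists/lp->count layout are my shorthand for what the
series does, not its literal code; the rcu_dereference_check() expression is
the one quoted above:

	static struct page *pcp_pop_page(struct per_cpu_pages *pcp, int pindex)
	{
		struct pcplists *lp;
		struct page *page;

		/*
		 * The caller holds pagesets.lock: on !RT that disables local
		 * irqs, on RT it takes a per-cpu rt_spinlock with migration
		 * disabled. Either way we are inside an implicit RCU
		 * read-side critical section, which the lockdep expression
		 * below documents.
		 */
		lp = rcu_dereference_check(pcp->lp,
				lockdep_is_held(this_cpu_ptr(&pagesets.lock)));

		page = list_first_entry_or_null(&lp->lists[pindex],
						struct page, lru);
		if (page) {
			list_del(&page->lru);
			lp->count--;
		}
		return page;
	}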

On the write side, the drain has to make sure the pcp->lp address change is
atomic (this is achieved through rcu_replace_pointer()) and that lp->drain is
emptied before a new drain happens. So checking that pcpu_drain_mutex is held
is good enough.
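
As a hedged sketch of that write side (the helper names and the 'spare'
bookkeeping are my shorthand; the RFC's actual layout reaches the spare set
through lp->drain):

	static void pcp_drain_remote(struct per_cpu_pages *pcp)
	{
		struct pcplists *old;

		lockdep_assert_held(&pcpu_drain_mutex);

		/* Atomically publish the spare (empty) set of lists. */
		old = rcu_replace_pointer(pcp->lp, pcp->spare,
					  mutex_is_locked(&pcpu_drain_mutex));

		/*
		 * Wait for every CPU that may still hold the old pointer to
		 * leave its implicit read-side critical section.
		 */
		synchronize_rcu_expedited();

		/* 'old' is now private to us: flush it, keep it as spare. */
		free_pcplists(old);	/* hypothetical: pages back to buddy */
		pcp->spare = old;
	}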

> IOW, synchronize_rcu_expedited() AFAICS has nothing (no rcu_read_lock()) to
> synchronize against? Might accidentally work on !RT thanks to disabled irqs,
> but not sure about the RT lock semantics of the local_lock...
>
> So back to overhead, if I'm correct above we can assume that there would be
> also rcu_read_lock() in the fast paths.

As I explained above, no need.

> The alternative proposed by tglx was IIRC that there would be a spinlock on
> each cpu, which would be mostly uncontended except when draining. Maybe an
> uncontended spin lock/unlock would have lower overhead than all of the
> above? It would be certainly simpler, so I would probably try that first and
> see if it's acceptable?

You have a point here. I'll provide a performance rundown of both solutions.
This one is a bit more complex, that's for sure.
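
For comparison, my understanding of that alternative is roughly the following
(sketch only, not tglx's actual proposal, and it assumes a new pcp->lock
field that doesn't exist today):

	/*
	 * One lock per CPU's pcp: local fast paths take it essentially
	 * uncontended, and a remote drainer takes the victim CPU's lock
	 * instead of queueing a drain work item on that CPU.
	 */
	static void drain_cpu_pages(struct zone *zone, int cpu)
	{
		struct per_cpu_pages *pcp =
			per_cpu_ptr(zone->per_cpu_pageset, cpu);
		unsigned long flags;

		spin_lock_irqsave(&pcp->lock, flags);
		/* return the pcp lists' pages to the buddy allocator here */
		spin_unlock_irqrestore(&pcp->lock, flags);
	}

That trades the RCU machinery above for one (hopefully cache-hot,
uncontended) lock/unlock in every fast path, which is what the performance
rundown should compare.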

Thanks!

[1] See rcu_read_lock()'s description: "synchronize_rcu() wait for regions of
    code with preemption disabled, including regions of code with interrupts or
    softirqs disabled."

[2] See kernel/locking/spinlock_rt.c: "The RT [spinlock] substitutions
    explicitly disable migration and take rcu_read_lock() across the lock held
    section."

-- 
Nicolás Sáenz


