From: Vlastimil Babka <vbabka@suse.cz>
To: Nicolas Saenz Julienne <nsaenzju@redhat.com>, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
frederic@kernel.org, tglx@linutronix.de, peterz@infradead.org,
mtosatti@redhat.com, nilal@redhat.com, mgorman@suse.de,
linux-rt-users@vger.kernel.org, cl@linux.com, paulmck@kernel.org,
ppandit@redhat.com
Subject: Re: [RFC 0/3] mm/page_alloc: Remote per-cpu lists drain support
Date: Tue, 12 Oct 2021 17:45:42 +0200 [thread overview]
Message-ID: <38d28332-6b15-b353-5bcb-f691455c6495@suse.cz> (raw)
In-Reply-To: <20211008161922.942459-1-nsaenzju@redhat.com>
On 10/8/21 18:19, Nicolas Saenz Julienne wrote:
> This series replaces mm/page_alloc's per-cpu lists drain mechanism so that
> it can be run remotely. Currently, only a local CPU is permitted to
> change its per-cpu lists, and it's expected to do so, on-demand, whenever a
> process demands it (by means of queueing a drain task on the local CPU). Most
> systems will handle this promptly, but it'll cause problems for NOHZ_FULL CPUs
> that can't take any sort of interruption without breaking their functional
> guarantees (latency, bandwidth, etc...). Having a way for these processes to
> remotely drain the lists themselves will make co-existing with isolated CPUs
> possible, and comes with minimal performance[1]/memory cost to other users.
>
> The new algorithm will atomically switch the pointer to the per-cpu lists and
> use RCU to make sure it's not being used before draining them.
>
> I'm interested in any sort of feedback, but especially validating that the
> approach is acceptable, and any tests/benchmarks you'd like to see run
> against it.

So let's consider the added alloc/free fast paths overhead:
- Patch 1 - __alloc_pages_bulk() used to determine pcp_list once, now it's
determined for each allocated page in __rmqueue_pcplist().
- Patch 2 - adds indirection from pcp->$foo to pcp->lp->$foo in each
  operation (sketched below)
- Patch 3
- extra irqsave/irqrestore in free_pcppages_bulk (amortized)
- rcu_dereference_check() in free_unref_page_commit() and __rmqueue_pcplist()
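
To make the patch 2 indirection concrete, this is roughly the shape I mean;
a simplified sketch, with field names approximated rather than copied from
the series:

/* Simplified sketch of the patch 2/3 layout, not the exact fields. */
struct pcplists {
	int count;
	struct list_head lists[NR_PCP_LISTS];
};

struct per_cpu_pages {
	/* ... high, batch, ... */
	struct pcplists __rcu *lp;	/* what the fast paths dereference   */
	struct pcplists pcplists[2];	/* backing storage, swapped on drain */
};

so every pcp->count or pcp->lists[] access in the fast paths becomes
pcp->lp->count or pcp->lp->lists[], plus the rcu_dereference discussed
below.
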
BTW - I'm not sure if the RCU usage is valid here.
The "read side" (normal operations) is using:
rcu_dereference_check(pcp->lp,
lockdep_is_held(this_cpu_ptr(&pagesets.lock)));
where the lockdep parameter according to the comments for
rcu_dereference_check() means
"indicate to lockdep that foo->bar may only be dereferenced if either
rcu_read_lock() is held, or that the lock required to replace the bar struct
at foo->bar is held."
but you are not taking rcu_read_lock() and the "write side" (remote
draining) actually doesn't take pagesets.lock, so it's not true that the
"lock required to replace ... is held"? The write side uses
rcu_replace_pointer(...,
mutex_is_locked(&pcpu_drain_mutex))
which is a different lock.
IOW, AFAICS synchronize_rcu_expedited() has no rcu_read_lock() sections to
synchronize against? It might accidentally work on !RT thanks to the
disabled irqs, but I'm not sure about the RT lock semantics of the
local_lock...
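
To spell the concern out, here are the two sides as I read patch 3,
condensed and with declarations omitted (so not the exact code from the
series):

/* Read side, i.e. the alloc/free fast paths: */
local_lock_irqsave(&pagesets.lock, flags);
lp = rcu_dereference_check(pcp->lp,
		lockdep_is_held(this_cpu_ptr(&pagesets.lock)));
/* ... add/remove pages on lp->lists, update lp->count ... */
local_unlock_irqrestore(&pagesets.lock, flags);

/* Write side, i.e. the remote drain: */
mutex_lock(&pcpu_drain_mutex);
old = rcu_replace_pointer(pcp->lp, new_lp,
		mutex_is_locked(&pcpu_drain_mutex));
/*
 * What read-side critical section does this wait for? There is no
 * rcu_read_lock() in the fast paths, and the drain doesn't take
 * pagesets.lock, so neither half of the rcu_dereference_check()
 * condition is what protects us here.
 */
synchronize_rcu_expedited();
/* ... free whatever is left on 'old' ... */
mutex_unlock(&pcpu_drain_mutex);
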
So back to overhead: if I'm correct above, we can assume that there would
also have to be rcu_read_lock() in the fast paths.
The alternative proposed by tglx was IIRC that there would be a spinlock on
each cpu, which would be mostly uncontended except when draining. Maybe an
uncontended spin lock/unlock would have lower overhead than all of the
above? It would certainly be simpler, so I would probably try that first
and see if it's acceptable?
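
I.e. something along these lines, just to sketch the idea (names made up,
irq/RT details glossed over):

/* Per-cpu spinlock variant; a sketch of the idea, not a proposal. */
struct per_cpu_pages {
	spinlock_t lock;	/* protects count/lists below */
	int count;
	/* ... */
	struct list_head lists[NR_PCP_LISTS];
};

/* Local fast path: the lock is almost always uncontended. */
spin_lock(&pcp->lock);
/* ... add/remove pages on pcp->lists ... */
spin_unlock(&pcp->lock);

/* Remote drain: take the target CPU's lock instead of queueing work. */
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
spin_lock(&pcp->lock);
free_pcppages_bulk(zone, pcp->count, pcp);
spin_unlock(&pcp->lock);

That would serialize the remote drain against the fast paths directly,
with no RCU involved.
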
> For now, I've been testing this successfully on both arm64 and x86_64
> systems while forcing high memory pressure (i.e. forcing page_alloc's
> slow path).
>
> Patches 1-2 serve as cleanups/preparation to make patch 3 easier to follow.
>
> Here's my previous attempt at fixing this:
> https://lkml.org/lkml/2021/9/21/599
>
> [1] Proper performance numbers will be provided if the approach is deemed
> acceptable. That said, mm/page_alloc.c's fast paths only grow by an extra
> pointer indirection and a compiler barrier, which I think is unlikely to be
> measurable.
>
> ---
>
> Nicolas Saenz Julienne (3):
> mm/page_alloc: Simplify __rmqueue_pcplist()'s arguments
> mm/page_alloc: Access lists in 'struct per_cpu_pages' indirectly
> mm/page_alloc: Add remote draining support to per-cpu lists
>
> include/linux/mmzone.h | 24 +++++-
> mm/page_alloc.c | 173 +++++++++++++++++++++--------------------
> mm/vmstat.c | 6 +-
> 3 files changed, 114 insertions(+), 89 deletions(-)
>