From: Pedro Falcato <pfalcato@suse.de>
To: Harry Yoo <harry.yoo@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Dennis Zhou <dennis@kernel.org>, Tejun Heo <tj@kernel.org>,
Mateusz Guzik <mjguzik@gmail.com>,
Jamal Hadi Salim <jhs@mojatatu.com>,
Cong Wang <xiyou.wangcong@gmail.com>,
Jiri Pirko <jiri@resnulli.us>, Vlad Buslov <vladbu@nvidia.com>,
Yevgeny Kliteynik <kliteyn@nvidia.com>, Jan Kara <jack@suse.cz>,
Byungchul Park <byungchul@sk.com>,
linux-mm@kvack.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/7] Reviving the slab destructor to tackle the percpu allocator scalability problem
Date: Thu, 24 Apr 2025 12:28:37 +0100 [thread overview]
Message-ID: <lr2nridih62djx5ccdijiyacdz2hrubsh52tj6bivr6yfgajsj@mgziscqwlmtp> (raw)
In-Reply-To: <20250424080755.272925-1-harry.yoo@oracle.com>
On Thu, Apr 24, 2025 at 05:07:48PM +0900, Harry Yoo wrote:
> Overview
> ========
>
> The slab destructor feature existed in the early days of the slab
> allocator(s). It was removed by commit c59def9f222d ("Slab allocators:
> Drop support for destructors") in 2007 due to a lack of serious use
> cases at the time.
>
> Eighteen years later, Mateusz Guzik proposed [1] re-introducing a slab
> constructor/destructor pair to mitigate the global serialization point
> (pcpu_alloc_mutex) that occurs when each slab object allocates and frees
> percpu memory during its lifetime.
>
> Consider mm_struct: it allocates two percpu regions (mm_cid and rss_stat),
> so each allocate/free cycle requires two expensive acquire/release pairs
> on that mutex.
>
> We can mitigate this contention by retaining the percpu regions after
> the object is freed and releasing them only when the backing slab pages
> are freed.
>
> This maps naturally onto slab constructors and destructors: the
> constructor allocates the percpu memory, and the destructor frees it
> when the slab pages are reclaimed. This slightly alters the
> constructor's semantics, as it can now fail.
>
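To make the proposed semantics concrete, here is a minimal userspace sketch (all names hypothetical; malloc()/free() stand in for alloc_percpu()/free_percpu(), and the error return models -ENOMEM):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical object that caches a "percpu" region across free/alloc
 * cycles, the way mm_struct would cache mm_cid and rss_stat. The region
 * is created by the ctor when the backing slab page is allocated and is
 * only torn down by the dtor when that page is reclaimed. */
struct obj {
	void *pcpu_region;	/* stand-in for real percpu memory */
};

/* Constructor: note the changed semantics, it can now fail. */
int obj_ctor(void *p)
{
	struct obj *o = p;

	o->pcpu_region = malloc(64);	/* stand-in for alloc_percpu() */
	if (!o->pcpu_region)
		return -1;		/* would be -ENOMEM in the kernel */
	memset(o->pcpu_region, 0, 64);
	return 0;
}

/* Destructor: runs only at slab-page reclaim, not on every free. */
void obj_dtor(void *p)
{
	struct obj *o = p;

	free(o->pcpu_region);		/* stand-in for free_percpu() */
	o->pcpu_region = NULL;
}
```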
I really, really, really don't like this. We're opening a Pandora's box
of slab deadlocks, locking issues, and other subtle problems. IMO the best
solution there would be, what, failing dtors? Which says a lot about the
whole situation...
Case in point:
What happens if you allocate a slab and start ->ctor()-ing objects, and then
one of the ctors fails? We need to free the slab, but not without ->dtor()-ing
everything back (AIUI this is not handled in this series yet). On top of this
complication, if failing dtors were added to the mix, we'd be left with a
half-initialized slab(!!) sitting in the middle of the cache, waiting to be
freed yet unable to be.
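For illustration, a hedged userspace sketch of the unwind the allocator would need (all names hypothetical): on a mid-slab ctor failure, every already-constructed object must be ->dtor()-ed back, in reverse, before the page can be freed:

```c
#include <stddef.h>

typedef int  (*ctor_fn)(void *obj);
typedef void (*dtor_fn)(void *obj);

/* Construct all nr objects of a freshly allocated slab. On a ctor
 * failure, roll back objects [0, i) in reverse; without this unwind,
 * a failed ctor leaves a half-initialized slab that cannot be freed. */
int construct_slab_objects(void *base, size_t obj_size, size_t nr,
			   ctor_fn ctor, dtor_fn dtor)
{
	size_t i;

	for (i = 0; i < nr; i++) {
		if (ctor((char *)base + i * obj_size) != 0)
			goto unwind;
	}
	return 0;

unwind:
	while (i-- > 0)
		dtor((char *)base + i * obj_size);
	return -1;
}

/* Tiny demo ctor/dtor so the unwind path can be exercised: the ctor
 * fails on the call whose index equals demo_fail_at (-1 = never). */
int demo_ctor_calls, demo_dtor_calls, demo_fail_at = -1;

int demo_ctor(void *obj)
{
	(void)obj;
	if (demo_ctor_calls++ == demo_fail_at)
		return -1;
	return 0;
}

void demo_dtor(void *obj)
{
	(void)obj;
	demo_dtor_calls++;
}
```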
Then there are other obvious problems: whatever you're calling must never
require the slab allocator (directly or indirectly) and must never do
direct reclaim, at the risk of deadlock. The pcpu allocator is already a
no-go (AIUI!) because of exactly such issues.
Then there's the separate (but adjacent, particularly since we're
considering this series for its performance improvements) issue that the
ctor()/dtor() interfaces are terrible, in the sense that they don't let
you batch in any way, shape, or form (forcing us to lock/unlock many
times, allocate many times, etc.). If this is being done for performance,
I would prefer a superior ctor/dtor interface that takes something like a
slab iterator and lets you batch these operations.
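A batched interface of that kind might hand the callback an iterator over the whole slab, so locks are taken and backing allocations made once per slab rather than once per object. A hypothetical sketch (nothing like this exists today; every name here is invented):

```c
#include <stddef.h>

/* Hypothetical iterator over all objects of one slab. A batch ctor of
 * the form int (*ctor_batch)(struct slab_obj_iter *) could then take
 * pcpu_alloc_mutex once and initialize every object under it. */
struct slab_obj_iter {
	void	*base;		/* first object in the slab */
	size_t	obj_size;	/* cache's object size */
	size_t	nr;		/* objects per slab */
	size_t	pos;		/* iteration cursor */
};

/* Return the next object, or NULL once the slab is exhausted. */
void *slab_obj_iter_next(struct slab_obj_iter *it)
{
	if (it->pos >= it->nr)
		return NULL;
	return (char *)it->base + it->pos++ * it->obj_size;
}

/* Example batch callback: visit every object exactly once. */
int count_objects(struct slab_obj_iter *it)
{
	int n = 0;

	while (slab_obj_iter_next(it))
		n++;
	return n;
}
```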
The ghost of 1992 Solaris still haunts us...
> This series is functional (although not compatible with MM debug
> features yet), but still far from perfect. I’m actively refining it and
> would appreciate early feedback before I improve it further. :)
>
> This series is based on slab/for-next [2].
>
> Performance Improvement
> =======================
>
> I measured the benefit of this series for two different users:
> exec() and tc filter insertion/removal.
>
> exec() throughput
> -----------------
>
> The performance of exec() is important when short-lived processes are
> frequently created. For example: shell-heavy workloads and running many
> test cases [3].
>
> I measured exec() throughput with a microbenchmark:
> - a 33% exec() throughput gain on a 2-socket machine with 192 CPUs,
> - a 4.56% gain on a desktop with 24 hardware threads, and
> - even a 4% gain in single-threaded exec() throughput.
>
> Further investigation showed that this was due to the overhead of
> acquiring/releasing pcpu_alloc_mutex and its contention.
>
> See patch 7 for more detail on the experiment.
>
> Traffic Filter Insertion and Removal
> ------------------------------------
>
> Each tc filter allocates three percpu memory regions per tc_action
> object, so frequent filter insertion and removal contends heavily on
> the same mutex.
>
> In the Linux kernel tools/testing tc-filter benchmark (see patch 4 for
> more detail), I observed a 26% reduction in system time and much less
> contention on pcpu_alloc_mutex with this series.
>
> I saw in old mailing list threads Mellanox (now NVIDIA) engineers cared
> about tc filter insertion rate; these changes may still benefit
> workloads they run today.
>
The performance improvements are obviously fantastic, but I do wonder
whether things could be fixed by addressing the underlying problems
instead of papering over them with slab allocator magic and dubious
object lifecycles.
In this case, the big issue is that the pcpu allocator does not scale well.
--
Pedro