From: Mateusz Guzik <mjguzik@gmail.com>
To: Pedro Falcato <pfalcato@suse.de>
Cc: Harry Yoo <harry.yoo@oracle.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	 Christoph Lameter <cl@gentwo.org>,
	David Rientjes <rientjes@google.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Dennis Zhou <dennis@kernel.org>, Tejun Heo <tj@kernel.org>,
	 Jamal Hadi Salim <jhs@mojatatu.com>,
	Cong Wang <xiyou.wangcong@gmail.com>,
	 Jiri Pirko <jiri@resnulli.us>, Vlad Buslov <vladbu@nvidia.com>,
	 Yevgeny Kliteynik <kliteyn@nvidia.com>, Jan Kara <jack@suse.cz>,
	Byungchul Park <byungchul@sk.com>,
	 linux-mm@kvack.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/7] Reviving the slab destructor to tackle the percpu allocator scalability problem
Date: Thu, 24 Apr 2025 17:20:59 +0200	[thread overview]
Message-ID: <CAGudoHF8-tpc3nJeJ3gF2_GZZGp_raMBu4GXC_5omWMc7LhN1w@mail.gmail.com> (raw)
In-Reply-To: <lr2nridih62djx5ccdijiyacdz2hrubsh52tj6bivr6yfgajsj@mgziscqwlmtp>

On Thu, Apr 24, 2025 at 1:28 PM Pedro Falcato <pfalcato@suse.de> wrote:
> > How to do this with slab constructors and destructors: the constructor
> > allocates percpu memory, and the destructor frees it when the slab pages
> > are reclaimed; this slightly alters the constructor’s semantics,
> > as it can now fail.
> >
>
> I really really really really don't like this. We're opening a pandora's box
> of locking issues for slab deadlocks and other subtle issues. IMO the best
> solution there would be, what, failing dtors? which says a lot about the whole
> situation...
>

I noted the need to use leaf spin locks in my IRC conversations with
Harry and later in this very thread; it is a bummer this bit did not
make it into the cover letter -- it might have avoided this exchange.

I'm going to summarize this again here:

By API contract, the dtor may only take a leaf spinlock, in this case one which:
1. disables irqs
2. is the last lock in the dependency chain, i.e., no other locks are
taken while it is held

That way there is no possibility of a deadlock.
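
To illustrate, here is a minimal sketch of a contract-conforming
consumer. All the foo_* names are made up, and the dtor hook follows
the semantics proposed in the series; the point is only the locking
shape:

struct foo {
	int __percpu *counters;
	struct list_head defer_node;
};

static DEFINE_SPINLOCK(foo_defer_lock);	/* leaf lock per the contract above */
static LIST_HEAD(foo_defer_list);

static void foo_dtor(void *obj)
{
	struct foo *f = obj;
	unsigned long flags;

	/* 1. irqs disabled, 2. no other lock taken while it is held */
	spin_lock_irqsave(&foo_defer_lock, flags);
	list_add(&f->defer_node, &foo_defer_list);
	spin_unlock_irqrestore(&foo_defer_lock, flags);
	/*
	 * The percpu region hanging off the object is freed later from
	 * process context (e.g. a workqueue draining foo_defer_list),
	 * so the dtor itself never calls into another allocator.
	 */
}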

This poses the question of how to enforce the contract, and that bit
is easy: for example, one can add a leaf-spinlock notion to lockdep.
Then misuse on the allocation side will complain immediately, even
without triggering reclaim. Further, if one felt so inclined, a test
module could walk the list of all slab caches and do a
populate/reclaim cycle on those with a ctor/dtor pair.
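
Something along these lines (a sketch, assuming the module is built
in-tree so it can see the mm-internal slab_caches list and slab_mutex;
cache_has_dtor() is a stand-in for however the revived dtor ends up
being exposed):

static int __init dtor_smoke_init(void)
{
	struct kmem_cache *s;

	mutex_lock(&slab_mutex);
	list_for_each_entry(s, &slab_caches, list) {
		void *obj;

		if (!s->ctor || !cache_has_dtor(s))
			continue;

		/* populate... */
		obj = kmem_cache_alloc(s, GFP_KERNEL);
		if (obj)
			kmem_cache_free(s, obj);

		/* ...and reclaim: freeing empty slabs runs the dtors */
		kmem_cache_shrink(s);
	}
	mutex_unlock(&slab_mutex);
	return 0;
}
module_init(dtor_smoke_init);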

Then there is the matter of particular consumers being able to do what
they need on the dtor side while holding only such a spinlock. That
does not sound like a fundamental problem.

> Case in point:
> What happens if you allocate a slab and start ->ctor()-ing objects, and then
> one of the ctors fails? We need to free the ctor, but not without ->dtor()-ing
> everything back (AIUI this is not handled in this series, yet). Besides this
> complication, if failing dtors were added into the mix, we'd be left with a
> half-initialized slab(!!) in the middle of the cache waiting to get freed,
> without being able to.
>

Per my previous paragraph, failing dtors would be a self-induced problem.

I agree one has to roll things back if the ctors don't work out, but I
don't think this poses a significant problem.
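
Roughly, the slab page setup path would gain something along these
lines (a sketch; obj_at() is a hypothetical helper returning the i-th
object in a new slab, the int-returning ctor and the dtor hook are the
ones from the series):

	unsigned int i;

	for (i = 0; i < slab->objects; i++) {
		if (s->ctor(obj_at(slab, i)))
			goto rollback;
	}
	return slab;

rollback:
	/* dtor everything constructed so far, then discard the slab */
	while (i--)
		s->dtor(obj_at(slab, i));
	discard_slab(s, slab);
	return NULL;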

> Then there are obviously other problems like: whatever you're calling must
> not ever require the slab allocator (directly or indirectly) and must not
> do direct reclaim (ever!), at the risk of a deadlock. The pcpu allocator
> is a no-go (AIUI!) already because of such issues.
>

I don't see how that's true.

> Then there's the separate (but adjacent, particularly as we're considering
> this series due to performance improvements) issue that the ctor() and
> dtor() interfaces are terrible, in the sense that they do not let you batch
> in any way shape or form (requiring us to lock/unlock many times, allocate
> many times, etc). If this is done for performance improvements, I would prefer
> a superior ctor/dtor interface that takes something like a slab iterator and
> lets you do these things.
>

Batching is also something I mentioned, and it is indeed a "nice to
have" change. Note however that the work you are suggesting to batch
currently happens on every alloc/free cycle, so doing it once per
creation of a given object is already a win.
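
For reference, the batched variant could be as simple as the following
hooks (purely illustrative, nothing like this exists in the series):

	/* construct/destroy all objects of a new/dying slab in one call */
	int (*ctor_batch)(struct kmem_cache *s, void **objs, unsigned int nr);
	void (*dtor_batch)(struct kmem_cache *s, void **objs, unsigned int nr);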

-- 
Mateusz Guzik <mjguzik gmail.com>

