From: Vlastimil Babka <vbabka@suse.cz>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: bpf <bpf@vger.kernel.org>, linux-mm <linux-mm@kvack.org>,
	Harry Yoo <harry.yoo@oracle.com>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Michal Hocko <mhocko@suse.com>,
	Andrii Nakryiko <andrii@kernel.org>,
	Kumar Kartikeya Dwivedi <memxor@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [PATCH v2 3/6] locking/local_lock: Introduce local_lock_lockdep_start/end()
Date: Mon, 14 Jul 2025 17:35:52 +0200	[thread overview]
Message-ID: <12615023-1762-49fc-9c86-2e1d9f5997f3@suse.cz> (raw)
In-Reply-To: <20250714110639.uOaKJEfL@linutronix.de>

On 7/14/25 13:06, Sebastian Andrzej Siewior wrote:
> On 2025-07-11 19:19:26 [-0700], Alexei Starovoitov wrote:
>> > If there is no parent check then we could do "normal lock" on both
>> > sides.
>> 
>> How would ___slab_alloc() know whether there was a parent check or not?
>> 
>> imo keeping local_lock_irqsave() as-is is cleaner,
>> since if there is no parent check lockdep will rightfully complain.
> 
> what about this:
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 7e2ffe1d46c6c..3520d1c25c205 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3693,6 +3693,34 @@ static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
>  	return freelist;
>  }
>  
> +static void local_lock_cpu_slab(struct kmem_cache *s, const gfp_t gfp_flags,
> +				unsigned long *flags)
> +{
> +	bool allow_spin = gfpflags_allow_spinning(gfp_flags);
> +
> +	/*
> +	 * ___slab_alloc()'s caller is supposed to check whether
> +	 * kmem_cache::kmem_cache_cpu::lock can be acquired without a
> +	 * deadlock before invoking the function.
> +	 *
> +	 * On PREEMPT_RT an invocation is not possible from IRQ-off or
> +	 * preempt-disabled context. The lock is always acquired and, if
> +	 * needed, the caller will block and sleep until the lock becomes
> +	 * available.
> +	 *
> +	 * On !PREEMPT_RT allocations from any context but NMI are safe. The
> +	 * lock is always acquired with interrupts disabled, so it is always
> +	 * possible to acquire it.
> +	 * In NMI context it is necessary to check whether the lock is
> +	 * already held. If it is not, it is safe to acquire it. The trylock
> +	 * semantic is used to tell lockdep that we don't spin. The BUG_ON()
> +	 * will not trigger if it is safe to acquire the lock.
> +	 */
> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && !allow_spin)
> +		BUG_ON(!local_trylock_irqsave(&s->cpu_slab->lock, *flags));
> +	else
> +		local_lock_irqsave(&s->cpu_slab->lock, *flags);

If we go with this, then I think the better approach would be simply:

if (unlikely(!local_trylock_irqsave(&s->cpu_slab->lock, *flags)))
	local_lock_irqsave(&s->cpu_slab->lock, *flags);

- no branches before the likely-to-succeed local_trylock_irqsave()
- the unlikely local_lock_irqsave() fallback exists to handle the PREEMPT_RT
case / provide lockdep checks in case we screw up
- we don't really need to evaluate allow_spin or add a BUG_ON() (adding new
BUG_ON()s is actively discouraged these days anyway) - if we screw up, either
lockdep will splat, or we deadlock (see the sketch below)
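
Spelled out, the helper would then be something like this minimal sketch
(keeping the name and signature from Sebastian's diff; gfp_flags becomes
unused in this variant and is only kept so the call sites stay unchanged -
untested, just to illustrate):

static void local_lock_cpu_slab(struct kmem_cache *s, const gfp_t gfp_flags,
				unsigned long *flags)
{
	/*
	 * No branches in front of the trylock - it is expected to
	 * succeed in every context that is allowed to get here.
	 */
	if (unlikely(!local_trylock_irqsave(&s->cpu_slab->lock, *flags)))
		/*
		 * Fallback for PREEMPT_RT, where the lock may block and
		 * sleep, and for lockdep, which will splat if we got
		 * here from a context that must not spin.
		 */
		local_lock_irqsave(&s->cpu_slab->lock, *flags);
}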

Also I'm thinking that on !PREEMPT_RT && !LOCKDEP we don't even need the
fallback local_lock_irqsave() part? The trylock is supposed to always
succeed, right? Either we allow spinning, which means we're not under
kmalloc_nolock() and should not be interrupting the locked section (as
before this series). Or it's the opposite, and then the earlier
local_lock_is_locked() check should have prevented us from getting here. So
I guess we could just trylock without checking the return value - any
screw-up should blow up quickly even without the BUG_ON().
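
If that holds, a hypothetical variant could look like the sketch below
(same signature as above; this assumes the trylock really cannot fail on
!PREEMPT_RT && !LOCKDEP - again untested, just to illustrate the idea):

static void local_lock_cpu_slab(struct kmem_cache *s, const gfp_t gfp_flags,
				unsigned long *flags)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_LOCKDEP)) {
		/* RT may block here; lockdep checks the context for us. */
		if (unlikely(!local_trylock_irqsave(&s->cpu_slab->lock,
						    *flags)))
			local_lock_irqsave(&s->cpu_slab->lock, *flags);
	} else {
		/*
		 * Supposed to always succeed: either spinning is allowed
		 * and nothing interrupts the locked section, or the
		 * earlier local_lock_is_locked() check kept us from
		 * getting here. A failure would blow up quickly anyway.
		 */
		local_trylock_irqsave(&s->cpu_slab->lock, *flags);
	}
}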

> +}
> +
>  /*
>   * Slow path. The lockless freelist is empty or we need to perform
>   * debugging duties.
> @@ -3765,7 +3793,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  		goto deactivate_slab;
>  
>  	/* must check again c->slab in case we got preempted and it changed */
> -	local_lock_irqsave(&s->cpu_slab->lock, flags);
> +	local_lock_cpu_slab(s, gfpflags, &flags);
> +
>  	if (unlikely(slab != c->slab)) {
>  		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
>  		goto reread_slab;
> @@ -3803,7 +3832,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  
>  deactivate_slab:
>  
> -	local_lock_irqsave(&s->cpu_slab->lock, flags);
> +	local_lock_cpu_slab(s, gfpflags, &flags);
>  	if (slab != c->slab) {
>  		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
>  		goto reread_slab;
> @@ -3819,7 +3848,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  
>  #ifdef CONFIG_SLUB_CPU_PARTIAL
>  	while (slub_percpu_partial(c)) {
> -		local_lock_irqsave(&s->cpu_slab->lock, flags);
> +		local_lock_cpu_slab(s, gfpflags, &flags);
>  		if (unlikely(c->slab)) {
>  			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
>  			goto reread_slab;
> @@ -3947,7 +3976,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  
>  retry_load_slab:
>  
> -	local_lock_irqsave(&s->cpu_slab->lock, flags);
> +	local_lock_cpu_slab(s, gfpflags, &flags);
>  	if (unlikely(c->slab)) {
>  		void *flush_freelist = c->freelist;
>  		struct slab *flush_slab = c->slab;
> @@ -4003,12 +4032,8 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  			p = ERR_PTR(-EBUSY);
>  			goto out;
>  		}
> -		local_lock_lockdep_start(&s->cpu_slab->lock);
> -		p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
> -		local_lock_lockdep_end(&s->cpu_slab->lock);
> -	} else {
> -		p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
>  	}
> +	p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
>  out:
>  #ifdef CONFIG_PREEMPT_COUNT
>  	slub_put_cpu_ptr(s->cpu_slab);
> 
> 
> Sebastian


