* [PATCH] mm/slub: fix comments about fastpath limitation on PREEMPT_RT
@ 2022-08-25  1:57 Hyeonggon Yoo
From: Hyeonggon Yoo @ 2022-08-25  1:57 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin
  Cc: Hyeonggon Yoo, Sebastian Andrzej Siewior, Thomas Gleixner,
	Mike Galbraith, linux-mm, linux-kernel

On PREEMPT_RT, disabling interrupts is unnecessary because there are no
users of slab in hardirq context.

The limitation of the lockless fastpath on PREEMPT_RT comes from the fact
that local_lock does not disable preemption there.

Fix comments accordingly.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
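A simplified sketch of the local_lock semantics this relies on, following
Documentation/locking/locktypes.rst. This is not the actual implementation
(include/linux/local_lock_internal.h also handles lockdep annotations,
migration disabling and so on), and the task A/B interleaving below is
only illustrative:

#ifndef CONFIG_PREEMPT_RT
/*
 * !PREEMPT_RT: taking the lock disables preemption (or interrupts for
 * the _irqsave variant), so a slowpath holding it cannot be interleaved
 * with a this_cpu_cmpxchg_double() based fastpath on the same CPU.
 */
#define local_lock(lock)			preempt_disable()
#define local_lock_irqsave(lock, flags)		local_irq_save(flags)
#else
/*
 * PREEMPT_RT: the lock is a per-CPU sleeping spinlock; neither
 * preemption nor interrupts are disabled while it is held, so the
 * following can happen on one CPU:
 *
 *   task A (slowpath): local_lock_irqsave(&s->cpu_slab->lock, flags);
 *                      ... updates c->freelist ...
 *        <task A preempted>
 *   task B (fastpath): this_cpu_cmpxchg_double(s->cpu_slab->freelist, ...)
 *                      -> not serialized against task A's update
 *
 * which is why the fastpath must also take the lock on PREEMPT_RT.
 */
#define local_lock(lock)			spin_lock(this_cpu_ptr(lock))
#define local_lock_irqsave(lock, flags)		spin_lock(this_cpu_ptr(lock))
#endif
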
 mm/slub.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 30c2ee9e8a29..aa42ac6013b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -100,7 +100,7 @@
  *   except the stat counters. This is a percpu structure manipulated only by
  *   the local cpu, so the lock protects against being preempted or interrupted
  *   by an irq. Fast path operations rely on lockless operations instead.
- *   On PREEMPT_RT, the local lock does not actually disable irqs (and thus
+ *   On PREEMPT_RT, the local lock does not actually disable preemption (and thus
  *   prevent the lockless operations), so fastpath operations also need to take
  *   the lock and are no longer lockless.
  *
@@ -3185,10 +3185,12 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	slab = c->slab;
 	/*
 	 * We cannot use the lockless fastpath on PREEMPT_RT because if a
-	 * slowpath has taken the local_lock_irqsave(), it is not protected
-	 * against a fast path operation in an irq handler. So we need to take
-	 * the slow path which uses local_lock. It is still relatively fast if
-	 * there is a suitable cpu freelist.
+	 * slowpath has taken the local_lock, which does not disable preemption
+	 * on PREEMPT_RT, it is not protected against a fast path operation in
+	 * another thread that does not take the local_lock.
+	 *
+	 * So we need to take the slow path which uses local_lock. It is still
+	 * relatively fast if there is a suitable cpu freelist.
 	 */
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
 	    unlikely(!object || !slab || !node_match(slab, node))) {
@@ -3457,10 +3459,13 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 #else /* CONFIG_PREEMPT_RT */
 		/*
 		 * We cannot use the lockless fastpath on PREEMPT_RT because if
-		 * a slowpath has taken the local_lock_irqsave(), it is not
-		 * protected against a fast path operation in an irq handler. So
-		 * we need to take the local_lock. We shouldn't simply defer to
-		 * __slab_free() as that wouldn't use the cpu freelist at all.
+		 * a slowpath has taken the local_lock, which does not disable
+		 * preemption on PREEMPT_RT, it is not protected against a
+		 * fast path operation in another thread that does not take
+		 * the local_lock.
+		 *
+		 * So we need to take the local_lock. We shouldn't simply defer
+		 * to __slab_free() as that wouldn't use the cpu freelist at all.
 		 */
 		void **freelist;
 
-- 
2.32.0




* Re: [PATCH] mm/slub: fix comments about fastpath limitation on PREEMPT_RT
@ 2022-08-25 13:11 ` Vlastimil Babka
From: Vlastimil Babka @ 2022-08-25 13:11 UTC
  To: Hyeonggon Yoo, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin
  Cc: Sebastian Andrzej Siewior, Thomas Gleixner, Mike Galbraith,
	linux-mm, linux-kernel

On 8/25/22 03:57, Hyeonggon Yoo wrote:
> On PREEMPT_RT, disabling interrupts is unnecessary because there are no
> users of slab in hardirq context.
> 
> The limitation of the lockless fastpath on PREEMPT_RT comes from the fact
> that local_lock does not disable preemption there.
> 
> Fix comments accordingly.
> 
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Just FTR, as the "slub: Make PREEMPT_RT support less convoluted" patch has
already dealt with these comments, there's now nothing left to apply from
below.

> ---
>  mm/slub.c | 23 ++++++++++++++---------
>  1 file changed, 14 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 30c2ee9e8a29..aa42ac6013b8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -100,7 +100,7 @@
>   *   except the stat counters. This is a percpu structure manipulated only by
>   *   the local cpu, so the lock protects against being preempted or interrupted
>   *   by an irq. Fast path operations rely on lockless operations instead.
> - *   On PREEMPT_RT, the local lock does not actually disable irqs (and thus
> + *   On PREEMPT_RT, the local lock does not actually disable preemption (and thus
>   *   prevent the lockless operations), so fastpath operations also need to take
>   *   the lock and are no longer lockless.
>   *
> @@ -3185,10 +3185,12 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
>  	slab = c->slab;
>  	/*
>  	 * We cannot use the lockless fastpath on PREEMPT_RT because if a
> -	 * slowpath has taken the local_lock_irqsave(), it is not protected
> -	 * against a fast path operation in an irq handler. So we need to take
> -	 * the slow path which uses local_lock. It is still relatively fast if
> -	 * there is a suitable cpu freelist.
> +	 * slowpath has taken the local_lock, which does not disable preemption
> +	 * on PREEMPT_RT, it is not protected against a fast path operation in
> +	 * another thread that does not take the local_lock.
> +	 *
> +	 * So we need to take the slow path which uses local_lock. It is still
> +	 * relatively fast if there is a suitable cpu freelist.
>  	 */
>  	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
>  	    unlikely(!object || !slab || !node_match(slab, node))) {
> @@ -3457,10 +3459,13 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
>  #else /* CONFIG_PREEMPT_RT */
>  		/*
>  		 * We cannot use the lockless fastpath on PREEMPT_RT because if
> -		 * a slowpath has taken the local_lock_irqsave(), it is not
> -		 * protected against a fast path operation in an irq handler. So
> -		 * we need to take the local_lock. We shouldn't simply defer to
> -		 * __slab_free() as that wouldn't use the cpu freelist at all.
> +		 * a slowpath has taken the local_lock, which does not disable
> +		 * preemption on PREEMPT_RT, it is not protected against a
> +		 * fast path operation in another thread that does not take
> +		 * the local_lock.
> +		 *
> +		 * So we need to take the local_lock. We shouldn't simply defer
> +		 * to __slab_free() as that wouldn't use the cpu freelist at all.
>  		 */
>  		void **freelist;
>  



