linux-mm.kvack.org archive mirror
* [RFC PATCH] mm, swap: don't disable preemption while taking the per-CPU cache
@ 2017-06-23 10:12 Sebastian Andrzej Siewior
  2017-06-23 10:34 ` Michal Hocko
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-06-23 10:12 UTC (permalink / raw)
  To: Tim Chen; +Cc: tglx, ying.huang, Michal Hocko, linux-mm

get_cpu_var() disables preemption and returns the per-CPU version of the
variable. Disabling preemption is useful to ensure atomic access to the
variable within the critical section.
In this case, however, the ->free_lock is acquired right after the
per-CPU version of the variable is obtained, so the raw accessor can be
used instead. The only caveat is that ->slots_ret has to be re-tested
under the lock, because without disabled preemption it can now be set to
NULL between the initial check and the lock acquisition.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 mm/swap_slots.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 58f6c78f1dad..51c304477482 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -272,11 +272,11 @@ int free_swap_slot(swp_entry_t entry)
 {
 	struct swap_slots_cache *cache;
 
-	cache = &get_cpu_var(swp_slots);
+	cache = raw_cpu_ptr(&swp_slots);
 	if (use_swap_slot_cache && cache->slots_ret) {
 		spin_lock_irq(&cache->free_lock);
 		/* Swap slots cache may be deactivated before acquiring lock */
-		if (!use_swap_slot_cache) {
+		if (!use_swap_slot_cache || !cache->slots_ret) {
 			spin_unlock_irq(&cache->free_lock);
 			goto direct_free;
 		}
@@ -296,7 +296,6 @@ int free_swap_slot(swp_entry_t entry)
 direct_free:
 		swapcache_free_entries(&entry, 1);
 	}
-	put_cpu_var(swp_slots);
 
 	return 0;
 }
-- 
2.13.1

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


* Re: [RFC PATCH] mm, swap: don't disable preemption while taking the per-CPU cache
  2017-06-23 10:12 [RFC PATCH] mm, swap: don't disable preemption while taking the per-CPU cache Sebastian Andrzej Siewior
@ 2017-06-23 10:34 ` Michal Hocko
  2017-06-23 11:47   ` [PATCH] " Sebastian Andrzej Siewior
  0 siblings, 1 reply; 6+ messages in thread
From: Michal Hocko @ 2017-06-23 10:34 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: Tim Chen, tglx, ying.huang, linux-mm

On Fri 23-06-17 12:12:54, Sebastian Andrzej Siewior wrote:
> get_cpu_var() disables preemption and returns the per-CPU version of the
> variable. Disabling preemption is useful to ensure atomic access to the
> variable within the critical section.
> In this case, however, the ->free_lock is acquired right after the
> per-CPU version of the variable is obtained, so the raw accessor can be
> used instead. The only caveat is that ->slots_ret has to be re-tested
> under the lock, because without disabled preemption it can now be set to
> NULL between the initial check and the lock acquisition.

The changelog doesn't explain why this change matters. Disabling
preemption shortly before taking a spinlock shouldn't make much
difference. I suspect you care because of RT, right? In that case spell
that out in the changelog and explain why it matters.

Other than that, the patch looks good to me.

> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  mm/swap_slots.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 58f6c78f1dad..51c304477482 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -272,11 +272,11 @@ int free_swap_slot(swp_entry_t entry)
>  {
>  	struct swap_slots_cache *cache;
>  
> -	cache = &get_cpu_var(swp_slots);
> +	cache = raw_cpu_ptr(&swp_slots);
>  	if (use_swap_slot_cache && cache->slots_ret) {
>  		spin_lock_irq(&cache->free_lock);
>  		/* Swap slots cache may be deactivated before acquiring lock */
> -		if (!use_swap_slot_cache) {
> +		if (!use_swap_slot_cache || !cache->slots_ret) {
>  			spin_unlock_irq(&cache->free_lock);
>  			goto direct_free;
>  		}
> @@ -296,7 +296,6 @@ int free_swap_slot(swp_entry_t entry)
>  direct_free:
>  		swapcache_free_entries(&entry, 1);
>  	}
> -	put_cpu_var(swp_slots);
>  
>  	return 0;
>  }
> -- 
> 2.13.1
> 

-- 
Michal Hocko
SUSE Labs


* [PATCH] mm, swap: don't disable preemption while taking the per-CPU cache
  2017-06-23 10:34 ` Michal Hocko
@ 2017-06-23 11:47   ` Sebastian Andrzej Siewior
  2017-06-23 12:02     ` Michal Hocko
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-06-23 11:47 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Tim Chen, tglx, ying.huang, linux-mm, Andrew Morton

get_cpu_var() disables preemption and returns the per-CPU version of the
variable. Disabling preemption is useful to ensure atomic access to the
variable within the critical section.
In this case, however, the ->free_lock is acquired right after the
per-CPU version of the variable is obtained, so the raw accessor can be
used instead. The only caveat is that ->slots_ret has to be re-tested
under the lock, because without disabled preemption it can now be set to
NULL between the initial check and the lock acquisition.
This popped up during PREEMPT-RT testing: on RT, spinlocks are sleeping
locks and must not be taken in a preempt-disabled section.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
On 2017-06-23 12:34:23 [+0200], Michal Hocko wrote:
> The changelog doesn't explain why this change matters. Disabling
> preemption shortly before taking a spinlock shouldn't make much
> difference. I suspect you care because of RT, right? In that case spell
> that out in the changelog and explain why it matters.

yes, it is bad for RT. I added the RT pieces as explanation.

> Other than that, the patch looks good to me.

Thank you. +akpm.

 mm/swap_slots.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 58f6c78f1dad..51c304477482 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -272,11 +272,11 @@ int free_swap_slot(swp_entry_t entry)
 {
 	struct swap_slots_cache *cache;
 
-	cache = &get_cpu_var(swp_slots);
+	cache = raw_cpu_ptr(&swp_slots);
 	if (use_swap_slot_cache && cache->slots_ret) {
 		spin_lock_irq(&cache->free_lock);
 		/* Swap slots cache may be deactivated before acquiring lock */
-		if (!use_swap_slot_cache) {
+		if (!use_swap_slot_cache || !cache->slots_ret) {
 			spin_unlock_irq(&cache->free_lock);
 			goto direct_free;
 		}
@@ -296,7 +296,6 @@ int free_swap_slot(swp_entry_t entry)
 direct_free:
 		swapcache_free_entries(&entry, 1);
 	}
-	put_cpu_var(swp_slots);
 
 	return 0;
 }
-- 
2.13.1


* Re: [PATCH] mm, swap: don't disable preemption while taking the per-CPU cache
  2017-06-23 11:47   ` [PATCH] " Sebastian Andrzej Siewior
@ 2017-06-23 12:02     ` Michal Hocko
  2017-06-23 12:08       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 6+ messages in thread
From: Michal Hocko @ 2017-06-23 12:02 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Tim Chen, tglx, ying.huang, linux-mm, Andrew Morton

On Fri 23-06-17 13:47:55, Sebastian Andrzej Siewior wrote:
> get_cpu_var() disables preemption and returns the per-CPU version of the
> variable. Disabling preemption is useful to ensure atomic access to the
> variable within the critical section.
> In this case, however, the ->free_lock is acquired right after the
> per-CPU version of the variable is obtained, so the raw accessor can be
> used instead. The only caveat is that ->slots_ret has to be re-tested
> under the lock, because without disabled preemption it can now be set to
> NULL between the initial check and the lock acquisition.
> This popped up during PREEMPT-RT testing: on RT, spinlocks are sleeping
> locks and must not be taken in a preempt-disabled section.

Ohh, because the spinlock can sleep with PREEMPT-RT right? Don't we have
many more places like that? It is perfectly valid to take a spinlock
while preemption is disabled. E.g. we do take the ptl lock inside
kmap_atomic sections, which disable preemption on 32b systems.

> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
> On 2017-06-23 12:34:23 [+0200], Michal Hocko wrote:
> > The changelog doesn't explain why this change matters. Disabling
> > preemption shortly before taking a spinlock shouldn't make much
> > difference. I suspect you care because of RT, right? In that case spell
> > that out in the changelog and explain why it matters.
> 
> yes, it is bad for RT. I added the RT pieces as explanation.
> 
> > Other than that, the patch looks good to me.
> 
> Thank you. +akpm.
> 
>  mm/swap_slots.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 58f6c78f1dad..51c304477482 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -272,11 +272,11 @@ int free_swap_slot(swp_entry_t entry)
>  {
>  	struct swap_slots_cache *cache;
>  
> -	cache = &get_cpu_var(swp_slots);
> +	cache = raw_cpu_ptr(&swp_slots);
>  	if (use_swap_slot_cache && cache->slots_ret) {
>  		spin_lock_irq(&cache->free_lock);
>  		/* Swap slots cache may be deactivated before acquiring lock */
> -		if (!use_swap_slot_cache) {
> +		if (!use_swap_slot_cache || !cache->slots_ret) {
>  			spin_unlock_irq(&cache->free_lock);
>  			goto direct_free;
>  		}
> @@ -296,7 +296,6 @@ int free_swap_slot(swp_entry_t entry)
>  direct_free:
>  		swapcache_free_entries(&entry, 1);
>  	}
> -	put_cpu_var(swp_slots);
>  
>  	return 0;
>  }
> -- 
> 2.13.1
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm, swap: don't disable preemption while taking the per-CPU cache
  2017-06-23 12:02     ` Michal Hocko
@ 2017-06-23 12:08       ` Sebastian Andrzej Siewior
  2017-06-23 12:13         ` Michal Hocko
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-06-23 12:08 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Tim Chen, tglx, ying.huang, linux-mm, Andrew Morton

On 2017-06-23 14:02:33 [+0200], Michal Hocko wrote:
> On Fri 23-06-17 13:47:55, Sebastian Andrzej Siewior wrote:
> > get_cpu_var() disables preemption and returns the per-CPU version of the
> > variable. Disabling preemption is useful to ensure atomic access to the
> > variable within the critical section.
> > In this case, however, the ->free_lock is acquired right after the
> > per-CPU version of the variable is obtained, so the raw accessor can be
> > used instead. The only caveat is that ->slots_ret has to be re-tested
> > under the lock, because without disabled preemption it can now be set to
> > NULL between the initial check and the lock acquisition.
> > This popped up during PREEMPT-RT testing: on RT, spinlocks are sleeping
> > locks and must not be taken in a preempt-disabled section.
> 
> Ohh, because the spinlock can sleep with PREEMPT-RT right? Don't we have
yup.

> many more places like that? It is perfectly valid to take a spinlock
well, we have more than just this one patch to fix things like that :)
The easy/simple things (like this one, which is valid for both RT and
!RT) I try to push upstream asap; the others remain in the RT tree.

> while preemption is disabled. E.g. we do take the ptl lock inside
> kmap_atomic sections, which disable preemption on 32b systems.
we don't disable preemption in kmap_atomic() on RT. It would be bad :)

> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> Acked-by: Michal Hocko <mhocko@suse.com>
Thanks.

Sebastian


* Re: [PATCH] mm, swap: don't disable preemption while taking the per-CPU cache
  2017-06-23 12:08       ` Sebastian Andrzej Siewior
@ 2017-06-23 12:13         ` Michal Hocko
  0 siblings, 0 replies; 6+ messages in thread
From: Michal Hocko @ 2017-06-23 12:13 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Tim Chen, tglx, ying.huang, linux-mm, Andrew Morton

On Fri 23-06-17 14:08:42, Sebastian Andrzej Siewior wrote:
> On 2017-06-23 14:02:33 [+0200], Michal Hocko wrote:
> > On Fri 23-06-17 13:47:55, Sebastian Andrzej Siewior wrote:
> > > get_cpu_var() disables preemption and returns the per-CPU version of the
> > > variable. Disabling preemption is useful to ensure atomic access to the
> > > variable within the critical section.
> > > In this case, however, the ->free_lock is acquired right after the
> > > per-CPU version of the variable is obtained, so the raw accessor can be
> > > used instead. The only caveat is that ->slots_ret has to be re-tested
> > > under the lock, because without disabled preemption it can now be set to
> > > NULL between the initial check and the lock acquisition.
> > > This popped up during PREEMPT-RT testing: on RT, spinlocks are sleeping
> > > locks and must not be taken in a preempt-disabled section.
> > 
> > Ohh, because the spinlock can sleep with PREEMPT-RT right? Don't we have
> yup.
> 
> > many more places like that? It is perfectly valid to take a spinlock
> well, we have more than just this one patch to fix things like that :)
> The easy/simple things (like this one, which is valid for both RT and
> !RT) I try to push upstream asap; the others remain in the RT tree.

yeah, makes sense to me.

> > while preemption is disabled. E.g. we do take the ptl lock inside
> > kmap_atomic sections, which disable preemption on 32b systems.
> we don't disable preemption in kmap_atomic() on RT. It would be bad :)

Ohh, I didn't know about that.

> > > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > 
> > Acked-by: Michal Hocko <mhocko@suse.com>
> Thanks.
> 
> Sebastian

-- 
Michal Hocko
SUSE Labs


end of thread, other threads:[~2017-06-23 12:13 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-23 10:12 [RFC PATCH] mm, swap: don't disable preemption while taking the per-CPU cache Sebastian Andrzej Siewior
2017-06-23 10:34 ` Michal Hocko
2017-06-23 11:47   ` [PATCH] " Sebastian Andrzej Siewior
2017-06-23 12:02     ` Michal Hocko
2017-06-23 12:08       ` Sebastian Andrzej Siewior
2017-06-23 12:13         ` Michal Hocko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox