* [PATCH] mm/vmalloc: Fix unlock order in s_stop()
@ 2020-12-13 18:08 Waiman Long
2020-12-13 18:39 ` Uladzislau Rezki
2020-12-14 9:39 ` David Hildenbrand
0 siblings, 2 replies; 8+ messages in thread
From: Waiman Long @ 2020-12-13 18:08 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki (Sony)
Cc: linux-mm, linux-kernel, Waiman Long
When multiple locks are acquired, they should be released in reverse
order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
case.
s_start: mutex_lock(&vmap_purge_lock); spin_lock(&vmap_area_lock);
s_stop : mutex_unlock(&vmap_purge_lock); spin_unlock(&vmap_area_lock);
This unlock sequence, though allowed, is not optimal. If a waiter is
present, mutex_unlock() will need to go through the slowpath of waking
up the waiter with preemption disabled. Fix that by releasing the
spinlock first before the mutex.
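For illustration, with generic lock names rather than the actual
vmalloc locks, the resulting acquire/release pattern is:

	mutex_lock(&outer_mutex);	/* sleeping lock, acquired first */
	spin_lock(&inner_lock);		/* disables preemption */
	...
	spin_unlock(&inner_lock);	/* re-enables preemption first... */
	mutex_unlock(&outer_mutex);	/* ...so any waiter wake-up runs
					   with preemption enabled */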
Fixes: e36176be1c39 ("mm/vmalloc: rework vmap_area_lock")
Signed-off-by: Waiman Long <longman@redhat.com>
---
mm/vmalloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6ae491a8b210..75913f685c71 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3448,11 +3448,11 @@ static void *s_next(struct seq_file *m, void *p, loff_t *pos)
}
static void s_stop(struct seq_file *m, void *p)
- __releases(&vmap_purge_lock)
__releases(&vmap_area_lock)
+ __releases(&vmap_purge_lock)
{
- mutex_unlock(&vmap_purge_lock);
spin_unlock(&vmap_area_lock);
+ mutex_unlock(&vmap_purge_lock);
}
static void show_numa_info(struct seq_file *m, struct vm_struct *v)
--
2.18.1
^ permalink raw reply [flat|nested] 8+ messages in thread

* Re: [PATCH] mm/vmalloc: Fix unlock order in s_stop()
2020-12-13 18:08 [PATCH] mm/vmalloc: Fix unlock order in s_stop() Waiman Long
@ 2020-12-13 18:39 ` Uladzislau Rezki
2020-12-13 19:42 ` Waiman Long
2020-12-13 21:51 ` Matthew Wilcox
2020-12-14 9:39 ` David Hildenbrand
1 sibling, 2 replies; 8+ messages in thread
From: Uladzislau Rezki @ 2020-12-13 18:39 UTC (permalink / raw)
To: Waiman Long
Cc: Andrew Morton, Uladzislau Rezki (Sony), linux-mm, linux-kernel
On Sun, Dec 13, 2020 at 01:08:43PM -0500, Waiman Long wrote:
> When multiple locks are acquired, they should be released in reverse
> order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
> case.
>
> s_start: mutex_lock(&vmap_purge_lock); spin_lock(&vmap_area_lock);
> s_stop : mutex_unlock(&vmap_purge_lock); spin_unlock(&vmap_area_lock);
>
> This unlock sequence, though allowed, is not optimal. If a waiter is
> present, mutex_unlock() will need to go through the slowpath of waking
> up the waiter with preemption disabled. Fix that by releasing the
> spinlock first before the mutex.
>
> Fixes: e36176be1c39 ("mm/vmalloc: rework vmap_area_lock")
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
> mm/vmalloc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6ae491a8b210..75913f685c71 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3448,11 +3448,11 @@ static void *s_next(struct seq_file *m, void *p, loff_t *pos)
> }
>
> static void s_stop(struct seq_file *m, void *p)
> - __releases(&vmap_purge_lock)
> __releases(&vmap_area_lock)
> + __releases(&vmap_purge_lock)
> {
> - mutex_unlock(&vmap_purge_lock);
> spin_unlock(&vmap_area_lock);
> + mutex_unlock(&vmap_purge_lock);
> }
>
> static void show_numa_info(struct seq_file *m, struct vm_struct *v)
BTW, if navigation over both lists is an issue, for example when there
are multiple heavy readers of /proc/vmallocinfo, I think it makes sense
to implement RCU-safe list iteration and get rid of both locks.
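A rough sketch of what I mean, assuming the vmap_area_list writers were
converted to list_add_rcu()/list_del_rcu() with a grace period before
freeing (which is not the case today):

	struct vmap_area *va;

	rcu_read_lock();
	list_for_each_entry_rcu(va, &vmap_area_list, list) {
		/* dump "va" without taking vmap_area_lock */
	}
	rcu_read_unlock();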
As for the patch: Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Thanks!
--
Vlad Rezki
^ permalink raw reply [flat|nested] 8+ messages in thread

* Re: [PATCH] mm/vmalloc: Fix unlock order in s_stop()
2020-12-13 18:39 ` Uladzislau Rezki
@ 2020-12-13 19:42 ` Waiman Long
2020-12-13 21:51 ` Matthew Wilcox
1 sibling, 0 replies; 8+ messages in thread
From: Waiman Long @ 2020-12-13 19:42 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Andrew Morton, linux-mm, linux-kernel
On 12/13/20 1:39 PM, Uladzislau Rezki wrote:
> On Sun, Dec 13, 2020 at 01:08:43PM -0500, Waiman Long wrote:
>> When multiple locks are acquired, they should be released in reverse
>> order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
>> case.
>>
>> s_start: mutex_lock(&vmap_purge_lock); spin_lock(&vmap_area_lock);
>> s_stop : mutex_unlock(&vmap_purge_lock); spin_unlock(&vmap_area_lock);
>>
>> This unlock sequence, though allowed, is not optimal. If a waiter is
>> present, mutex_unlock() will need to go through the slowpath of waking
>> up the waiter with preemption disabled. Fix that by releasing the
>> spinlock first before the mutex.
>>
>> Fixes: e36176be1c39 ("mm/vmalloc: rework vmap_area_lock")
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>> mm/vmalloc.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 6ae491a8b210..75913f685c71 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -3448,11 +3448,11 @@ static void *s_next(struct seq_file *m, void *p, loff_t *pos)
>> }
>>
>> static void s_stop(struct seq_file *m, void *p)
>> - __releases(&vmap_purge_lock)
>> __releases(&vmap_area_lock)
>> + __releases(&vmap_purge_lock)
>> {
>> - mutex_unlock(&vmap_purge_lock);
>> spin_unlock(&vmap_area_lock);
>> + mutex_unlock(&vmap_purge_lock);
>> }
>>
>> static void show_numa_info(struct seq_file *m, struct vm_struct *v)
> BTW, if navigation over both lists is an issue, for example when there
> are multiple heavy readers of /proc/vmallocinfo, I think it makes sense
> to implement RCU-safe list iteration and get rid of both locks.
Making it lockless is certainly better, but doing lockless the right way
is tricky. I will probably keep it as it is unless there is a significant
advantage to doing so.
Cheers,
Longman
>
> As for the patch: Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
>
> Thanks!
>
> --
> Vlad Rezki
>
^ permalink raw reply [flat|nested] 8+ messages in thread

* Re: [PATCH] mm/vmalloc: Fix unlock order in s_stop()
2020-12-13 18:39 ` Uladzislau Rezki
2020-12-13 19:42 ` Waiman Long
@ 2020-12-13 21:51 ` Matthew Wilcox
[not found] ` <20201214151128.GA2094@pc638.lan>
1 sibling, 1 reply; 8+ messages in thread
From: Matthew Wilcox @ 2020-12-13 21:51 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Waiman Long, Andrew Morton, linux-mm, linux-kernel
On Sun, Dec 13, 2020 at 07:39:36PM +0100, Uladzislau Rezki wrote:
> On Sun, Dec 13, 2020 at 01:08:43PM -0500, Waiman Long wrote:
> > When multiple locks are acquired, they should be released in reverse
> > order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
> > case.
> >
> > s_start: mutex_lock(&vmap_purge_lock); spin_lock(&vmap_area_lock);
> > s_stop : mutex_unlock(&vmap_purge_lock); spin_unlock(&vmap_area_lock);
> >
> > This unlock sequence, though allowed, is not optimal. If a waiter is
> > present, mutex_unlock() will need to go through the slowpath of waking
> > up the waiter with preemption disabled. Fix that by releasing the
> > spinlock first before the mutex.
> >
> > Fixes: e36176be1c39 ("mm/vmalloc: rework vmap_area_lock")
> > Signed-off-by: Waiman Long <longman@redhat.com>
> > ---
> > mm/vmalloc.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 6ae491a8b210..75913f685c71 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -3448,11 +3448,11 @@ static void *s_next(struct seq_file *m, void *p, loff_t *pos)
> > }
> >
> > static void s_stop(struct seq_file *m, void *p)
> > - __releases(&vmap_purge_lock)
> > __releases(&vmap_area_lock)
> > + __releases(&vmap_purge_lock)
> > {
> > - mutex_unlock(&vmap_purge_lock);
> > spin_unlock(&vmap_area_lock);
> > + mutex_unlock(&vmap_purge_lock);
> > }
> >
> > static void show_numa_info(struct seq_file *m, struct vm_struct *v)
> BTW, if navigation over both lists is an issue, for example when there
> are multiple heavy readers of /proc/vmallocinfo, I think it makes sense
> to implement RCU-safe list iteration and get rid of both locks.
If we need to iterate the list efficiently, I'd suggest getting rid of
the list and using an xarray instead. Maybe a maple tree, once that code
is better exercised.
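A rough sketch, assuming the areas lived in a hypothetical xarray
(vmap_area_xa) indexed by their start address, which they do not today:

	unsigned long index = 0;
	struct vmap_area *va;

	rcu_read_lock();
	xa_for_each(&vmap_area_xa, index, va) {
		/* report "va"; the iteration is safe under the RCU
		   read lock and tolerates concurrent store/erase */
	}
	rcu_read_unlock();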
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH] mm/vmalloc: Fix unlock order in s_stop()
2020-12-13 18:08 [PATCH] mm/vmalloc: Fix unlock order in s_stop() Waiman Long
2020-12-13 18:39 ` Uladzislau Rezki
@ 2020-12-14 9:39 ` David Hildenbrand
2020-12-14 15:05 ` Waiman Long
1 sibling, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2020-12-14 9:39 UTC (permalink / raw)
To: Waiman Long, Andrew Morton, Uladzislau Rezki (Sony)
Cc: linux-mm, linux-kernel
On 13.12.20 19:08, Waiman Long wrote:
> When multiple locks are acquired, they should be released in reverse
> order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
> case.
>
> s_start: mutex_lock(&vmap_purge_lock); spin_lock(&vmap_area_lock);
> s_stop : mutex_unlock(&vmap_purge_lock); spin_unlock(&vmap_area_lock);
>
> This unlock sequence, though allowed, is not optimal. If a waiter is
> present, mutex_unlock() will need to go through the slowpath of waking
> up the waiter with preemption disabled. Fix that by releasing the
> spinlock first before the mutex.
>
> Fixes: e36176be1c39 ("mm/vmalloc: rework vmap_area_lock")
I'm not sure if this qualifies as "Fixes". As you correctly state, it
"is not optimal". But yeah, releasing a spinlock after releasing a mutex
looks weird already.
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
> mm/vmalloc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6ae491a8b210..75913f685c71 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3448,11 +3448,11 @@ static void *s_next(struct seq_file *m, void *p, loff_t *pos)
> }
>
> static void s_stop(struct seq_file *m, void *p)
> - __releases(&vmap_purge_lock)
> __releases(&vmap_area_lock)
> + __releases(&vmap_purge_lock)
> {
> - mutex_unlock(&vmap_purge_lock);
> spin_unlock(&vmap_area_lock);
> + mutex_unlock(&vmap_purge_lock);
> }
>
> static void show_numa_info(struct seq_file *m, struct vm_struct *v)
>
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
^ permalink raw reply [flat|nested] 8+ messages in thread

* Re: [PATCH] mm/vmalloc: Fix unlock order in s_stop()
2020-12-14 9:39 ` David Hildenbrand
@ 2020-12-14 15:05 ` Waiman Long
0 siblings, 0 replies; 8+ messages in thread
From: Waiman Long @ 2020-12-14 15:05 UTC (permalink / raw)
To: David Hildenbrand, Andrew Morton, Uladzislau Rezki (Sony)
Cc: linux-mm, linux-kernel
On 12/14/20 4:39 AM, David Hildenbrand wrote:
> On 13.12.20 19:08, Waiman Long wrote:
>> When multiple locks are acquired, they should be released in reverse
>> order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
>> case.
>>
>> s_start: mutex_lock(&vmap_purge_lock); spin_lock(&vmap_area_lock);
>> s_stop : mutex_unlock(&vmap_purge_lock); spin_unlock(&vmap_area_lock);
>>
>> This unlock sequence, though allowed, is not optimal. If a waiter is
>> present, mutex_unlock() will need to go through the slowpath of waking
>> up the waiter with preemption disabled. Fix that by releasing the
>> spinlock first before the mutex.
>>
>> Fixes: e36176be1c39 ("mm/vmalloc: rework vmap_area_lock")
> I'm not sure if this classifies as "Fixes". As you correctly state "is
> not optimal". But yeah, releasing a spinlock after releasing a mutex
> looks weird already.
>
Yes, it may not technically be a real bug fix. However, the order just
doesn't look right. That is why I sent out a patch to address it.
Cheers,
Longman
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2020-12-14 17:56 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-13 18:08 [PATCH] mm/vmalloc: Fix unlock order in s_stop() Waiman Long
2020-12-13 18:39 ` Uladzislau Rezki
2020-12-13 19:42 ` Waiman Long
2020-12-13 21:51 ` Matthew Wilcox
[not found] ` <20201214151128.GA2094@pc638.lan>
2020-12-14 15:37 ` Matthew Wilcox
2020-12-14 17:56 ` Uladzislau Rezki
2020-12-14 9:39 ` David Hildenbrand
2020-12-14 15:05 ` Waiman Long