* [PATCH v3] mmap_lock: Change trace and locking order
@ 2021-09-07 20:15 Liam Howlett
From: Liam Howlett @ 2021-09-07 20:15 UTC (permalink / raw)
To: linux-mm, linux-kernel, Andrew Morton
Cc: Steven Rostedt, Michel Lespinasse, Vlastimil Babka, Matthew Wilcox
Messages from the mmap_lock tracepoints can be printed out of order.
This results in confusing trace logs such as:
task cpu atomic counter: message
---------------------------------------------
task-749 [006] .... 14437980: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
task-750 [007] .... 14437981: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
task-749 [006] .... 14437983: mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true
When the actual sequence of events is as follows:
task-749 [006] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
task-749 [006] mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true
task-750 [007] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
The trace log appears out of order because the release tracepoint fires
after the lock has already been dropped. Correct ordering can be
guaranteed by emitting the acquire-success and release tracepoints while
the lock is still held.
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
include/linux/mmap_lock.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 0540f0156f58..b179f1e3541a 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -101,14 +101,14 @@ static inline bool mmap_write_trylock(struct mm_struct *mm)
static inline void mmap_write_unlock(struct mm_struct *mm)
{
- up_write(&mm->mmap_lock);
__mmap_lock_trace_released(mm, true);
+ up_write(&mm->mmap_lock);
}
static inline void mmap_write_downgrade(struct mm_struct *mm)
{
- downgrade_write(&mm->mmap_lock);
__mmap_lock_trace_acquire_returned(mm, false, true);
+ downgrade_write(&mm->mmap_lock);
}
static inline void mmap_read_lock(struct mm_struct *mm)
@@ -140,8 +140,8 @@ static inline bool mmap_read_trylock(struct mm_struct *mm)
static inline void mmap_read_unlock(struct mm_struct *mm)
{
- up_read(&mm->mmap_lock);
__mmap_lock_trace_released(mm, false);
+ up_read(&mm->mmap_lock);
}
static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
@@ -155,8 +155,8 @@ static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
{
- up_read_non_owner(&mm->mmap_lock);
__mmap_lock_trace_released(mm, false);
+ up_read_non_owner(&mm->mmap_lock);
}
static inline void mmap_assert_locked(struct mm_struct *mm)
--
2.30.2
* Re: [PATCH v3] mmap_lock: Change trace and locking order
From: Steven Rostedt @ 2021-09-07 20:25 UTC (permalink / raw)
To: Liam Howlett
Cc: linux-mm, linux-kernel, Andrew Morton, Michel Lespinasse,
Vlastimil Babka, Matthew Wilcox
On Tue, 7 Sep 2021 20:15:19 +0000
Liam Howlett <liam.howlett@oracle.com> wrote:
> Messages from the mmap_lock tracepoints can be printed out of order.
> This results in confusing trace logs such as:
>
> task cpu atomic counter: message
> ---------------------------------------------
> task-749 [006] .... 14437980: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
> task-750 [007] .... 14437981: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
> task-749 [006] .... 14437983: mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true
>
> When the actual sequence of events is as follows:
>
> task-749 [006] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
> task-749 [006] mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true
>
> task-750 [007] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
>
> The trace log appears out of order because the release tracepoint fires
> after the lock has already been dropped. Correct ordering can be
> guaranteed by emitting the acquire-success and release tracepoints while
> the lock is still held.
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
FYI,
If you received Acks for a patch and you resend it just to update the
change log, you can include those acks in that email, since the acks were
already given for the code change. If you change the code, you may need
to ask for the reviews/acks again.
But since this time you only changed the change log, and the code is
still the same, you should have included:
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
-- Steve
> ---
> include/linux/mmap_lock.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index 0540f0156f58..b179f1e3541a 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -101,14 +101,14 @@ static inline bool mmap_write_trylock(struct mm_struct *mm)
>
> static inline void mmap_write_unlock(struct mm_struct *mm)
> {
> - up_write(&mm->mmap_lock);
> __mmap_lock_trace_released(mm, true);
> + up_write(&mm->mmap_lock);
> }
>
> static inline void mmap_write_downgrade(struct mm_struct *mm)
> {
> - downgrade_write(&mm->mmap_lock);
> __mmap_lock_trace_acquire_returned(mm, false, true);
> + downgrade_write(&mm->mmap_lock);
> }
>
> static inline void mmap_read_lock(struct mm_struct *mm)
> @@ -140,8 +140,8 @@ static inline bool mmap_read_trylock(struct mm_struct *mm)
>
> static inline void mmap_read_unlock(struct mm_struct *mm)
> {
> - up_read(&mm->mmap_lock);
> __mmap_lock_trace_released(mm, false);
> + up_read(&mm->mmap_lock);
> }
>
> static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
> @@ -155,8 +155,8 @@ static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
>
> static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
> {
> - up_read_non_owner(&mm->mmap_lock);
> __mmap_lock_trace_released(mm, false);
> + up_read_non_owner(&mm->mmap_lock);
> }
>
> static inline void mmap_assert_locked(struct mm_struct *mm)