linux-mm.kvack.org archive mirror
From: Liam Howlett <liam.howlett@oracle.com>
To: "linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: Michel Lespinasse <walken.cr@gmail.com>
Subject: Strange mmap_lock tracepoint
Date: Tue, 31 Aug 2021 20:40:05 +0000	[thread overview]
Message-ID: <20210831203959.lwofr24z63wjsgkc@revolver> (raw)


Hello,

I've been trying to trace the mmap_lock calls using tracepoints and
captured some behaviour which I cannot explain.  This was with the maple
tree v2 patches, running ebizzy with multiple threads in an x86_64 KVM
guest with 8 virtual CPUs.

AFAICT, nothing in this path touches the mmap_lock directly besides a
prefetchw(&mm->mmap_lock);


          ebizzy-803     [000] ....  5376.655366: lock_release: 00000000de8cf25e lock
          ebizzy-803     [000] ....  5376.655366: lock_release: 00000000b5e38448 ptlock_ptr(page)
          ebizzy-803     [000] ....  5376.655367: lock_release: 000000006b581afd &mm->mmap_lock
          ebizzy-803     [000] ....  5376.655367: mmap_lock_released: mm=000000001de7b122 memcg_path= write=false
 
          ebizzy-803     [000] ....  5376.655369: mmap_lock_start_locking: mm=000000001de7b122 memcg_path= write=false
 
          ebizzy-803     [000] ....  5376.655369: mmap_lock_acquire_returned: mm=000000001de7b122 memcg_path= write=false success=false
 
          ebizzy-803     [000] ....  5376.655369: mmap_lock_start_locking: mm=000000001de7b122 memcg_path= write=false
 
          ebizzy-803     [000] ....  5376.655369: lock_acquire: 000000006b581afd read &mm->mmap_lock
          ebizzy-803     [000] d...  5376.655369: lock_acquire: 00000000984d533e &sem->wait_lock
          ebizzy-803     [000] d...  5376.655370: lock_release: 00000000984d533e &sem->wait_lock
          ebizzy-803     [000] d...  5376.655370: lock_acquire: 00000000cbb1a456 &rq->__lock
          ebizzy-803     [000] d...  5376.655371: lock_acquire: 00000000b3d53a93 read rcu_read_lock
          ebizzy-803     [000] d...  5376.655371: lock_release: 00000000b3d53a93 rcu_read_lock
          ebizzy-803     [000] d...  5376.655371: lock_acquire: 00000000b3d53a93 read rcu_read_lock
          ebizzy-803     [000] d...  5376.655371: lock_release: 00000000b3d53a93 rcu_read_lock
          ebizzy-803     [000] d...  5376.655372: lock_acquire: 00000000b3d53a93 read rcu_read_lock
          ebizzy-803     [000] d...  5376.655372: lock_release: 00000000b3d53a93 rcu_read_lock
          ebizzy-803     [000] d...  5376.655372: lock_release: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] d...  5376.655373: lock_acquire: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] d...  5376.655373: lock_release: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] d...  5376.655373: lock_acquire: 00000000bb7b405b read tk_core.seq.seqcount
          <idle>-0       [000] d...  5376.655374: lock_release: 00000000bb7b405b tk_core.seq.seqcount
          <idle>-0       [000] d...  5376.655374: lock_acquire: 000000007c3ac380 read jiffies_seq.seqcount
          <idle>-0       [000] d...  5376.655374: lock_release: 000000007c3ac380 jiffies_seq.seqcount
          <idle>-0       [000] d.h.  5376.655460: lock_acquire: 00000000bb7b405b read tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655460: lock_release: 00000000bb7b405b tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655460: lock_acquire: 00000000f7c85b8f hrtimer_bases.lock
          <idle>-0       [000] d.h.  5376.655461: lock_acquire: 00000000bb7b405b read tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655461: lock_release: 00000000bb7b405b tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655461: lock_release: 00000000f7c85b8f hrtimer_bases.lock
          <idle>-0       [000] d.h.  5376.655462: lock_acquire: 00000000bb7b405b read tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655462: lock_release: 00000000bb7b405b tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655462: lock_acquire: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] d.h.  5376.655463: lock_release: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] d.h.  5376.655463: lock_acquire: 00000000f7c85b8f hrtimer_bases.lock
          <idle>-0       [000] d.h.  5376.655464: lock_release: 00000000f7c85b8f hrtimer_bases.lock
          <idle>-0       [000] d.h.  5376.655464: lock_acquire: 00000000bb7b405b read tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655464: lock_release: 00000000bb7b405b tk_core.seq.seqcount
          <idle>-0       [000] d...  5376.655466: lock_acquire: 00000000bb7b405b read tk_core.seq.seqcount
          <idle>-0       [000] d...  5376.655466: lock_release: 00000000bb7b405b tk_core.seq.seqcount
          <idle>-0       [000] d...  5376.655467: lock_acquire: 000000007c3ac380 read jiffies_seq.seqcount
          <idle>-0       [000] d...  5376.655467: lock_release: 000000007c3ac380 jiffies_seq.seqcount
          <idle>-0       [000] d.h.  5376.655488: lock_acquire: 00000000bb7b405b read tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655489: lock_release: 00000000bb7b405b tk_core.seq.seqcount
          <idle>-0       [000] d.h.  5376.655489: lock_acquire: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] dNh.  5376.655490: lock_release: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] dN..  5376.655490: lock_acquire: 00000000cbb1a456 &rq->__lock
          <idle>-0       [000] d...  5376.655491: lock_release: 00000000cbb1a456 &rq->__lock
          ebizzy-803     [000] d...  5376.655491: lock_acquire: 00000000cbb1a456 &rq->__lock
          ebizzy-803     [000] d...  5376.655492: lock_release: 00000000cbb1a456 &rq->__lock
          ebizzy-803     [000] ....  5376.655492: mmap_lock_acquire_returned: mm=000000001de7b122 memcg_path= write=false success=true

The following tracepoints were enabled:
  events/mmap_lock/enable
  events/lock/enable

My reading of the above trace is that the ebizzy thread dropped the
lock, immediately attempted to reacquire it, detected contention, added
itself to the wait list, and went to sleep, only to wake up and take the
lock?

Doesn't that mean the mmap_lock is _not_ contended?  The lack of
contention makes sense, since there is no tracepoint showing any attempt
to acquire the lock between this thread dropping it and the same mm
waking up to take it.

What is even stranger is if I change the value of _Q_PENDING_LOOPS from
(1 << 9) to (1 << 10), the benchmark performs better.

From the above trace and the effect of the _Q_PENDING_LOOPS change, it
looks like the lock somehow remains in a state that causes acquisition
attempts to fail even when it is not held or contended?


Thanks,
Liam

Thread overview: 4+ messages
2021-08-31 20:40 Liam Howlett [this message]
2021-09-01 10:52 ` Vlastimil Babka
2021-09-01 17:05   ` Liam Howlett
2021-09-03  1:57     ` Liam Howlett
