linux-mm.kvack.org archive mirror
From: Bharata B Rao <bharata@amd.com>
To: Mateusz Guzik <mjguzik@gmail.com>, Vlastimil Babka <vbabka@suse.cz>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, nikunj@amd.com,
	"Upadhyay, Neeraj" <Neeraj.Upadhyay@amd.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	willy@infradead.org, yuzhao@google.com, kinseyho@google.com,
	Mel Gorman <mgorman@suse.de>
Subject: Re: Hard and soft lockups with FIO and LTP runs on a large system
Date: Thu, 18 Jul 2024 14:30:02 +0530	[thread overview]
Message-ID: <44fb1971-f3d3-4af8-9bac-aceb2fedd2a6@amd.com> (raw)
In-Reply-To: <CAGudoHESB-+kHPJO+4MHnUDPJXGP87=yJ2QrW3q8pkO5z7OLRw@mail.gmail.com>


On 17-Jul-24 4:59 PM, Mateusz Guzik wrote:
> On Wed, Jul 17, 2024 at 11:42 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 7/3/24 5:11 PM, Bharata B Rao wrote:
>>> The general observation is that the problem usually surfaces when the
>>> system free memory goes very low and page cache/buffer consumption hits
>>> the ceiling. Most of the times the two contended locks are lruvec and
>>> inode->i_lock spinlocks.
>>>
> [snip mm stuff]
> 
> There are numerous avoidable i_lock acquires (including some only
> showing up under load), but I don't know if they play any role in this
> particular test.
> 
> Collecting all traces would definitely help, locked up or not, for example:
> bpftrace -e 'kprobe:queued_spin_lock_slowpath { @[kstack()] = count();
> }' -o traces

Here are the top 3 traces; the full list, collected over a 30s duration
while the workload was running, is attached.

@[
     native_queued_spin_lock_slowpath+1
     __remove_mapping+98
     remove_mapping+22
     mapping_evict_folio+118
     mapping_try_invalidate+214
     invalidate_mapping_pages+16
     invalidate_bdev+60
     blkdev_common_ioctl+1527
     blkdev_ioctl+265
     __x64_sys_ioctl+149
     x64_sys_call+4629
     do_syscall_64+126
     entry_SYSCALL_64_after_hwframe+118
]: 1787212
@[
     native_queued_spin_lock_slowpath+1
     folio_wait_bit_common+205
     filemap_get_pages+1543
     filemap_read+231
     blkdev_read_iter+111
     aio_read+242
     io_submit_one+546
     __x64_sys_io_submit+132
     x64_sys_call+6617
     do_syscall_64+126
     entry_SYSCALL_64_after_hwframe+118
]: 7922497
@[
     native_queued_spin_lock_slowpath+1
     clear_shadow_entry+92
     mapping_try_invalidate+337
     invalidate_mapping_pages+16
     invalidate_bdev+60
     blkdev_common_ioctl+1527
     blkdev_ioctl+265
     __x64_sys_ioctl+149
     x64_sys_call+4629
     do_syscall_64+126
     entry_SYSCALL_64_after_hwframe+118
]: 10357614
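
The top-N entries above were picked out of the attached full list by hand; for reference, a short script along these lines can sort bpftrace's aggregated map output by count (a sketch, not part of this thread; it assumes bpftrace's default `@[stack]: count` map output format):

```python
import re

def top_stacks(text, n=3):
    """Parse bpftrace map output of the form '@[ <frames> ]: <count>'
    and return the n most frequent stacks as (count, frames) pairs."""
    entries = []
    # Each map entry is a '@[' ... ']' stack followed by ': <count>'.
    for m in re.finditer(r"@\[(.*?)\]:\s*(\d+)", text, re.DOTALL):
        frames = [line.strip() for line in m.group(1).splitlines()
                  if line.strip()]
        entries.append((int(m.group(2)), frames))
    # Sort hottest-first by contention count.
    entries.sort(key=lambda e: e[0], reverse=True)
    return entries[:n]

# Abbreviated sample in the same format as the traces above.
sample = """@[
    native_queued_spin_lock_slowpath+1
    clear_shadow_entry+92
]: 10357614
@[
    native_queued_spin_lock_slowpath+1
    folio_wait_bit_common+205
]: 7922497
"""

for count, frames in top_stacks(sample, n=2):
    print(count, frames[1])
```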

> 
> As for clear_shadow_entry mentioned in the opening mail, the content is:
>          spin_lock(&mapping->host->i_lock);
>          xa_lock_irq(&mapping->i_pages);
>          __clear_shadow_entry(mapping, index, entry);
>          xa_unlock_irq(&mapping->i_pages);
>          if (mapping_shrinkable(mapping))
>                  inode_add_lru(mapping->host);
>          spin_unlock(&mapping->host->i_lock);
> 
> so for all I know it's all about the xarray thing, not the i_lock per se.

The soft lockup signature shows _raw_spin_lock and not _raw_spin_lock_irq
(which is what xa_lock_irq on i_pages would produce), hence I concluded it
is the i_lock. Re-pasting the clear_shadow_entry soft lockup here again:

kernel: watchdog: BUG: soft lockup - CPU#29 stuck for 11s! [fio:2701649]
kernel: CPU: 29 PID: 2701649 Comm: fio Tainted: G             L 6.10.0-rc3-mglru-irqstrc #24
kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x2b4/0x300
kernel: Call Trace:
kernel:  <IRQ>
kernel:  ? show_regs+0x69/0x80
kernel:  ? watchdog_timer_fn+0x223/0x2b0
kernel:  ? __pfx_watchdog_timer_fn+0x10/0x10
<SNIP>
kernel:  </IRQ>
kernel:  <TASK>
kernel:  ? asm_sysvec_apic_timer_interrupt+0x1b/0x20
kernel:  ? native_queued_spin_lock_slowpath+0x2b4/0x300
kernel:  _raw_spin_lock+0x38/0x50
kernel:  clear_shadow_entry+0x3d/0x100
kernel:  ? __pfx_workingset_update_node+0x10/0x10
kernel:  mapping_try_invalidate+0x117/0x1d0
kernel:  invalidate_mapping_pages+0x10/0x20
kernel:  invalidate_bdev+0x3c/0x50
kernel:  blkdev_common_ioctl+0x5f7/0xa90
kernel:  blkdev_ioctl+0x109/0x270
kernel:  x64_sys_call+0x1215/0x20d0
kernel:  do_syscall_64+0x7e/0x130

Regards,
Bharata.

[-- Attachment #2: traces.gz --]
[-- Type: application/x-gzip, Size: 83505 bytes --]

