linux-mm.kvack.org archive mirror
From: Yu Zhao <yuzhao@google.com>
To: liuye <liuye@kylinos.cn>, Hugh Dickins <hughd@google.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	 linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios
Date: Thu, 5 Dec 2024 12:17:07 -0700	[thread overview]
Message-ID: <CAOUHufZ3AWY0xTowme_sDN+gbvMhi=KWQKah=Q9NprHvMtBHYA@mail.gmail.com> (raw)
In-Reply-To: <cc60dff6-7561-ed60-532a-8862bc4a9914@kylinos.cn>

On Thu, Dec 5, 2024 at 8:19 AM liuye <liuye@kylinos.cn> wrote:
>
>
> Friendly ping.
>
> Thanks.

Hugh has responded on your "v2 RESEND":

https://lore.kernel.org/linux-mm/dae8ea77-2bc1-8ee9-b94b-207e2c8e1b8d@google.com/

> On 2024/9/6 9:16 AM, liuye wrote:
> >
> >
> > On 2024/8/23 10:04 AM, liuye wrote:
> >> I'm sorry to bother you about this, but it looks like the email below,
> >> sent 7 days ago, has not yet received a response. Would you mind taking
> >> a look when you have a bit of free time?
> >>
> >>>>> Fixes: b2e18757f2c9 ("mm, vmscan: begin reclaiming pages on a per-node basis")
> >>>>
> >>>> Merged in 2016.
> >>>>
> >>>> Under what circumstances does it occur?
> >>>
> >>> User processes request a large amount of memory and keep the pages active.
> >>> A module then continuously requests memory from the ZONE_DMA32 area.
> >>> Memory reclaim is triggered because the ZONE_DMA32 watermark is reached.
> >>> However, the pages on the LRU (active_anon) list are mostly from
> >>> the ZONE_NORMAL area.
> >>>
> >>>> Can you please describe how to reproduce this?
> >>>
> >>> Terminal 1: Construct a workload that continuously increases active (anon) pages:
> >>> mkdir /tmp/memory
> >>> mount -t tmpfs -o size=1024000M tmpfs /tmp/memory
> >>> dd if=/dev/zero of=/tmp/memory/block bs=4M
> >>> tail /tmp/memory/block
> >>>
> >>> Terminal 2:
> >>> vmstat -a 1
> >>> The active column keeps increasing:
> >>> procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
> >>>  r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st gu
> >>>  1  0      0 1445623076 45898836 83646008    0    0     0     0 1807 1682  0  0 100  0  0  0
> >>>  1  0      0 1445623076 43450228 86094616    0    0     0     0 1677 1468  0  0 100  0  0  0
> >>>  1  0      0 1445623076 41003480 88541364    0    0     0     0 1985 2022  0  0 100  0  0  0
> >>>  1  0      0 1445623076 38557088 90987756    0    0     0     4 1731 1544  0  0 100  0  0  0
> >>>  1  0      0 1445623076 36109688 93435156    0    0     0     0 1755 1501  0  0 100  0  0  0
> >>>  1  0      0 1445619552 33663256 95881632    0    0     0     0 2015 1678  0  0 100  0  0  0
> >>>  1  0      0 1445619804 31217140 98327792    0    0     0     0 2058 2212  0  0 100  0  0  0
> >>>  1  0      0 1445619804 28769988 100774944    0    0     0     0 1729 1585  0  0 100  0  0  0
> >>>  1  0      0 1445619804 26322348 103222584    0    0     0     0 1774 1575  0  0 100  0  0  0
> >>>  1  0      0 1445619804 23875592 105669340    0    0     0     4 1738 1604  0  0 100  0  0  0
> >>>
> >>> cat /proc/meminfo | head
> >>> Active(anon) increases:
> >>> MemTotal:       1579941036 kB
> >>> MemFree:        1445618500 kB
> >>> MemAvailable:   1453013224 kB
> >>> Buffers:            6516 kB
> >>> Cached:         128653956 kB
> >>> SwapCached:            0 kB
> >>> Active:         118110812 kB
> >>> Inactive:       11436620 kB
> >>> Active(anon):   115345744 kB
> >>> Inactive(anon):   945292 kB
> >>>
> >>> When Active(anon) reaches 115345744 kB, loading a module with insmod trips the ZONE_DMA32 watermark.
> >>>
> >>> perf shows nr_scanned=28835844.
> >>> 28835844 * 4 kB = 115343376 kB, approximately equal to the 115345744 kB above.
> >>>
> >>> perf record -e vmscan:mm_vmscan_lru_isolate -aR
> >>> perf script
> >>> isolate_mode=0 classzone=1 order=1 nr_requested=32 nr_scanned=2 nr_skipped=2 nr_taken=0 lru=active_anon
> >>> isolate_mode=0 classzone=1 order=1 nr_requested=32 nr_scanned=0 nr_skipped=0 nr_taken=0 lru=active_anon
> >>> isolate_mode=0 classzone=1 order=0 nr_requested=32 nr_scanned=28835844 nr_skipped=28835844 nr_taken=0 lru=active_anon
> >>> isolate_mode=0 classzone=1 order=1 nr_requested=32 nr_scanned=28835844 nr_skipped=28835844 nr_taken=0 lru=active_anon
> >>> isolate_mode=0 classzone=1 order=0 nr_requested=32 nr_scanned=29 nr_skipped=29 nr_taken=0 lru=active_anon
> >>> isolate_mode=0 classzone=1 order=0 nr_requested=32 nr_scanned=0 nr_skipped=0 nr_taken=0 lru=active_anon
> >>>
> >>> If Active(anon) is instead grown to 1000 GB before the module load trips the ZONE_DMA32 watermark, a hard lockup occurs.
> >>>
> >>> On my device, nr_scanned = 0x0000000003e3e937 at the time of the hard lockup. Converted to a memory size: 0x0000000003e3e937 * 4 kB = 261072092 kB.
> >>>
> >>> #5 [ffffc90006fb7c28] isolate_lru_folios at ffffffffa597df53
> >>>     ffffc90006fb7c30: 0000000000000020 0000000000000000
> >>>     ffffc90006fb7c40: ffffc90006fb7d40 ffff88812cbd3000
> >>>     ffffc90006fb7c50: ffffc90006fb7d30 0000000106fb7de8
> >>>     ffffc90006fb7c60: ffffea04a2197008 ffffea0006ed4a48
> >>>     ffffc90006fb7c70: 0000000000000000 0000000000000000
> >>>     ffffc90006fb7c80: 0000000000000000 0000000000000000
> >>>     ffffc90006fb7c90: 0000000000000000 0000000000000000
> >>>     ffffc90006fb7ca0: 0000000000000000 0000000003e3e937
> >>>     ffffc90006fb7cb0: 0000000000000000 0000000000000000
> >>>     ffffc90006fb7cc0: 8d7c0b56b7874b00 ffff88812cbd3000
> >>>
> >>>> Why do you think it took eight years to be discovered?
> >>>
> >>> The problem requires the following conditions to occur:
> >>> 1. The device memory should be large enough.
> >>> 2. Pages in the LRU(active_anon) list are mostly from the ZONE_NORMAL area.
> >>> 3. The memory in ZONE_DMA32 needs to reach the watermark.
> >>>
> >>> If the memory is not large enough, or if ZONE_DMA32 memory usage is designed sensibly, this problem is hard to hit.
> >>>
> >>> Note:
> >>> The problem is most likely to occur between ZONE_DMA32 and ZONE_NORMAL, but other suitable zone combinations may also trigger it.
> >>>
> >>>> It looks like that will fix, but perhaps something more fundamental
> >>>> needs to be done - we're doing a tremendous amount of pretty pointless
> >>>> work here.  Answers to my above questions will help us resolve this.
> >>>>
> >>>> Thanks.
> >>>
> >>> Please refer to the above explanation for details.
> >>>
> >>> Thanks.
> >>
> >> Thanks.
> >>
> > Friendly ping.
> >
>


Thread overview: 12+ messages
     [not found] <20240815025226.8973-1-liuye@kylinos.cn>
2024-08-23  2:04 ` Re: " liuye
2024-09-03  2:34   ` liuye
2024-09-03  3:03   ` liuye
2024-09-06  1:16   ` liuye
2024-09-11  2:56     ` liuye
2024-12-05 19:17       ` Yu Zhao [this message]
2024-08-14  9:18 liuye
2024-08-14 21:27 ` Andrew Morton
2024-09-25  0:22 ` Andrew Morton
2024-09-25  8:37   ` liuye
2024-09-25  9:29     ` Andrew Morton
2024-09-25  9:53       ` liuye
