linux-mm.kvack.org archive mirror
* Re: Re: Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios
       [not found] <20240815025226.8973-1-liuye@kylinos.cn>
@ 2024-08-23  2:04 ` liuye
  2024-09-03  2:34   ` liuye
                     ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: liuye @ 2024-08-23  2:04 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, liuye

I'm sorry to bother you, but it looks like the email below, sent 7 days ago,
has not received a response. Would you mind taking a look at it
when you have a bit of free time?

> > > Fixes: b2e18757f2c9 ("mm, vmscan: begin reclaiming pages on a per-node basis")
> > 
> > Merged in 2016.
> > 
> > Under what circumstances does it occur?  
> 
> User processes request a large amount of memory and keep the pages active.
> A kernel module then continuously requests memory from the ZONE_DMA32 area,
> and memory reclaim is triggered once the ZONE_DMA32 watermark is reached.
> However, the pages on the LRU active_anon list are mostly from
> the ZONE_NORMAL area.
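> 
> For illustration, here is a simplified sketch of the scan loop in
> isolate_lru_folios() (mm/vmscan.c); it is paraphrased, not the exact
> kernel source. Because "scan" only counts folios from eligible zones,
> folios above sc->reclaim_idx are set aside without bounding the loop,
> so it can walk the entire LRU list with the lru lock held and IRQs
> disabled:
> 
>     /* Simplified; isolation details and bookkeeping omitted. */
>     while (scan < nr_to_scan && !list_empty(src)) {
>             struct folio *folio = lru_to_folio(src);
>             unsigned long nr_pages = folio_nr_pages(folio);
> 
>             total_scan += nr_pages;
> 
>             /*
>              * Folio belongs to a zone above the reclaim target:
>              * set it aside and keep going. "scan" is not
>              * incremented here, so nothing bounds this path.
>              */
>             if (folio_zonenum(folio) > sc->reclaim_idx) {
>                     nr_skipped[folio_zonenum(folio)] += nr_pages;
>                     list_move(&folio->lru, &folios_skipped);
>                     continue;
>             }
> 
>             scan += nr_pages;
>             /* ... try to isolate the folio onto "dst" ... */
>     }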
> 
> > Can you please describe how to reproduce this?  
> 
> Terminal 1: continuously increase the number of active(anon) pages.
> mkdir /tmp/memory
> mount -t tmpfs -o size=1024000M tmpfs /tmp/memory
> dd if=/dev/zero of=/tmp/memory/block bs=4M
> tail /tmp/memory/block
> 
> Terminal 2:
> vmstat -a 1
> The active column keeps increasing:
> procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
>  r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st gu
>  1  0      0 1445623076 45898836 83646008    0    0     0     0 1807 1682  0  0 100  0  0  0
>  1  0      0 1445623076 43450228 86094616    0    0     0     0 1677 1468  0  0 100  0  0  0
>  1  0      0 1445623076 41003480 88541364    0    0     0     0 1985 2022  0  0 100  0  0  0
>  1  0      0 1445623076 38557088 90987756    0    0     0     4 1731 1544  0  0 100  0  0  0
>  1  0      0 1445623076 36109688 93435156    0    0     0     0 1755 1501  0  0 100  0  0  0
>  1  0      0 1445619552 33663256 95881632    0    0     0     0 2015 1678  0  0 100  0  0  0
>  1  0      0 1445619804 31217140 98327792    0    0     0     0 2058 2212  0  0 100  0  0  0
>  1  0      0 1445619804 28769988 100774944    0    0     0     0 1729 1585  0  0 100  0  0  0
>  1  0      0 1445619804 26322348 103222584    0    0     0     0 1774 1575  0  0 100  0  0  0
>  1  0      0 1445619804 23875592 105669340    0    0     0     4 1738 1604  0  0 100  0  0  0
> 
> cat /proc/meminfo | head
> Active(anon) keeps increasing:
> MemTotal:       1579941036 kB
> MemFree:        1445618500 kB
> MemAvailable:   1453013224 kB
> Buffers:            6516 kB
> Cached:         128653956 kB
> SwapCached:            0 kB
> Active:         118110812 kB
> Inactive:       11436620 kB
> Active(anon):   115345744 kB   
> Inactive(anon):   945292 kB
> 
> When Active(anon) reaches 115345744 kB, loading a module with insmod triggers the ZONE_DMA32 watermark.
> 
> perf shows nr_scanned=28835844.
> 28835844 * 4 kB = 115343376 kB, approximately equal to the 115345744 kB of Active(anon).
> 
> perf record -e vmscan:mm_vmscan_lru_isolate -aR
> perf script
> isolate_mode=0 classzone=1 order=1 nr_requested=32 nr_scanned=2 nr_skipped=2 nr_taken=0 lru=active_anon
> isolate_mode=0 classzone=1 order=1 nr_requested=32 nr_scanned=0 nr_skipped=0 nr_taken=0 lru=active_anon
> isolate_mode=0 classzone=1 order=0 nr_requested=32 nr_scanned=28835844 nr_skipped=28835844 nr_taken=0 lru=active_anon
> isolate_mode=0 classzone=1 order=1 nr_requested=32 nr_scanned=28835844 nr_skipped=28835844 nr_taken=0 lru=active_anon
> isolate_mode=0 classzone=1 order=0 nr_requested=32 nr_scanned=29 nr_skipped=29 nr_taken=0 lru=active_anon
> isolate_mode=0 classzone=1 order=0 nr_requested=32 nr_scanned=0 nr_skipped=0 nr_taken=0 lru=active_anon
> 
> In the trace, nr_skipped equals nr_scanned while nr_taken stays 0: every
> scanned folio was skipped as ineligible. If Active(anon) is increased to
> 1000 GB and the module is then loaded to trigger the ZONE_DMA32 watermark,
> a hard lockup occurs.
> 
> On my device, nr_scanned was 0x0000000003e3e937 when the hard lockup hit. Converted to a memory size, 0x0000000003e3e937 * 4 kB = 261072092 kB.
> 
> #5 [ffffc90006fb7c28] isolate_lru_folios at ffffffffa597df53
>     ffffc90006fb7c30: 0000000000000020 0000000000000000 
>     ffffc90006fb7c40: ffffc90006fb7d40 ffff88812cbd3000 
>     ffffc90006fb7c50: ffffc90006fb7d30 0000000106fb7de8 
>     ffffc90006fb7c60: ffffea04a2197008 ffffea0006ed4a48 
>     ffffc90006fb7c70: 0000000000000000 0000000000000000 
>     ffffc90006fb7c80: 0000000000000000 0000000000000000 
>     ffffc90006fb7c90: 0000000000000000 0000000000000000 
>     ffffc90006fb7ca0: 0000000000000000 0000000003e3e937 
>     ffffc90006fb7cb0: 0000000000000000 0000000000000000 
>     ffffc90006fb7cc0: 8d7c0b56b7874b00 ffff88812cbd3000 
> 
> > Why do you think it took eight years to be discovered?
> 
> The problem requires the following conditions to occur:
> 1. The machine must have a large amount of memory.
> 2. The pages on the LRU active_anon list must come mostly from the ZONE_NORMAL area.
> 3. Memory in ZONE_DMA32 must drop to its watermark.
> 
> If memory is not large enough, or if the use of ZONE_DMA32 memory is sensibly designed, the problem is difficult to trigger.
> 
> Note:
> The problem is most likely to occur with ZONE_DMA32 and ZONE_NORMAL, but other suitable scenarios may also trigger it.
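> 
> One way to bound the work (a sketch of the general idea only, not
> necessarily the exact patch): cap how many times the loop may set aside
> ineligible folios before giving up. The names nr_skipped_total and
> MAX_LRU_SKIPPED below are illustrative, not from the kernel:
> 
>     /*
>      * Sketch only: once the cap is hit, ineligible folios fall
>      * through to the normal isolation path instead of being
>      * skipped forever with the lru lock held.
>      */
>     if (nr_skipped_total < MAX_LRU_SKIPPED &&
>         folio_zonenum(folio) > sc->reclaim_idx) {
>             nr_skipped[folio_zonenum(folio)] += nr_pages;
>             list_move(&folio->lru, &folios_skipped);
>             nr_skipped_total++;
>             continue;
>     }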
> 
> > It looks like that will fix it, but perhaps something more fundamental
> > needs to be done - we're doing a tremendous amount of pretty pointless
> > work here.  Answers to my above questions will help us resolve this.
> > 
> > Thanks.
> 
> Please refer to the above explanation for details.
> 
> Thanks.

Thanks.



* Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios
  2024-08-23  2:04 ` Re: Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios liuye
@ 2024-09-03  2:34   ` liuye
  2024-09-03  3:03   ` liuye
  2024-09-06  1:16   ` liuye
  2 siblings, 0 replies; 6+ messages in thread
From: liuye @ 2024-09-03  2:34 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel

On 2024/8/23 10:04 AM, liuye wrote:
> [...]

Friendly ping.





* Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios
  2024-08-23  2:04 ` Re: Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios liuye
  2024-09-03  2:34   ` liuye
@ 2024-09-03  3:03   ` liuye
  2024-09-06  1:16   ` liuye
  2 siblings, 0 replies; 6+ messages in thread
From: liuye @ 2024-09-03  3:03 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel



On 2024/8/23 10:04 AM, liuye wrote:
> [...]
Friendly ping. 



* Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios
  2024-08-23  2:04 ` Re: Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios liuye
  2024-09-03  2:34   ` liuye
  2024-09-03  3:03   ` liuye
@ 2024-09-06  1:16   ` liuye
  2024-09-11  2:56     ` liuye
  2 siblings, 1 reply; 6+ messages in thread
From: liuye @ 2024-09-06  1:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel



On 2024/8/23 10:04 AM, liuye wrote:
> [...]
Friendly ping. 



* Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios
  2024-09-06  1:16   ` liuye
@ 2024-09-11  2:56     ` liuye
  2024-12-05 19:17       ` Yu Zhao
  0 siblings, 1 reply; 6+ messages in thread
From: liuye @ 2024-09-11  2:56 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel


Friendly ping.  

Thanks.


On 2024/9/6 9:16 AM, liuye wrote:
> [...]



* Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios
  2024-09-11  2:56     ` liuye
@ 2024-12-05 19:17       ` Yu Zhao
  0 siblings, 0 replies; 6+ messages in thread
From: Yu Zhao @ 2024-12-05 19:17 UTC (permalink / raw)
  To: liuye, Hugh Dickins; +Cc: akpm, linux-mm, linux-kernel

On Thu, Dec 5, 2024 at 8:19 AM liuye <liuye@kylinos.cn> wrote:
>
>
> Friendly ping.
>
> Thanks.

Hugh has responded on your "v2 RESEND":

https://lore.kernel.org/linux-mm/dae8ea77-2bc1-8ee9-b94b-207e2c8e1b8d@google.com/

> On 2024/9/6 9:16 AM, liuye wrote:
> > [...]



end of thread, other threads:[~2024-12-05 19:17 UTC | newest]

Thread overview: 6+ messages
     [not found] <20240815025226.8973-1-liuye@kylinos.cn>
2024-08-23  2:04 ` Re: Re: [PATCH] mm/vmscan: Fix hard LOCKUP in function isolate_lru_folios liuye
2024-09-03  2:34   ` liuye
2024-09-03  3:03   ` liuye
2024-09-06  1:16   ` liuye
2024-09-11  2:56     ` liuye
2024-12-05 19:17       ` Yu Zhao
