linux-mm.kvack.org archive mirror
From: Kairui Song <ryncsn@gmail.com>
To: Usama Arif <usamaarif642@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	hannes@cmpxchg.org,  riel@surriel.com, shakeel.butt@linux.dev,
	roman.gushchin@linux.dev,  yuzhao@google.com, david@redhat.com,
	baohua@kernel.org, ryan.roberts@arm.com,  rppt@kernel.org,
	willy@infradead.org, cerasuolodomenico@gmail.com,
	 corbet@lwn.net, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org,  kernel-team@meta.com,
	Shuang Zhai <zhais@google.com>
Subject: Re: [PATCH v3 1/6] mm: free zapped tail pages when splitting isolated thp
Date: Sat, 17 Aug 2024 02:11:10 +0800	[thread overview]
Message-ID: <CAMgjq7BfVKQQtK-8SF1RW85aUFO9YuxkU-1QRVZ_MDhGw34JrQ@mail.gmail.com> (raw)
In-Reply-To: <403b7f3c-6e5b-4030-ab1c-3198f36e3f73@gmail.com>

On Sat, Aug 17, 2024 at 1:03 AM Usama Arif <usamaarif642@gmail.com> wrote:
> On 16/08/2024 17:55, Kairui Song wrote:
> > On Fri, Aug 16, 2024 at 3:16 AM Usama Arif <usamaarif642@gmail.com> wrote:
> >> On 15/08/2024 19:47, Kairui Song wrote:
> >>> On Tue, Aug 13, 2024 at 8:03 PM Usama Arif <usamaarif642@gmail.com> wrote:
> >>>>
> >>>> From: Yu Zhao <yuzhao@google.com>
> >>>>
> >>>> If a tail page has only two references left, one inherited from the
> >>>> isolation of its head and the other from lru_add_page_tail() which we
> >>>> are about to drop, it means this tail page was concurrently zapped.
> >>>> Then we can safely free it and save page reclaim or migration the
> >>>> trouble of trying it.
> >>>>
> >>>> Signed-off-by: Yu Zhao <yuzhao@google.com>
> >>>> Tested-by: Shuang Zhai <zhais@google.com>
> >>>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
> >>>> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> >>>> ---
> >>>>  mm/huge_memory.c | 27 +++++++++++++++++++++++++++
> >>>>  1 file changed, 27 insertions(+)
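For orientation, the check described above amounts to roughly the fragment below, placed in the loop in __split_huge_page() that walks the freshly split tail folios. This is an illustrative sketch pieced together from the commit message and the hunks quoted later in this thread, not the exact upstream code; in particular, expressing the "only two references left" test as folio_ref_freeze(new_folio, 2) is an assumption:

	if (folio_ref_freeze(new_folio, 2)) {
		/*
		 * Freezing the count at exactly 2 succeeds only if the sole
		 * remaining references are the one inherited from isolating
		 * the head and the one from lru_add_page_tail(), which the
		 * split path is about to drop: the tail was concurrently
		 * zapped and nobody else can still be using it.
		 */
		folio_clear_active(new_folio);
		folio_clear_unevictable(new_folio);
		/* hand it to the batched free path shown in the hunks below */
	}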
> >>>
> >>> Hi, Usama, Yu
> >>>
> >>> This commit causes the kernel to panic very quickly when running a
> >>> kernel build test on top of tmpfs with all mTHP sizes enabled; the
> >>> panic comes after:
> >>>
> >>
> >> Hi,
> >>
> >> Thanks for pointing this out. It is a very silly bug I introduced when going from the page version of the patch in v1 to the folio version in v3.
> >>
> >> Applying the following on top of this patch will fix it:
> >>
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 907813102430..a6ca454e1168 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -3183,7 +3183,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> >>
> >>                         folio_clear_active(new_folio);
> >>                         folio_clear_unevictable(new_folio);
> >> -                       if (!folio_batch_add(&free_folios, folio)) {
> >> +                       if (!folio_batch_add(&free_folios, new_folio)) {
> >>                                 mem_cgroup_uncharge_folios(&free_folios);
> >>                                 free_unref_folios(&free_folios);
> >>                         }
> >>
> >>
> >> I will include it in the next revision.
> >>
> >
> > Hi,
> >
> > After the fix, I'm still seeing the panic below:
> > [   24.926629] list_del corruption. prev->next should be
> > ffffea000491cf88, but was ffffea0006207708. (prev=ffffea000491cfc8)
> > [   24.930783] ------------[ cut here ]------------
> > [   24.932519] kernel BUG at lib/list_debug.c:64!
> > [   24.934325] Oops: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
> > [   24.936339] CPU: 32 UID: 0 PID: 2112 Comm: gzip Not tainted
> > 6.11.0-rc3.ptch+ #147
> > [   24.938575] Hardware name: Red Hat KVM/RHEL-AV, BIOS 0.0.0 02/06/2015
> > [   24.940680] RIP: 0010:__list_del_entry_valid_or_report+0xaa/0xc0
> > [   24.942536] Code: 8c ff 0f 0b 48 89 fe 48 c7 c7 f8 9d 51 82 e8 9d
> > 36 8c ff 0f 0b 48 89 d1 48 89 f2 48 89 fe 48 c7 c7 30 9e 51 82 e8 86
> > 36 8c ff <0f> 0b 48 c7 c7 80 9e 51 82 e8 78 36 8c ff 0f 0b 66 0f 1f 44
> > 00 00
> > [   24.948418] RSP: 0018:ffffc90005c2b770 EFLAGS: 00010246
> > [   24.949996] RAX: 000000000000006d RBX: ffffea000491cf88 RCX: 0000000000000000
> > [   24.952293] RDX: 0000000000000000 RSI: ffff889ffee1c180 RDI: ffff889ffee1c180
> > [   24.954616] RBP: ffffea000491cf80 R08: 0000000000000000 R09: c0000000ffff7fff
> > [   24.956908] R10: 0000000000000001 R11: ffffc90005c2b5a8 R12: ffffc90005c2b954
> > [   24.959253] R13: ffffc90005c2bbc0 R14: ffffc90005c2b7c0 R15: ffffc90005c2b940
> > [   24.961410] FS:  00007fe5a235e740(0000) GS:ffff889ffee00000(0000)
> > knlGS:0000000000000000
> > [   24.963587] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [   24.965112] CR2: 00007fe5a24ddcd0 CR3: 000000010cb40001 CR4: 0000000000770eb0
> > [   24.967037] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [   24.968933] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > [   24.970802] PKRU: 55555554
> > [   24.971559] Call Trace:
> > [   24.972241]  <TASK>
> > [   24.972805]  ? __die_body+0x1e/0x60
> > [   24.973756]  ? die+0x3c/0x60
> > [   24.974450]  ? do_trap+0xe8/0x110
> > [   24.975235]  ? __list_del_entry_valid_or_report+0xaa/0xc0
> > [   24.976543]  ? do_error_trap+0x65/0x80
> > [   24.977542]  ? __list_del_entry_valid_or_report+0xaa/0xc0
> > [   24.978891]  ? exc_invalid_op+0x50/0x70
> > [   24.979870]  ? __list_del_entry_valid_or_report+0xaa/0xc0
> > [   24.981295]  ? asm_exc_invalid_op+0x1a/0x20
> > [   24.982389]  ? __list_del_entry_valid_or_report+0xaa/0xc0
> > [   24.983781]  shrink_folio_list+0x39a/0x1200
> > [   24.984898]  shrink_inactive_list+0x1c0/0x420
> > [   24.986082]  shrink_lruvec+0x5db/0x780
> > [   24.987078]  shrink_node+0x243/0xb00
> > [   24.988063]  ? get_pfnblock_flags_mask.constprop.117+0x1d/0x50
> > [   24.989622]  do_try_to_free_pages+0xbd/0x4e0
> > [   24.990732]  try_to_free_mem_cgroup_pages+0x107/0x230
> > [   24.992034]  try_charge_memcg+0x184/0x5d0
> > [   24.993145]  obj_cgroup_charge_pages+0x38/0x110
> > [   24.994326]  __memcg_kmem_charge_page+0x8d/0xf0
> > [   24.995531]  __alloc_pages_noprof+0x278/0x360
> > [   24.996712]  alloc_pages_mpol_noprof+0xf0/0x230
> > [   24.997896]  pipe_write+0x2ad/0x5f0
> > [   24.998837]  ? __pfx_tick_nohz_handler+0x10/0x10
> > [   25.000234]  ? update_process_times+0x8c/0xa0
> > [   25.001377]  ? timerqueue_add+0x77/0x90
> > [   25.002257]  vfs_write+0x39b/0x420
> > [   25.003083]  ksys_write+0xbd/0xd0
> > [   25.003950]  do_syscall_64+0x47/0x110
> > [   25.004917]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> > [   25.006210] RIP: 0033:0x7fe5a246f784
> > [   25.007149] Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f
> > 84 00 00 00 00 00 f3 0f 1e fa 80 3d c5 08 0e 00 00 74 13 b8 01 00 00
> > 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20
> > 48 89
> > [   25.011961] RSP: 002b:00007ffdb0057b38 EFLAGS: 00000202 ORIG_RAX:
> > 0000000000000001
> > [   25.013946] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007fe5a246f784
> > [   25.015817] RDX: 0000000000008000 RSI: 0000558c0d311420 RDI: 0000000000000001
> > [   25.017717] RBP: 00007ffdb0057b60 R08: 0000558c0d258c40 R09: 0000558c0d311420
> > [   25.019618] R10: 00007ffdb0057600 R11: 0000000000000202 R12: 0000000000008000
> > [   25.021519] R13: 0000558c0d311420 R14: 0000000000000029 R15: 0000000000001f8d
> > [   25.023412]  </TASK>
> > [   25.023998] Modules linked in:
> > [   25.024900] ---[ end trace 0000000000000000 ]---
> > [   25.026329] RIP: 0010:__list_del_entry_valid_or_report+0xaa/0xc0
> > [   25.027885] Code: 8c ff 0f 0b 48 89 fe 48 c7 c7 f8 9d 51 82 e8 9d
> > 36 8c ff 0f 0b 48 89 d1 48 89 f2 48 89 fe 48 c7 c7 30 9e 51 82 e8 86
> > 36 8c ff <0f> 0b 48 c7 c7 80 9e 51 82 e8 78 36 8c ff 0f 0b 66 0f 1f 44
> > 00 00
> > [   25.032525] RSP: 0018:ffffc90005c2b770 EFLAGS: 00010246
> > [   25.033892] RAX: 000000000000006d RBX: ffffea000491cf88 RCX: 0000000000000000
> > [   25.035758] RDX: 0000000000000000 RSI: ffff889ffee1c180 RDI: ffff889ffee1c180
> > [   25.037661] RBP: ffffea000491cf80 R08: 0000000000000000 R09: c0000000ffff7fff
> > [   25.039543] R10: 0000000000000001 R11: ffffc90005c2b5a8 R12: ffffc90005c2b954
> > [   25.041426] R13: ffffc90005c2bbc0 R14: ffffc90005c2b7c0 R15: ffffc90005c2b940
> > [   25.043323] FS:  00007fe5a235e740(0000) GS:ffff889ffee00000(0000)
> > knlGS:0000000000000000
> > [   25.045478] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [   25.047013] CR2: 00007fe5a24ddcd0 CR3: 000000010cb40001 CR4: 0000000000770eb0
> > [   25.048935] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [   25.050858] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > [   25.052881] PKRU: 55555554
> > [   25.053634] Kernel panic - not syncing: Fatal exception
> > [   25.056902] Kernel Offset: disabled
> > [   25.057827] ---[ end Kernel panic - not syncing: Fatal exception ]---
> >
> > If I revert both the fix and this patch, the panic is gone. Let me know
> > if I can help debug it.
>
> Yes, this is also needed to prevent a race with shrink_folio_list():
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a6ca454e1168..75f5b059e804 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3183,6 +3183,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>
>                         folio_clear_active(new_folio);
>                         folio_clear_unevictable(new_folio);
> +                       list_del(&new_folio->lru);
>                         if (!folio_batch_add(&free_folios, new_folio)) {
>                                 mem_cgroup_uncharge_folios(&free_folios);
>                                 free_unref_folios(&free_folios);
>
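To spell out why the extra list_del() is needed: lru_add_page_tail() has already linked new_folio into the list that __split_huge_page() hands back to its caller, which in the trace above is reclaim's folio list. Freeing the folio without unlinking it first leaves a dangling entry on that list, and the next list_del() performed by shrink_folio_list() trips over it, producing exactly the "list_del corruption" report quoted earlier. With both fixes from this thread applied, the block looks roughly like this (an illustrative sketch, not the exact upstream hunk):

	folio_clear_active(new_folio);
	folio_clear_unevictable(new_folio);
	/* unlink from the caller's folio list before freeing */
	list_del(&new_folio->lru);
	if (!folio_batch_add(&free_folios, new_folio)) {
		/* no free slots left: uncharge and free the batch now */
		mem_cgroup_uncharge_folios(&free_folios);
		free_unref_folios(&free_folios);
	}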
>
> I have tested this, so it should be OK, but let me know otherwise.
>
> I will include this in the next revision, which I will send soon.
>
> Thanks.

Thanks for the update; the panic is gone now.



Thread overview: 42+ messages
2024-08-13 12:02 [PATCH v3 0/6] mm: split underutilized THPs Usama Arif
2024-08-13 12:02 ` [PATCH v3 1/6] mm: free zapped tail pages when splitting isolated thp Usama Arif
2024-08-15 18:47   ` Kairui Song
2024-08-15 19:16     ` Usama Arif
2024-08-16 16:55       ` Kairui Song
2024-08-16 17:02         ` Usama Arif
2024-08-16 18:11           ` Kairui Song [this message]
2024-08-13 12:02 ` [PATCH v3 2/6] mm: remap unused subpages to shared zeropage " Usama Arif
2024-08-13 12:02 ` [PATCH v3 3/6] mm: selftest to verify zero-filled pages are mapped to zeropage Usama Arif
2024-08-13 12:02 ` [PATCH v3 4/6] mm: Introduce a pageflag for partially mapped folios Usama Arif
2024-08-14  3:30   ` Yu Zhao
2024-08-14 10:20     ` Usama Arif
2024-08-14 10:44   ` Barry Song
2024-08-14 10:52     ` Barry Song
2024-08-14 11:11     ` Usama Arif
2024-08-14 11:20       ` Barry Song
2024-08-14 11:26         ` Barry Song
2024-08-14 11:30         ` Usama Arif
2024-08-14 11:10   ` Barry Song
2024-08-14 11:20     ` Usama Arif
2024-08-14 11:23       ` Barry Song
2024-08-14 12:36         ` Usama Arif
2024-08-14 23:05           ` Barry Song
2024-08-15 15:25             ` Usama Arif
2024-08-15 23:30               ` Andrew Morton
2024-08-16  2:50                 ` Yu Zhao
2024-08-15 16:33   ` David Hildenbrand
2024-08-15 17:10     ` Usama Arif
2024-08-15 21:06       ` Barry Song
2024-08-15 21:08       ` David Hildenbrand
2024-08-16 15:44   ` Matthew Wilcox
2024-08-16 16:08     ` Usama Arif
2024-08-16 16:28       ` Matthew Wilcox
2024-08-16 16:41         ` Usama Arif
2024-08-13 12:02 ` [PATCH v3 5/6] mm: split underutilized THPs Usama Arif
2024-08-13 12:02 ` [PATCH v3 6/6] mm: add sysfs entry to disable splitting " Usama Arif
2024-08-13 17:22 ` [PATCH v3 0/6] mm: split " Andi Kleen
2024-08-14 10:13   ` Usama Arif
2024-08-18  5:13 ` Hugh Dickins
2024-08-18  7:45   ` David Hildenbrand
2024-08-19  2:38     ` Usama Arif
2024-08-19  2:36   ` Usama Arif
