From: Jinjiang Tu <tujinjiang@huawei.com>
To: David Hildenbrand <david@redhat.com>, <osalvador@suse.de>,
<akpm@linux-foundation.org>, <nao.horiguchi@gmail.com>,
<zi.yan@cs.rutgers.edu>
Cc: <linux-mm@kvack.org>, <wangkefeng.wang@huawei.com>,
<sunnanyong@huawei.com>
Subject: Re: [PATCH] mm/memory_hotplug: fix call folio_test_large with tail page in do_migrate_range
Date: Tue, 25 Mar 2025 11:02:30 +0800 [thread overview]
Message-ID: <68ab727b-dc3d-327f-33b6-25bbfce8530e@huawei.com> (raw)
In-Reply-To: <899807c3-931f-43e6-bf3e-188787a4205a@redhat.com>
On 2025/3/24 21:44, David Hildenbrand wrote:
> On 24.03.25 14:17, Jinjiang Tu wrote:
>> We triggered the below BUG:
>>
>> page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x2 pfn:0x240402
>> head: order:9 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
>> flags: 0x1ffffe0000000040(head|node=1|zone=3|lastcpupid=0x1ffff)
>> page_type: f4(hugetlb)
>> page dumped because: VM_BUG_ON_PAGE(page->compound_head & 1)
>> ------------[ cut here ]------------
>> kernel BUG at ./include/linux/page-flags.h:310!
>> Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
>> Modules linked in:
>> CPU: 7 UID: 0 PID: 166 Comm: sh Not tainted 6.14.0-rc7-dirty #374
>> Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
>> pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>> pc : const_folio_flags+0x3c/0x58
>> lr : const_folio_flags+0x3c/0x58
>> Call trace:
>> const_folio_flags+0x3c/0x58 (P)
>> do_migrate_range+0x164/0x720
>> offline_pages+0x63c/0x6fc
>> memory_subsys_offline+0x190/0x1f4
>> device_offline+0xc0/0x13c
>> state_store+0x90/0xd8
>> dev_attr_store+0x18/0x2c
>> sysfs_kf_write+0x44/0x54
>> kernfs_fop_write_iter+0x120/0x1cc
>> vfs_write+0x240/0x378
>> ksys_write+0x70/0x108
>> __arm64_sys_write+0x1c/0x28
>> invoke_syscall+0x48/0x10c
>> el0_svc_common.constprop.0+0x40/0xe0
>>
>> When allocating a hugetlb folio, there is a window between the folio
>> being taken from the buddy allocator and prep_compound_page() being
>> called, during which start_isolate_page_range() and do_migrate_range()
>> can run. When do_migrate_range() scans the head page of the hugetlb
>> folio, its compound_head field isn't set yet, so the scan moves on to
>> the tail page. By that time the compound_head field of the tail page
>> has been set, and calling folio_test_large() on the tail page triggers
>> the VM_BUG_ON().
>>
>> To fix it, take a folio reference before calling folio_test_large().
>>
>> Fixes: 8135d8926c08 ("mm: memory_hotplug: memory hotremove supports thp migration")
>> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
>> ---
>> mm/memory_hotplug.c | 12 +++---------
>> 1 file changed, 3 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 16cf9e17077e..f600c26ce5de 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1813,21 +1813,15 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>>           page = pfn_to_page(pfn);
>>           folio = page_folio(page);
>> -        /*
>> -         * No reference or lock is held on the folio, so it might
>> -         * be modified concurrently (e.g. split). As such,
>> -         * folio_nr_pages() may read garbage. This is fine as the outer
>> -         * loop will revisit the split folio later.
>> -         */
>> -        if (folio_test_large(folio))
>> -            pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
>> -
>>           if (!folio_try_get(folio))
>>               continue;
>>           if (unlikely(page_folio(page) != folio))
>>               goto put_folio;
>> +        if (folio_test_large(folio))
>> +            pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
>
> Moving that will not make it able to skip the large frozen
> (refcount==0, e.g., free hugetlb) folio in the continue/put_folio case
> above. Hmmmm ..
For a free hugetlb folio, pfn is only advanced by 1 in each loop iteration,
so skipping over the free hugetlb folio just becomes slower.
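Roughly, with the large-folio skip moved after folio_try_get(), the scan
loop behaves like this (a simplified sketch of the patched loop, not the
exact upstream code):

for (pfn = start_pfn; pfn < end_pfn; pfn++) {
	struct page *page = pfn_to_page(pfn);
	struct folio *folio = page_folio(page);

	if (!folio_try_get(folio))
		continue;	/* frozen (e.g. free hugetlb): pfn only advances by 1 */

	if (unlikely(page_folio(page) != folio))
		goto put_folio;

	/* only reached with a reference held, so the folio cannot be freed under us */
	if (folio_test_large(folio))
		pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;

	/* ... isolate and queue the folio for migration ... */
put_folio:
	folio_put(folio);
}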
>
> We could similarly to dumping folios, snapshot them, so we can read
> stable data.
Do you mean extracting the snapshot code from __dump_page()? But taking a
snapshot may make do_migrate_range() slower, too.
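Something along these lines, perhaps (a very rough, untested sketch modelled
on the local copies __dump_page() makes; snapshot_large_skip() is just an
illustrative name, not an existing helper):

/*
 * Read flags/nr_pages from local copies so the values are at least
 * self-consistent even if the folio is being initialised or split
 * concurrently. Stale results are fine, the outer loop will revisit
 * the range later anyway.
 */
static unsigned long snapshot_large_skip(struct page *page, unsigned long pfn)
{
	struct page precise;
	struct folio snap;
	struct folio *foliop;

	memcpy(&precise, page, sizeof(precise));
	foliop = page_folio(&precise);
	if (foliop == (struct folio *)&precise) {
		/* the copy is not marked as a tail page */
		if (!folio_test_large(foliop))
			return pfn;
		foliop = (struct folio *)page;
	}

	/* copy head + first tail page, as __dump_page() does */
	memcpy(&snap, foliop, 2 * sizeof(struct page));
	if (folio_test_large(&snap))
		return folio_pfn(foliop) + folio_nr_pages(&snap) - 1;
	return pfn;
}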