From: Andrew Morton <akpm@linux-foundation.org>
To: Yuan Liu <yuan1.liu@intel.com>
Cc: baolu.lu@linux.intel.com, david@redhat.com, rppt@kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH] mm: fix huge page table not free after memory unplug
Date: Mon, 22 Dec 2025 17:15:12 -0800
Message-ID: <20251222171512.edfb1e839e3f007d89825363@linux-foundation.org>
In-Reply-To: <20251222041117.44865-1-yuan1.liu@intel.com>
On Sun, 21 Dec 2025 23:11:17 -0500 Yuan Liu <yuan1.liu@intel.com> wrote:
> Newly plugged memory is mapped as huge (prot_sethuge()) via
> phys_pmd_init(), but PG_head is never set on the backing page. During
> memory unplug, free_hugepage_table() frees that page table as a 2M
> (order-9) allocation, while pagetable_free() handles it as a single
> 4K page.
>
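A minimal sketch of where the 2M/4K mismatch comes from, paraphrasing
pagetable_free() from include/linux/mm.h (an illustration from memory,
not a verbatim copy of the tree): compound_order() is 0 for a page
without PG_head, so only a single 4K page goes back to the buddy
allocator even when the allocation spans 2M.

	/*
	 * Paraphrased helper, illustration only.  Without PG_head on
	 * the order-9 allocation, compound_order(page) evaluates to 0
	 * and __free_pages() releases just the first 4K page.
	 */
	static inline void pagetable_free(struct ptdesc *pt)
	{
		struct page *page = ptdesc_page(pt);

		__free_pages(page, compound_order(page));
	}
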
> The following memory plug/unplug test for a VM [1], run in the
> environment [2], shows the results:
>
> +-----------------------+------+------+
> | Check System Memory   | Plug |Unplug|
> | via free -h           |256GB |256GB |
> +-----------------------+------+------+
> | Free 4K page table    |257GB |5.6GB |
> +-----------------------+------+------+
> | Free 2M page table    |257GB |1.7GB |
> +-----------------------+------+------+
>
> [1] QEMU commands to hotplug and unplug 256G of memory for a VM:
> object_add memory-backend-ram,id=hotmem0,size=256G,share=on
> device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
> qom-set vmem1 requested-size 256G (Plug Memory)
> qom-set vmem1 requested-size 0G (Unplug Memory)
>
> [2] Hardware : Intel Icelake server
> Guest Kernel : v6.19-rc1
> Qemu : v9.0.0
>
> Launch VM:
> qemu-system-x86_64 -accel kvm -cpu host \
> -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
> -drive file=./seed.img,format=raw,if=virtio \
> -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
> -m 2G,slots=10,maxmem=2052472M \
> -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
> -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
> -nographic -machine q35 \
> -nic user,hostfwd=tcp::3000-:22
>
> The guest kernel is configured to auto-online newly added memory blocks:
> echo online > /sys/devices/system/memory/auto_online_blocks
>
> ...
>
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1028,7 +1028,7 @@ static void __meminit free_pagetable(struct page *page, int order)
>  		free_reserved_pages(page, nr_pages);
>  #endif
>  	} else {
> -		pagetable_free(page_ptdesc(page));
> +		__free_pages(page, order);
>  	}
>  }
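For context, free_pagetable() is reached here from
free_hugepage_table(), which passes get_order(PMD_SIZE), i.e. order 9
with 4K pages.  A paraphrase of that caller, assuming it is unchanged
in the current tree:

	static void __meminit free_hugepage_table(struct page *page,
						  struct vmem_altmap *altmap)
	{
		if (altmap)
			vmem_altmap_free(altmap, PMD_SIZE / PAGE_SIZE);
		else
			/* the whole 2M range comes in as one order-9 free */
			free_pagetable(page, get_order(PMD_SIZE));
	}

So the order argument carries the 2M size; pagetable_free() derived
its own (zero) order from the non-compound page, while
__free_pages(page, order) uses it as passed.
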
This reverts half of bf9e4e30f353 ("x86/mm: use pagetable_free()").
What about the other half? The below change that patch made to
arch/x86/mm/pat/set_memory.c - is that OK?
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -429,7 +429,7 @@ static void cpa_collapse_large_pages(struct cpa_data *cpa)
 	list_for_each_entry_safe(ptdesc, tmp, &pgtables, pt_list) {
 		list_del(&ptdesc->pt_list);
-		__free_page(ptdesc_page(ptdesc));
+		pagetable_free(ptdesc);
 	}
 }