From: Michael Roth <michael.roth@amd.com>
To: Ackerley Tng <ackerleytng@google.com>
Cc: <kvm@vger.kernel.org>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>, <x86@kernel.org>,
<linux-fsdevel@vger.kernel.org>, <aik@amd.com>,
<ajones@ventanamicro.com>, <akpm@linux-foundation.org>,
<amoorthy@google.com>, <anthony.yznaga@oracle.com>,
<anup@brainfault.org>, <aou@eecs.berkeley.edu>,
<bfoster@redhat.com>, <binbin.wu@linux.intel.com>,
<brauner@kernel.org>, <catalin.marinas@arm.com>,
<chao.p.peng@intel.com>, <chenhuacai@kernel.org>,
<dave.hansen@intel.com>, <david@redhat.com>,
<dmatlack@google.com>, <dwmw@amazon.co.uk>,
<erdemaktas@google.com>, <fan.du@intel.com>, <fvdl@google.com>,
<graf@amazon.com>, <haibo1.xu@intel.com>, <hch@infradead.org>,
<hughd@google.com>, <ira.weiny@intel.com>,
<isaku.yamahata@intel.com>, <jack@suse.cz>, <james.morse@arm.com>,
<jarkko@kernel.org>, <jgg@ziepe.ca>, <jgowans@amazon.com>,
<jhubbard@nvidia.com>, <jroedel@suse.de>, <jthoughton@google.com>,
<jun.miao@intel.com>, <kai.huang@intel.com>, <keirf@google.com>,
<kent.overstreet@linux.dev>, <kirill.shutemov@intel.com>,
<liam.merwick@oracle.com>, <maciej.wieczor-retman@intel.com>,
<mail@maciej.szmigiero.name>, <maz@kernel.org>, <mic@digikod.net>,
<mpe@ellerman.id.au>, <muchun.song@linux.dev>, <nikunj@amd.com>,
<nsaenz@amazon.es>, <oliver.upton@linux.dev>,
<palmer@dabbelt.com>, <pankaj.gupta@amd.com>,
<paul.walmsley@sifive.com>, <pbonzini@redhat.com>,
<pdurrant@amazon.co.uk>, <peterx@redhat.com>, <pgonda@google.com>,
<pvorel@suse.cz>, <qperret@google.com>,
<quic_cvanscha@quicinc.com>, <quic_eberman@quicinc.com>,
<quic_mnalajal@quicinc.com>, <quic_pderrin@quicinc.com>,
<quic_pheragu@quicinc.com>, <quic_svaddagi@quicinc.com>,
<quic_tsoni@quicinc.com>, <richard.weiyang@gmail.com>,
<rick.p.edgecombe@intel.com>, <rientjes@google.com>,
<roypat@amazon.co.uk>, <rppt@kernel.org>, <seanjc@google.com>,
<shuah@kernel.org>, <steven.price@arm.com>,
<steven.sistare@oracle.com>, <suzuki.poulose@arm.com>,
<tabba@google.com>, <thomas.lendacky@amd.com>,
<usama.arif@bytedance.com>, <vannapurve@google.com>,
<vbabka@suse.cz>, <viro@zeniv.linux.org.uk>,
<vkuznets@redhat.com>, <wei.w.wang@intel.com>, <will@kernel.org>,
<willy@infradead.org>, <xiaoyao.li@intel.com>,
<yan.y.zhao@intel.com>, <yilun.xu@intel.com>,
<yuzenghui@huawei.com>, <zhiquan1.li@intel.com>
Subject: Re: [RFC PATCH v2 35/51] mm: guestmem_hugetlb: Add support for splitting and merging pages
Date: Tue, 16 Sep 2025 17:28:01 -0500
Message-ID: <20250916222801.dlew6mq7kog2q5ni@amd.com>
In-Reply-To: <2ae41e0d80339da2b57011622ac2288fed65cd01.1747264138.git.ackerleytng@google.com>
On Wed, May 14, 2025 at 04:42:14PM -0700, Ackerley Tng wrote:
> These functions allow guest_memfd to split and merge HugeTLB pages,
> and clean them up on freeing the page.
>
> For merging and splitting pages on conversion, guestmem_hugetlb
> expects the refcount on the pages to already be 0. The caller must
> ensure that.
>
> For conversions, guest_memfd ensures that the refcounts are already 0
> by checking that there are no unexpected refcounts, and then freezing
> the expected refcounts away. On unexpected refcounts, guest_memfd will
> return an error to userspace.
>
> For truncation, on unexpected refcounts, guest_memfd will return an
> error to userspace.
>
> For truncation on closing, guest_memfd will just remove its own
> refcounts (the filemap refcounts) and mark split pages with
> PGTY_guestmem_hugetlb.
>
> The presence of PGTY_guestmem_hugetlb will trigger the folio_put()
> callback to handle further cleanup. This cleanup process will merge
> pages (with refcount 0, since cleanup is triggered from folio_put())
> before returning the pages to HugeTLB.
>
> Since the merging process is long, it is deferred to a worker thread
> since folio_put() could be called from atomic context.
>
> Change-Id: Ib04a3236f1e7250fd9af827630c334d40fb09d40
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Co-developed-by: Vishal Annapurve <vannapurve@google.com>
> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---
> include/linux/guestmem.h | 3 +
> mm/guestmem_hugetlb.c | 349 ++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 347 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/guestmem.h b/include/linux/guestmem.h
> index 4b2d820274d9..3ee816d1dd34 100644
> --- a/include/linux/guestmem.h
> +++ b/include/linux/guestmem.h
> @@ -8,6 +8,9 @@ struct guestmem_allocator_operations {
> void *(*inode_setup)(size_t size, u64 flags);
> void (*inode_teardown)(void *private, size_t inode_size);
> struct folio *(*alloc_folio)(void *private);
> + int (*split_folio)(struct folio *folio);
> + void (*merge_folio)(struct folio *folio);
> + void (*free_folio)(struct folio *folio);
> /*
> * Returns the number of PAGE_SIZE pages in a page that this guestmem
> * allocator provides.
> diff --git a/mm/guestmem_hugetlb.c b/mm/guestmem_hugetlb.c
> index ec5a188ca2a7..8727598cf18e 100644
> --- a/mm/guestmem_hugetlb.c
> +++ b/mm/guestmem_hugetlb.c
> @@ -11,15 +11,12 @@
> #include <linux/mm.h>
> #include <linux/mm_types.h>
> #include <linux/pagemap.h>
> +#include <linux/xarray.h>
>
> #include <uapi/linux/guestmem.h>
>
> #include "guestmem_hugetlb.h"
> -
> -void guestmem_hugetlb_handle_folio_put(struct folio *folio)
> -{
> - WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
> -}
> +#include "hugetlb_vmemmap.h"
>
> struct guestmem_hugetlb_private {
> struct hstate *h;
> @@ -34,6 +31,339 @@ static size_t guestmem_hugetlb_nr_pages_in_folio(void *priv)
> return pages_per_huge_page(private->h);
> }
>
> +static DEFINE_XARRAY(guestmem_hugetlb_stash);
> +
> +struct guestmem_hugetlb_metadata {
> + void *_hugetlb_subpool;
> + void *_hugetlb_cgroup;
> + void *_hugetlb_hwpoison;
> + void *private;
> +};
> +
> +struct guestmem_hugetlb_stash_item {
> + struct guestmem_hugetlb_metadata hugetlb_metadata;
> + /* hstate tracks the original size of this folio. */
> + struct hstate *h;
> + /* Count of split pages, individually freed, waiting to be merged. */
> + atomic_t nr_pages_waiting_to_be_merged;
> +};
> +
> +struct workqueue_struct *guestmem_hugetlb_wq __ro_after_init;
> +static struct work_struct guestmem_hugetlb_cleanup_work;
> +static LLIST_HEAD(guestmem_hugetlb_cleanup_list);
> +
> +static inline void guestmem_hugetlb_register_folio_put_callback(struct folio *folio)
> +{
> + __folio_set_guestmem_hugetlb(folio);
> +}
> +
> +static inline void guestmem_hugetlb_unregister_folio_put_callback(struct folio *folio)
> +{
> + __folio_clear_guestmem_hugetlb(folio);
> +}
> +
> +static inline void guestmem_hugetlb_defer_cleanup(struct folio *folio)
> +{
> + struct llist_node *node;
> +
> + /*
> + * Reuse the folio->mapping pointer as a struct llist_node, since
> + * folio->mapping is NULL at this point.
> + */
> + BUILD_BUG_ON(sizeof(folio->mapping) != sizeof(struct llist_node));
> + node = (struct llist_node *)&folio->mapping;
> +
> + /*
> + * Only schedule work if list is previously empty. Otherwise,
> + * schedule_work() had been called but the workfn hasn't retrieved the
> + * list yet.
> + */
> + if (llist_add(node, &guestmem_hugetlb_cleanup_list))
> + queue_work(guestmem_hugetlb_wq, &guestmem_hugetlb_cleanup_work);
> +}
> +
> +void guestmem_hugetlb_handle_folio_put(struct folio *folio)
> +{
> + guestmem_hugetlb_unregister_folio_put_callback(folio);
> +
> + /*
> + * folio_put() can be called in interrupt context, hence do the work
> + * outside of interrupt context
> + */
> + guestmem_hugetlb_defer_cleanup(folio);
> +}
> +
> +/*
> + * Stash existing hugetlb metadata. Use this function just before splitting a
> + * hugetlb page.
> + */
> +static inline void
> +__guestmem_hugetlb_stash_metadata(struct guestmem_hugetlb_metadata *metadata,
> + struct folio *folio)
> +{
> + /*
> + * (folio->page + 1) doesn't have to be stashed since those fields are
> + * known on split/reconstruct and will be reinitialized anyway.
> + */
> +
> + /*
> + * subpool is created for every guest_memfd inode, but the folios will
> + * outlive the inode, hence we store the subpool here.
> + */
> + metadata->_hugetlb_subpool = folio->_hugetlb_subpool;
> + /*
> + * _hugetlb_cgroup has to be stored for freeing
> + * later. _hugetlb_cgroup_rsvd does not, since it is NULL for
> + * guest_memfd folios anyway. guest_memfd reservations are handled in
> + * the inode.
> + */
> + metadata->_hugetlb_cgroup = folio->_hugetlb_cgroup;
> + metadata->_hugetlb_hwpoison = folio->_hugetlb_hwpoison;
> +
> + /*
> + * HugeTLB flags are stored in folio->private. stash so that ->private
> + * can be used by core-mm.
> + */
> + metadata->private = folio->private;
> +}
> +
> +static int guestmem_hugetlb_stash_metadata(struct folio *folio)
> +{
> + XA_STATE(xas, &guestmem_hugetlb_stash, 0);
> + struct guestmem_hugetlb_stash_item *stash;
> + void *entry;
> +
> + stash = kzalloc(sizeof(*stash), 1);
> + if (!stash)
> + return -ENOMEM;
> +
> + stash->h = folio_hstate(folio);
> + __guestmem_hugetlb_stash_metadata(&stash->hugetlb_metadata, folio);
> +
> + xas_set_order(&xas, folio_pfn(folio), folio_order(folio));
> +
> + xas_lock(&xas);
> + entry = xas_store(&xas, stash);
> + xas_unlock(&xas);
> +
> + if (xa_is_err(entry)) {
> + kfree(stash);
> + return xa_err(entry);
> + }
> +
> + return 0;
> +}
> +
> +static inline void
> +__guestmem_hugetlb_unstash_metadata(struct guestmem_hugetlb_metadata *metadata,
> + struct folio *folio)
> +{
> + folio->_hugetlb_subpool = metadata->_hugetlb_subpool;
> + folio->_hugetlb_cgroup = metadata->_hugetlb_cgroup;
> + folio->_hugetlb_cgroup_rsvd = NULL;
> + folio->_hugetlb_hwpoison = metadata->_hugetlb_hwpoison;
> +
> + folio_change_private(folio, metadata->private);
Hi Ackerley,
We've been doing some testing with this series on top of David's
guestmemfd-preview branch, with some SNP enablement[1][2] to exercise
this code along with the NUMA support from Shivank. (BTW, I know you
have v3 in the works, so let me know if we can help with testing that
as well.)
One issue we hit is that after a split->merge sequence, unstashing the
private data causes folio_test_hugetlb_vmemmap_optimized() to report true
even though hugetlb_vmemmap_optimize_folio() hasn't actually been called
yet. When that call does happen it gets skipped, so some HVO savings can
be lost this way.
More troublesome, however, is that if you later split the folio again,
hugetlb_vmemmap_restore_folio() may hit a BUG_ON(), since the flags are in
a state that's inconsistent with the actual state of the folio/vmemmap.
The following patch seems to resolve the issue, but I'm not sure what the
best approach would be:
https://github.com/AMDESE/linux/commit/b1f25956f18d32730b8d4ded6d77e980091eb4d3
Thanks,
Mike
[1] https://github.com/AMDESE/linux/commits/snp-hugetlb-v2-wip0/
[2] https://github.com/AMDESE/qemu/tree/snp-hugetlb-dev-wip0