From: Ackerley Tng <ackerleytng@google.com>
To: Yan Zhao <yan.y.zhao@intel.com>
Cc: vannapurve@google.com, chenyi.qiang@intel.com, tabba@google.com,
quic_eberman@quicinc.com, roypat@amazon.co.uk, jgg@nvidia.com,
peterx@redhat.com, david@redhat.com, rientjes@google.com,
fvdl@google.com, jthoughton@google.com, seanjc@google.com,
pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com,
jun.miao@intel.com, isaku.yamahata@intel.com,
muchun.song@linux.dev, erdemaktas@google.com,
qperret@google.com, jhubbard@nvidia.com, willy@infradead.org,
shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com,
kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
richard.weiyang@gmail.com, anup@brainfault.org,
haibo1.xu@intel.com, ajones@ventanamicro.com,
vkuznets@redhat.com, maciej.wieczor-retman@intel.com,
pgonda@google.com, oliver.upton@linux.dev,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kvm@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: Re: [RFC PATCH 39/39] KVM: guest_memfd: Dynamically split/reconstruct HugeTLB page
Date: Wed, 30 Apr 2025 13:09:33 -0700
Message-ID: <diqz4iy5xvgi.fsf@ackerleytng-ctop.c.googlers.com>
In-Reply-To: <aA7UXI0NB7oQQrL2@yzhao56-desk.sh.intel.com> (message from Yan Zhao on Mon, 28 Apr 2025 09:05:32 +0800)
Yan Zhao <yan.y.zhao@intel.com> writes:
> On Fri, Apr 25, 2025 at 03:45:20PM -0700, Ackerley Tng wrote:
>> Yan Zhao <yan.y.zhao@intel.com> writes:
>>
>> > On Thu, Apr 24, 2025 at 11:15:11AM -0700, Ackerley Tng wrote:
>> >> Vishal Annapurve <vannapurve@google.com> writes:
>> >>
>> >> > On Thu, Apr 24, 2025 at 1:15 AM Yan Zhao <yan.y.zhao@intel.com> wrote:
>> >> >>
>> >> >> On Thu, Apr 24, 2025 at 01:55:51PM +0800, Chenyi Qiang wrote:
>> >> >> >
>> >> >> >
>> >> >> > On 4/24/2025 12:25 PM, Yan Zhao wrote:
>> >> >> > > On Thu, Apr 24, 2025 at 09:09:22AM +0800, Yan Zhao wrote:
>> >> >> > >> On Wed, Apr 23, 2025 at 03:02:02PM -0700, Ackerley Tng wrote:
>> >> >> > >>> Yan Zhao <yan.y.zhao@intel.com> writes:
>> >> >> > >>>
>> >> >> > >>>> On Tue, Sep 10, 2024 at 11:44:10PM +0000, Ackerley Tng wrote:
>> >> >> > >>>>> +/*
>> >> >> > >>>>> + * Allocates and then caches a folio in the filemap. Returns a folio with
>> >> >> > >>>>> + * refcount of 2: 1 after allocation, and 1 taken by the filemap.
>> >> >> > >>>>> + */
>> >> >> > >>>>> +static struct folio *kvm_gmem_hugetlb_alloc_and_cache_folio(struct inode *inode,
>> >> >> > >>>>> + pgoff_t index)
>> >> >> > >>>>> +{
>> >> >> > >>>>> + struct kvm_gmem_hugetlb *hgmem;
>> >> >> > >>>>> + pgoff_t aligned_index;
>> >> >> > >>>>> + struct folio *folio;
>> >> >> > >>>>> + int nr_pages;
>> >> >> > >>>>> + int ret;
>> >> >> > >>>>> +
>> >> >> > >>>>> + hgmem = kvm_gmem_hgmem(inode);
>> >> >> > >>>>> + folio = kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool);
>> >> >> > >>>>> + if (IS_ERR(folio))
>> >> >> > >>>>> + return folio;
>> >> >> > >>>>> +
>> >> >> > >>>>> + nr_pages = 1UL << huge_page_order(hgmem->h);
>> >> >> > >>>>> + aligned_index = round_down(index, nr_pages);
>> >> >> > >>>> Maybe a gap here.
>> >> >> > >>>>
>> >> >> > >>>> When a guest_memfd is bound to a slot where slot->base_gfn is not aligned to
>> >> >> > >>>> 2M/1G and slot->gmem.pgoff is 0, even if an index is 2M/1G aligned, the
>> >> >> > >>>> corresponding GFN is not 2M/1G aligned.
>> >> >> > >>>
>> >> >> > >>> Thanks for looking into this.
>> >> >> > >>>
>> >> >> > >>> In 1G page support for guest_memfd, the offset and size are always
>> >> >> > >>> hugepage aligned to the hugepage size requested at guest_memfd creation
>> >> >> > >>> time, and it is true that when binding to a memslot, slot->base_gfn and
>> >> >> > >>> slot->npages may not be hugepage aligned.
>> >> >> > >>>
>> >> >> > >>>>
>> >> >> > >>>> However, TDX requires that private huge pages be 2M aligned in GFN.
>> >> >> > >>>>
>> >> >> > >>>
>> >> >> > >>> IIUC other factors also contribute to determining the mapping level in
>> >> >> > >>> the guest page tables, like lpage_info and .private_max_mapping_level()
>> >> >> > >>> in kvm_x86_ops.
>> >> >> > >>>
>> >> >> > >>> If slot->base_gfn and slot->npages are not hugepage aligned, lpage_info
>> >> >> > >>> will track that and not allow faulting into guest page tables at higher
>> >> >> > >>> granularity.
>> >> >> > >>
>> >> >> > >> lpage_info only checks the alignments of slot->base_gfn and
>> >> >> > >> slot->base_gfn + npages. e.g.,
>> >> >> > >>
>> >> >> > >> if slot->base_gfn is 8K, npages is 8M, then for this slot,
>> >> >> > >> lpage_info[2M][0].disallow_lpage = 1, which is for GFN [4K, 2M+8K);
>> >> >> > >> lpage_info[2M][1].disallow_lpage = 0, which is for GFN [2M+8K, 4M+8K);
>> >> >> > >> lpage_info[2M][2].disallow_lpage = 0, which is for GFN [4M+8K, 6M+8K);
>> >> >> > >> lpage_info[2M][3].disallow_lpage = 1, which is for GFN [6M+8K, 8M+8K);
>> >> >> >
>> >> >> > Should it be?
>> >> >> > lpage_info[2M][0].disallow_lpage = 1, which is for GFN [8K, 2M);
>> >> >> > lpage_info[2M][1].disallow_lpage = 0, which is for GFN [2M, 4M);
>> >> >> > lpage_info[2M][2].disallow_lpage = 0, which is for GFN [4M, 6M);
>> >> >> > lpage_info[2M][3].disallow_lpage = 0, which is for GFN [6M, 8M);
>> >> >> > lpage_info[2M][4].disallow_lpage = 1, which is for GFN [8M, 8M+8K);
>> >> >> Right. Good catch. Thanks!
>> >> >>
>> >> >> Let me update the example as below:
>> >> >> slot->base_gfn is 2 (for GPA 8KB), npages 2048 (for an 8MB range)
>> >> >>
>> >> >> lpage_info[2M][0].disallow_lpage = 1, which is for GPA [8KB, 2MB);
>> >> >> lpage_info[2M][1].disallow_lpage = 0, which is for GPA [2MB, 4MB);
>> >> >> lpage_info[2M][2].disallow_lpage = 0, which is for GPA [4MB, 6MB);
>> >> >> lpage_info[2M][3].disallow_lpage = 0, which is for GPA [6MB, 8MB);
>> >> >> lpage_info[2M][4].disallow_lpage = 1, which is for GPA [8MB, 8MB+8KB);
>> >> >>
>> >> >> lpage_info indicates that a 2MB mapping is allowed to cover GPA 4MB and GPA
>> >> >> 4MB+16KB. However, their aligned_index values lead guest_memfd to allocate two
>> >> >> 2MB folios, whose physical addresses may not be contiguous.
>> >> >>
>> >> >> Additionally, if the guest accesses two GPAs, e.g., GPA 2MB+8KB and GPA 4MB,
>> >> >> KVM could create two 2MB mappings to cover GPA ranges [2MB, 4MB), [4MB, 6MB).
>> >> >> However, guest_memfd just allocates the same 2MB folio for both faults.
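
(To make this concrete, here is my understanding of the index arithmetic,
using the slot->base_gfn = 2, slot->gmem.pgoff = 0 example above, with 2M
hugepages so nr_pages = 512:

  GPA 2MB+8KB: gfn = 514;  index = 514 - 2 + 0 = 512;   aligned_index = 512
  GPA 4MB:     gfn = 1024; index = 1024 - 2 + 0 = 1022; aligned_index = 512

so both faults resolve to the same folio at aligned_index 512, while

  GPA 4MB:      index = 1022; aligned_index = 512
  GPA 4MB+16KB: gfn = 1028; index = 1026; aligned_index = 1024

so a single 2M mapping covering both would span two separately-allocated
folios.)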
>> >> >>
>> >> >>
>> >> >> >
>> >> >> > >>
>> >> >> > >> ---------------------------------------------------------
>> >> >> > >> |      |      |      |      |      |      |      |      |
>> >> >> > >> 8K     2M     2M+8K  4M     4M+8K  6M     6M+8K  8M     8M+8K
>> >> >> > >>
>> >> >> > >> For GFN 6M and GFN 6M+4K, as they both belong to lpage_info[2M][2], huge
>> >> >> > >> page is allowed. Also, they have the same aligned_index 2 in guest_memfd.
>> >> >> > >> So, guest_memfd allocates the same huge folio of 2M order for them.
>> >> >> > > Sorry, sent too fast this morning. The example is not right. The correct
>> >> >> > > one is:
>> >> >> > >
>> >> >> > > For GFN 4M and GFN 4M+16K, lpage_info indicates that 2M is allowed. So,
>> >> >> > > KVM will create a 2M mapping for them.
>> >> >> > >
>> >> >> > > However, in guest_memfd, GFN 4M and GFN 4M+16K do not correspond to the
>> >> >> > > same 2M folio and physical addresses may not be contiguous.
>> >> >
>> >> > Then during binding, the guest_memfd offset misalignment with the
>> >> > hugepage should be the same as the gfn misalignment, i.e.
>> >> >
>> >> > (offset & ~huge_page_mask(h)) == ((slot->base_gfn << PAGE_SHIFT) &
>> >> > ~huge_page_mask(h));
>> >> >
>> >> > For non-guest_memfd-backed scenarios, KVM allows slot gfn ranges that
>> >> > are not hugepage aligned, so guest_memfd should also be able to
>> >> > support non-hugepage aligned memslots.
>> >> >
>> >>
>> >> I drew up a picture [1] which hopefully clarifies this.
>> >>
>> >> Thanks for pointing this out, I understand better now. We will add an
>> >> extra constraint when binding a guest_memfd to a memslot, to check that
>> >> the gfn offset within a hugepage matches the guest_memfd offset within a
>> >> hugepage.
>> > I'm a bit confused.
>> >
>> > As "index = gfn - slot->base_gfn + slot->gmem.pgoff", do you mean you are going
>> > to force "slot->base_gfn == slot->gmem.pgoff" ?
>> >
>> > For some memory region, e.g., "pc.ram", it's divided into 2 parts:
>> > - one with offset 0, size 0x80000000(2G),
>> > positioned at GPA 0, which is below GPA 4G;
>> > - one with offset 0x80000000(2G), size 0x80000000(2G),
>> > positioned at GPA 0x100000000(4G), which is above GPA 4G.
>> >
>> > For the second part, its slot->base_gfn is 0x100000000, while slot->gmem.pgoff
>> > is 0x80000000.
>> >
>>
>> Nope, I don't mean to enforce that they are equal; we just need the
>> offsets within the hugepage to be equal.
>>
>> I edited Vishal's code snippet, perhaps it would help explain better:
>>
>> page_size is the size of the hugepage, so in our example,
>>
>> page_size = SZ_2M;
>> page_mask = ~(page_size - 1);
> page_mask = page_size - 1 ?
>
Yes, thank you!
>> offset_within_page = (slot->gmem.pgoff << PAGE_SHIFT) & page_mask;
>> gfn_within_page = (slot->base_gfn << PAGE_SHIFT) & page_mask;
>>
>> We will enforce that
>>
>> offset_within_page == gfn_within_page;
> For "pc.ram", if it has 2.5G below 4G, it would be configured as follows
> - slot 1: slot->gmem.pgoff=0, base GPA 0, size=2.5G
> - slot 2: slot->gmem.pgoff=2.5G, base GPA 4G, size=1.5G
>
> When binding these two slots to the same guest_memfd created with flag
> KVM_GUEST_MEMFD_HUGE_1GB:
> - binding the 1st slot will succeed;
> - binding the 2nd slot will fail.
>
> What options does userspace have in this scenario?
> It can't reduce the flag to KVM_GUEST_MEMFD_HUGE_2MB. Adjusting the gmem.pgoff
> isn't ideal either.
>
> What about something similar as below?
>
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index d2feacd14786..87c33704a748 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -1842,8 +1842,16 @@ __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
> }
>
> *pfn = folio_file_pfn(folio, index);
> - if (max_order)
> - *max_order = folio_order(folio);
> + if (max_order) {
> + int order;
> +
> + order = folio_order(folio);
> +
> + while (order > 0 && ((slot->base_gfn ^ slot->gmem.pgoff) & ((1 << order) - 1)))
> + order--;
> +
> + *max_order = order;
> + }
>
> *is_prepared = folio_test_uptodate(folio);
> return folio;
>
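Before getting to that: tracing your snippet with the second pc.ram slot
above (slot->base_gfn = 0x100000 for GPA 4G, slot->gmem.pgoff = 0xa0000
for offset 2.5G, and a 1G folio, i.e. folio_order() == 18), if I'm
reading it right:

  base_gfn ^ pgoff = 0x100000 ^ 0xa0000 = 0x1a0000
  order 18: 0x1a0000 & 0x3ffff = 0x20000 != 0  ->  order--
  order 17: 0x1a0000 & 0x1ffff = 0             ->  *max_order = 17

so the fault path would cap the mapping below 1G (effectively 2M on x86)
rather than the bind failing.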
Vishal was wondering how this worked before guest_memfd was introduced,
for other backing memory like HugeTLB.
I then poked around and found this [1]. I will be adding a similar check
for any slot where kvm_slot_can_be_private(slot).
Yan, that should work, right?
[1] https://github.com/torvalds/linux/blob/b6ea1680d0ac0e45157a819c41b46565f4616186/arch/x86/kvm/x86.c#L12996
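
For reference, something along these lines is what I have in mind (a
rough, untested sketch; the helper name and its exact placement in the
bind path are mine, while kvm_gmem_hgmem() and the hstate come from this
series):

  /*
   * Reject binding if the offset into a hugepage differs between the
   * guest_memfd offset and the gfn. base_gfn and pgoff are both in
   * units of small pages, so they can be compared directly.
   */
  static bool kvm_gmem_slot_hugepage_aligned(struct kvm_memory_slot *slot,
                                             struct inode *inode)
  {
          struct hstate *h = kvm_gmem_hgmem(inode)->h;
          unsigned long mask = (huge_page_size(h) >> PAGE_SHIFT) - 1;

          return ((slot->base_gfn ^ slot->gmem.pgoff) & mask) == 0;
  }

kvm_gmem_bind() would then return -EINVAL for any
kvm_slot_can_be_private(slot) slot that fails this check.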
>> >> Adding checks at binding time will allow hugepage-unaligned offsets (to
>> >> be at parity with non-guest_memfd backing memory) but still fix this
>> >> issue.
>> >>
>> >> lpage_info will make sure that ranges near the bounds will be
>> >> fragmented, but the hugepages in the middle will still be mappable as
>> >> hugepages.
>> >>
>> >> [1] https://lpc.events/event/18/contributions/1764/attachments/1409/3706/binding-must-have-same-alignment.svg