From: "Gupta, Pankaj" <pankaj.gupta@amd.com>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
Sean Christopherson <seanjc@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Joerg Roedel <jroedel@suse.de>, Ard Biesheuvel <ardb@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@linux.intel.com>,
David Rientjes <rientjes@google.com>,
Vlastimil Babka <vbabka@suse.cz>,
Tom Lendacky <thomas.lendacky@amd.com>,
Thomas Gleixner <tglx@linutronix.de>,
Peter Zijlstra <peterz@infradead.org>,
Paolo Bonzini <pbonzini@redhat.com>,
Ingo Molnar <mingo@redhat.com>,
Varad Gautam <varad.gautam@suse.com>,
Dario Faggioli <dfaggioli@suse.com>,
Dave Hansen <dave.hansen@intel.com>,
Mike Rapoport <rppt@kernel.org>,
David Hildenbrand <david@redhat.com>,
marcelo.cerri@canonical.com, tim.gardner@canonical.com,
khalid.elmously@canonical.com, philip.cox@canonical.com,
x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
Mike Rapoport <rppt@linux.ibm.com>
Subject: Re: [PATCHv7 02/14] mm: Add support for unaccepted memory
Date: Tue, 14 Jun 2022 14:57:29 +0200
Message-ID: <e1c19522-8006-f02e-c8a4-c3d1b8067482@amd.com>
In-Reply-To: <20220614120231.48165-3-kirill.shutemov@linux.intel.com>
On 6/14/2022 2:02 PM, Kirill A. Shutemov wrote:
> UEFI Specification version 2.9 introduces the concept of memory
> acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD
> SEV-SNP, require memory to be accepted before it can be used by the
> guest. Accepting happens via a protocol specific to the Virtual Machine
> platform.
>
> There are several ways the kernel can deal with unaccepted memory:
>
> 1. Accept all the memory during boot. It is easy to implement and
> it doesn't have a runtime cost once the system is booted. The downside
> is a very long boot time.
>
> Acceptance can be parallelized across multiple CPUs to keep it
> manageable (i.e. via DEFERRED_STRUCT_PAGE_INIT), but it tends to
> saturate memory bandwidth and does not scale beyond that point.
>
> 2. Accept a block of memory on first use. It requires more
> infrastructure and changes in the page allocator to make it work, but
> it provides a good boot time.
>
> On-demand memory acceptance means latency spikes every time the kernel
> steps onto a new memory block. The spikes will go away once the
> workload data set size stabilizes or all memory gets accepted.
>
> 3. Accept all memory in the background. Introduce a thread (or several)
> that accepts memory proactively. It will minimize the time the
> system experiences latency spikes on memory allocation while keeping
> boot time low.
>
> This approach cannot function on its own. It is an extension of #2:
> background memory acceptance requires a functional scheduler, but the
> page allocator may need to tap into unaccepted memory before that.
>
> The downside of the approach is that these threads also steal CPU
> cycles and memory bandwidth from the user's workload and may hurt
> user experience.
>
> Implement #2 for now. It is a reasonable default. Some workloads may
> want to use #1 or #3 and they can be implemented later based on user
> demand.
>
> Support of unaccepted memory requires a few changes in core-mm code:
>
> - memblock has to accept memory on allocation;
>
> - the page allocator has to accept memory on the first allocation of
> the page.
>
> The memblock change is trivial.
>
> The page allocator is modified to accept pages on the first allocation.
> The new page type (encoded in the _mapcount) -- PageUnaccepted() -- is
> used to indicate that the page requires acceptance.
>
> An architecture has to provide two helpers if it wants to support
> unaccepted memory:
>
> - accept_memory() makes a range of physical addresses accepted.
>
> - range_contains_unaccepted_memory() checks whether anything within the
> range of physical addresses requires acceptance.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Acked-by: Mike Rapoport <rppt@linux.ibm.com> # memblock
> Reviewed-by: David Hildenbrand <david@redhat.com>
> ---
> include/linux/page-flags.h | 31 +++++++++++++
> mm/internal.h | 12 +++++
> mm/memblock.c | 9 ++++
> mm/page_alloc.c | 89 +++++++++++++++++++++++++++++++++++++-
> 4 files changed, 139 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index e66f7aa3191d..444ba8f4bfa0 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -942,6 +942,14 @@ static inline bool is_page_hwpoison(struct page *page)
> #define PG_offline 0x00000100
> #define PG_table 0x00000200
> #define PG_guard 0x00000400
> +#define PG_unaccepted 0x00000800
> +
> +/*
> + * Page types allowed at page_expected_state()
> + *
> + * PageUnaccepted() will get cleared in post_alloc_hook().
> + */
> +#define PAGE_TYPES_EXPECTED PG_unaccepted
>
> #define PageType(page, flag) \
> ((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
> @@ -967,6 +975,18 @@ static __always_inline void __ClearPage##uname(struct page *page) \
> page->page_type |= PG_##lname; \
> }
>
> +#define PAGE_TYPE_OPS_FALSE(uname) \
> +static __always_inline int Page##uname(struct page *page) \
> +{ \
> + return false; \
> +} \
> +static __always_inline void __SetPage##uname(struct page *page) \
> +{ \
> +} \
> +static __always_inline void __ClearPage##uname(struct page *page) \
> +{ \
> +}
> +
> /*
> * PageBuddy() indicates that the page is free and in the buddy system
> * (see mm/page_alloc.c).
> @@ -997,6 +1017,17 @@ PAGE_TYPE_OPS(Buddy, buddy)
> */
> PAGE_TYPE_OPS(Offline, offline)
>
> +/*
> + * PageUnaccepted() indicates that the page has to be "accepted" before it can
> + * be read or written. The page allocator must call accept_page() before
> + * touching the page or returning it to the caller.
> + */
> +#ifdef CONFIG_UNACCEPTED_MEMORY
> +PAGE_TYPE_OPS(Unaccepted, unaccepted)
> +#else
> +PAGE_TYPE_OPS_FALSE(Unaccepted)
> +#endif
> +
> extern void page_offline_freeze(void);
> extern void page_offline_thaw(void);
> extern void page_offline_begin(void);
> diff --git a/mm/internal.h b/mm/internal.h
> index c0f8fbe0445b..7c1a653edfc8 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -861,4 +861,16 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags);
>
> DECLARE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
>
> +#ifndef CONFIG_UNACCEPTED_MEMORY
> +static inline bool range_contains_unaccepted_memory(phys_addr_t start,
> + phys_addr_t end)
> +{
> + return false;
> +}
> +
> +static inline void accept_memory(phys_addr_t start, phys_addr_t end)
> +{
> +}
> +#endif
> +
> #endif /* __MM_INTERNAL_H */
> diff --git a/mm/memblock.c b/mm/memblock.c
> index e4f03a6e8e56..a1f7f8b304d5 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1405,6 +1405,15 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> */
> kmemleak_alloc_phys(found, size, 0, 0);
>
> + /*
> + * Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP,
> + * require memory to be accepted before it can be used by the
> + * guest.
> + *
> + * Accept the memory of the allocated buffer.
> + */
> + accept_memory(found, found + size);
> +
> return found;
> }
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e008a3df0485..279c2746aaa8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -122,6 +122,12 @@ typedef int __bitwise fpi_t;
> */
> #define FPI_SKIP_KASAN_POISON ((__force fpi_t)BIT(2))
>
> +/*
> + * Check if the page needs to be marked as PageUnaccepted().
> + * Used for the new pages added to the buddy allocator for the first time.
> + */
> +#define FPI_UNACCEPTED_SLOWPATH ((__force fpi_t)BIT(3))
> +
> /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
> static DEFINE_MUTEX(pcp_batch_high_lock);
> #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
> @@ -993,6 +999,35 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
> NULL) != NULL;
> }
>
> +/*
> + * Page acceptance can be very slow. Do not call under critical locks.
> + */
> +static void accept_page(struct page *page, unsigned int order)
> +{
> + phys_addr_t start = page_to_phys(page);
> + int i;
> +
> + if (!PageUnaccepted(page))
> + return;
> +
> + accept_memory(start, start + (PAGE_SIZE << order));
> + __ClearPageUnaccepted(page);
> +
> + /* Assert that there is no PageUnaccepted() on tail pages */
> + if (IS_ENABLED(CONFIG_DEBUG_VM)) {
> + for (i = 1; i < (1 << order); i++)
> + VM_BUG_ON_PAGE(PageUnaccepted(page + i), page + i);
> + }
> +}
> +
> +static bool page_contains_unaccepted(struct page *page, unsigned int order)
> +{
> + phys_addr_t start = page_to_phys(page);
> + phys_addr_t end = start + (PAGE_SIZE << order);
> +
> + return range_contains_unaccepted_memory(start, end);
> +}
> +
> /*
> * Freeing function for a buddy system allocator.
> *
> @@ -1027,6 +1062,7 @@ static inline void __free_one_page(struct page *page,
> unsigned long combined_pfn;
> struct page *buddy;
> bool to_tail;
> + bool page_needs_acceptance = false;
>
> VM_BUG_ON(!zone_is_initialized(zone));
> VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> @@ -1038,6 +1074,11 @@ static inline void __free_one_page(struct page *page,
> VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
> VM_BUG_ON_PAGE(bad_range(zone, page), page);
>
> + if (PageUnaccepted(page)) {
> + page_needs_acceptance = true;
> + __ClearPageUnaccepted(page);
> + }
> +
> while (order < MAX_ORDER - 1) {
> if (compaction_capture(capc, page, order, migratetype)) {
> __mod_zone_freepage_state(zone, -(1 << order),
> @@ -1072,6 +1113,13 @@ static inline void __free_one_page(struct page *page,
> clear_page_guard(zone, buddy, order, migratetype);
> else
> del_page_from_free_list(buddy, zone, order);
> +
> + /* Mark page unaccepted if any of merged pages were unaccepted */
> + if (PageUnaccepted(buddy)) {
> + page_needs_acceptance = true;
> + __ClearPageUnaccepted(buddy);
> + }
> +
> combined_pfn = buddy_pfn & pfn;
> page = page + (combined_pfn - pfn);
> pfn = combined_pfn;
> @@ -1081,6 +1129,23 @@ static inline void __free_one_page(struct page *page,
> done_merging:
> set_buddy_order(page, order);
>
> + /*
> + * The page gets marked as PageUnaccepted() if any of merged-in pages
> + * is PageUnaccepted().
> + *
> + * New pages, just being added to buddy allocator, do not have
> + * PageUnaccepted() set. FPI_UNACCEPTED_SLOWPATH indicates that the
> + * page is new and the page_contains_unaccepted() check is required to
> + * determine if acceptance is required.
> + *
> + * Avoid calling page_contains_unaccepted() if it is known that the page
> + * needs acceptance. It can be costly.
> + */
> + if (!page_needs_acceptance && (fpi_flags & FPI_UNACCEPTED_SLOWPATH))
> + page_needs_acceptance = page_contains_unaccepted(page, order);
> + if (page_needs_acceptance)
> + __SetPageUnaccepted(page);
> +
> if (fpi_flags & FPI_TO_TAIL)
> to_tail = true;
> else if (is_shuffle_order(order))
> @@ -1164,7 +1229,13 @@ int split_free_page(struct page *free_page,
> static inline bool page_expected_state(struct page *page,
> unsigned long check_flags)
> {
> - if (unlikely(atomic_read(&page->_mapcount) != -1))
> + /*
> + * The page must not be mapped to userspace and must not have
> + * a PageType other than listed in PAGE_TYPES_EXPECTED.
> + *
> + * Note, bit cleared means the page type is set.
> + */
> + if (unlikely((atomic_read(&page->_mapcount) | PAGE_TYPES_EXPECTED) != -1))
> return false;
>
> if (unlikely((unsigned long)page->mapping |
> @@ -1669,7 +1740,9 @@ void __free_pages_core(struct page *page, unsigned int order)
> * Bypass PCP and place fresh pages right to the tail, primarily
> * relevant for memory onlining.
> */
> - __free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);
> + __free_pages_ok(page, order,
> + FPI_TO_TAIL | FPI_SKIP_KASAN_POISON |
> + FPI_UNACCEPTED_SLOWPATH);
> }
>
> #ifdef CONFIG_NUMA
> @@ -1822,6 +1895,9 @@ static void __init deferred_free_range(unsigned long pfn,
> return;
> }
>
> + /* Accept chunks smaller than page-block upfront */
> + accept_memory(PFN_PHYS(pfn), PFN_PHYS(pfn + nr_pages));
> +
> for (i = 0; i < nr_pages; i++, page++, pfn++) {
> if ((pfn & (pageblock_nr_pages - 1)) == 0)
> set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> @@ -2281,6 +2357,13 @@ static inline void expand(struct zone *zone, struct page *page,
> if (set_page_guard(zone, &page[size], high, migratetype))
> continue;
>
> + /*
> + * Transfer PageUnaccepted() to the newly split pages so
> + * they can be accepted after dropping the zone lock.
> + */
> + if (PageUnaccepted(page))
> + __SetPageUnaccepted(&page[size]);
> +
> add_to_free_list(&page[size], zone, high, migratetype);
> set_buddy_order(&page[size], high);
> }
> @@ -2411,6 +2494,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> */
> kernel_unpoison_pages(page, 1 << order);
>
> + accept_page(page, order);
> +
> /*
> * As memory initialization might be integrated into KASAN,
> * KASAN unpoisoning and memory initializion code must be
I reviewed the previous version (v6) of this patch. There seems to be no
change in this version except the addition of the '!PageUnaccepted' check
in the accept_page() function and the rename of the
'page_contains_unaccepted' function.
Acked-by: Pankaj Gupta <pankaj.gupta@amd.com>