From: Yuan Yao <yuan.yao@linux.intel.com>
To: Kai Huang <kai.huang@intel.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
linux-mm@kvack.org, x86@kernel.org, dave.hansen@intel.com,
kirill.shutemov@linux.intel.com, tony.luck@intel.com,
peterz@infradead.org, tglx@linutronix.de, bp@alien8.de,
mingo@redhat.com, hpa@zytor.com, seanjc@google.com,
pbonzini@redhat.com, david@redhat.com, dan.j.williams@intel.com,
rafael.j.wysocki@intel.com, ashok.raj@intel.com,
reinette.chatre@intel.com, len.brown@intel.com,
ak@linux.intel.com, isaku.yamahata@intel.com,
ying.huang@intel.com, chao.gao@intel.com,
sathyanarayanan.kuppuswamy@linux.intel.com, nik.borisov@suse.com,
bagasdotme@gmail.com, sagis@google.com, imammedo@redhat.com
Subject: Re: [PATCH v12 13/22] x86/virt/tdx: Designate reserved areas for all TDMRs
Date: Wed, 5 Jul 2023 13:29:38 +0800
Message-ID: <20230705052938.g5igtdcbklfd7bkp@yy-desk-7060>
In-Reply-To: <932971243b1b842a59d3fb2b6506823bd732db18.1687784645.git.kai.huang@intel.com>
On Tue, Jun 27, 2023 at 02:12:43AM +1200, Kai Huang wrote:
> As the last step of constructing TDMRs, populate reserved areas for all
> TDMRs.  For each TDMR, put all memory holes within it into its reserved
> areas, and for all PAMTs which overlap with it, put the overlapping
> parts into its reserved areas too.
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>
> v11 -> v12:
> - Code change due to tdmr_get_pamt() change from returning pfn/npages to
> base/size
> - Added Kirill's tag
>
> v10 -> v11:
> - No update
>
> v9 -> v10:
> - No change.
>
> v8 -> v9:
> - Added comment around 'tdmr_add_rsvd_area()' to point out it doesn't do
> optimization to save reserved areas. (Dave).
>
> v7 -> v8: (Dave)
> - "set_up" -> "populate" in function name change (Dave).
> - Improved comment suggested by Dave.
> - Other changes due to 'struct tdmr_info_list'.
>
> v6 -> v7:
> - No change.
>
> v5 -> v6:
> - Rebase due to using 'tdx_memblock' instead of memblock.
> - Split tdmr_set_up_rsvd_areas() into two functions to handle memory
> hole and PAMT respectively.
> - Added Isaku's Reviewed-by.
>
>
> ---
> arch/x86/virt/vmx/tdx/tdx.c | 217 ++++++++++++++++++++++++++++++++++--
> 1 file changed, 209 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index fd5417577f26..2bcace5cb25c 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -25,6 +25,7 @@
> #include <linux/sizes.h>
> #include <linux/pfn.h>
> #include <linux/align.h>
> +#include <linux/sort.h>
> #include <asm/msr-index.h>
> #include <asm/msr.h>
> #include <asm/archrandom.h>
> @@ -634,6 +635,207 @@ static unsigned long tdmrs_count_pamt_kb(struct tdmr_info_list *tdmr_list)
> return pamt_size / 1024;
> }
>
> +static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx, u64 addr,
> + u64 size, u16 max_reserved_per_tdmr)
> +{
> + struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas;
> + int idx = *p_idx;
> +
> + /* Reserved area must be 4K aligned in offset and size */
> + if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK))
> + return -EINVAL;
> +
> + if (idx >= max_reserved_per_tdmr) {
> + pr_warn("initialization failed: TDMR [0x%llx, 0x%llx): reserved areas exhausted.\n",
> + tdmr->base, tdmr_end(tdmr));
> + return -ENOSPC;
> + }
> +
> +	/*
> +	 * Consume one reserved area per call.  Make no effort to
> +	 * optimize, e.g. by merging contiguous reserved areas into
> +	 * a single one.
> +	 */
> + rsvd_areas[idx].offset = addr - tdmr->base;
> + rsvd_areas[idx].size = size;
> +
> + *p_idx = idx + 1;
> +
> + return 0;
> +}
> +
> +/*
> + * Go through @tmb_list to find holes between memory areas. If any of
> + * those holes fall within @tdmr, set up a TDMR reserved area to cover
> + * the hole.
> + */
> +static int tdmr_populate_rsvd_holes(struct list_head *tmb_list,
> + struct tdmr_info *tdmr,
> + int *rsvd_idx,
> + u16 max_reserved_per_tdmr)
> +{
> + struct tdx_memblock *tmb;
> + u64 prev_end;
> + int ret;
> +
> + /*
> + * Start looking for reserved blocks at the
> + * beginning of the TDMR.
> + */
> + prev_end = tdmr->base;
> + list_for_each_entry(tmb, tmb_list, list) {
> + u64 start, end;
> +
> + start = PFN_PHYS(tmb->start_pfn);
> + end = PFN_PHYS(tmb->end_pfn);
> +
> + /* Break if this region is after the TDMR */
> + if (start >= tdmr_end(tdmr))
> + break;
> +
> + /* Exclude regions before this TDMR */
> + if (end < tdmr->base)
> + continue;
> +
> + /*
> + * Skip over memory areas that
> + * have already been dealt with.
> + */
> + if (start <= prev_end) {
> + prev_end = end;
> + continue;
> + }
> +
> + /* Add the hole before this region */
> + ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
> + start - prev_end,
> + max_reserved_per_tdmr);
> + if (ret)
> + return ret;
> +
> + prev_end = end;
> + }
> +
> + /* Add the hole after the last region if it exists. */
> + if (prev_end < tdmr_end(tdmr)) {
> + ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
> + tdmr_end(tdmr) - prev_end,
> + max_reserved_per_tdmr);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
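No functional concern with the hole-walking above.  For anyone following
along, the loop reduces to a gap-scan over a sorted, disjoint block list;
below is a user-space sketch of the same logic (the types and names here
are simplified stand-ins I made up, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for 'struct tdx_memblock' and
 * 'struct tdmr_reserved_area'; blocks are sorted and disjoint. */
struct block { uint64_t start, end; };
struct rsvd  { uint64_t offset, size; };

/*
 * Walk the sorted block list and record every gap falling inside
 * [tdmr_base, tdmr_end) as a reserved area, including a trailing gap
 * after the last block.  Returns the number of areas produced, or -1
 * when the @max slots are exhausted.
 */
int populate_holes(const struct block *blks, int nr,
		   uint64_t tdmr_base, uint64_t tdmr_end,
		   struct rsvd *out, int max)
{
	uint64_t prev_end = tdmr_base;
	int i, idx = 0;

	for (i = 0; i < nr; i++) {
		if (blks[i].start >= tdmr_end)
			break;			/* block entirely after the TDMR */
		if (blks[i].end < tdmr_base)
			continue;		/* block entirely before the TDMR */
		if (blks[i].start <= prev_end) {
			prev_end = blks[i].end;	/* contiguous: no hole here */
			continue;
		}
		if (idx >= max)
			return -1;
		out[idx].offset = prev_end - tdmr_base;
		out[idx].size = blks[i].start - prev_end;
		idx++;
		prev_end = blks[i].end;
	}
	if (prev_end < tdmr_end) {		/* hole after the last block */
		if (idx >= max)
			return -1;
		out[idx].offset = prev_end - tdmr_base;
		out[idx].size = tdmr_end - prev_end;
		idx++;
	}
	return idx;
}
```

E.g. a TDMR [0x0, 0x1000) covering blocks [0x0, 0x400) and [0x600, 0xa00)
yields two reserved areas: the gap between the blocks and the gap after
the last block.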
> +
> +/*
> + * Go through @tdmr_list to find all PAMTs. If any of those PAMTs
> + * overlaps with @tdmr, set up a TDMR reserved area to cover the
> + * overlapping part.
> + */
> +static int tdmr_populate_rsvd_pamts(struct tdmr_info_list *tdmr_list,
> + struct tdmr_info *tdmr,
> + int *rsvd_idx,
> + u16 max_reserved_per_tdmr)
> +{
> + int i, ret;
> +
> + for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
> + struct tdmr_info *tmp = tdmr_entry(tdmr_list, i);
> + unsigned long pamt_base, pamt_size, pamt_end;
> +
> + tdmr_get_pamt(tmp, &pamt_base, &pamt_size);
> + /* Each TDMR must already have PAMT allocated */
> +		WARN_ON_ONCE(!pamt_size || !pamt_base);
> +
> + pamt_end = pamt_base + pamt_size;
> + /* Skip PAMTs outside of the given TDMR */
> + if ((pamt_end <= tdmr->base) ||
> + (pamt_base >= tdmr_end(tdmr)))
> + continue;
> +
> + /* Only mark the part within the TDMR as reserved */
> + if (pamt_base < tdmr->base)
> + pamt_base = tdmr->base;
> + if (pamt_end > tdmr_end(tdmr))
> + pamt_end = tdmr_end(tdmr);
> +
> + ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, pamt_base,
> + pamt_end - pamt_base,
> + max_reserved_per_tdmr);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
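The PAMT clamping in this function is a standard interval intersection,
which can be checked in isolation.  A user-space sketch (again with
hypothetical names, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Clamp a PAMT range [pamt_base, pamt_end) to a TDMR range
 * [tdmr_base, tdmr_end).  Returns 1 and writes the overlapping part to
 * (*base, *size), or 0 when the two ranges are disjoint.
 */
int pamt_overlap(uint64_t pamt_base, uint64_t pamt_end,
		 uint64_t tdmr_base, uint64_t tdmr_end,
		 uint64_t *base, uint64_t *size)
{
	/* Skip PAMTs entirely outside of the TDMR */
	if (pamt_end <= tdmr_base || pamt_base >= tdmr_end)
		return 0;

	/* Only keep the part within the TDMR */
	if (pamt_base < tdmr_base)
		pamt_base = tdmr_base;
	if (pamt_end > tdmr_end)
		pamt_end = tdmr_end;

	*base = pamt_base;
	*size = pamt_end - pamt_base;
	return 1;
}
```

A PAMT [0x800, 0x1800) straddling the end of a TDMR [0x0, 0x1000) is
clamped to [0x800, 0x1000); a PAMT past the end contributes nothing.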
> +
> +/* Compare function called by sort() for TDMR reserved areas */
> +static int rsvd_area_cmp_func(const void *a, const void *b)
> +{
> + struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a;
> + struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b;
> +
> + if (r1->offset + r1->size <= r2->offset)
> + return -1;
> + if (r1->offset >= r2->offset + r2->size)
> + return 1;
> +
> +	/* Reserved areas must not overlap; the caller must guarantee that. */
> + WARN_ON_ONCE(1);
> + return -1;
> +}
> +
> +/*
> + * Populate reserved areas for the given @tdmr, including memory holes
> + * (via @tmb_list) and PAMTs (via @tdmr_list).
> + */
> +static int tdmr_populate_rsvd_areas(struct tdmr_info *tdmr,
> + struct list_head *tmb_list,
> + struct tdmr_info_list *tdmr_list,
> + u16 max_reserved_per_tdmr)
> +{
> + int ret, rsvd_idx = 0;
> +
> + ret = tdmr_populate_rsvd_holes(tmb_list, tdmr, &rsvd_idx,
> + max_reserved_per_tdmr);
> + if (ret)
> + return ret;
> +
> + ret = tdmr_populate_rsvd_pamts(tdmr_list, tdmr, &rsvd_idx,
> + max_reserved_per_tdmr);
> + if (ret)
> + return ret;
> +
> + /* TDX requires reserved areas listed in address ascending order */
> + sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct tdmr_reserved_area),
> + rsvd_area_cmp_func, NULL);
> +
> + return 0;
> +}
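One observation on the sort: since reserved areas are guaranteed not to
overlap, comparing the start offsets alone gives the same total order as
the interval comparator above.  A user-space sketch with qsort() in place
of the kernel's sort() (hypothetical names, not a suggested change):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for 'struct tdmr_reserved_area' */
struct rsvd { uint64_t offset, size; };

/* For non-overlapping intervals, ordering by start offset is enough. */
int rsvd_cmp(const void *a, const void *b)
{
	const struct rsvd *r1 = a;
	const struct rsvd *r2 = b;

	if (r1->offset < r2->offset)
		return -1;
	if (r1->offset > r2->offset)
		return 1;
	return 0;
}

/* Sort reserved areas into the address-ascending order TDX requires. */
void sort_rsvd(struct rsvd *areas, int nr)
{
	qsort(areas, nr, sizeof(*areas), rsvd_cmp);
}
```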
> +
> +/*
> + * Populate reserved areas for all TDMRs in @tdmr_list, including memory
> + * holes (via @tmb_list) and PAMTs.
> + */
> +static int tdmrs_populate_rsvd_areas_all(struct tdmr_info_list *tdmr_list,
> + struct list_head *tmb_list,
> + u16 max_reserved_per_tdmr)
> +{
> + int i;
> +
> + for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
> + int ret;
> +
> + ret = tdmr_populate_rsvd_areas(tdmr_entry(tdmr_list, i),
> + tmb_list, tdmr_list, max_reserved_per_tdmr);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> /*
> * Construct a list of TDMRs on the preallocated space in @tdmr_list
> * to cover all TDX memory regions in @tmb_list based on the TDX module
> @@ -653,14 +855,13 @@ static int construct_tdmrs(struct list_head *tmb_list,
> sysinfo->pamt_entry_size);
> if (ret)
> return ret;
> - /*
> - * TODO:
> - *
> - * - Designate reserved areas for each TDMR.
> - *
> - * Return -EINVAL until constructing TDMRs is done
> - */
> - return -EINVAL;
> +
> + ret = tdmrs_populate_rsvd_areas_all(tdmr_list, tmb_list,
> + sysinfo->max_reserved_per_tdmr);
> + if (ret)
> + tdmrs_free_pamt_all(tdmr_list);
> +
> + return ret;
> }
>
> static int init_tdx_module(void)
> --
> 2.40.1
>