From: Dave Hansen <dave.hansen@intel.com>
To: Kai Huang <kai.huang@intel.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: linux-mm@kvack.org, peterz@infradead.org, tglx@linutronix.de,
	seanjc@google.com, pbonzini@redhat.com, dan.j.williams@intel.com,
	rafael.j.wysocki@intel.com, kirill.shutemov@linux.intel.com,
	ying.huang@intel.com, reinette.chatre@intel.com,
	len.brown@intel.com, tony.luck@intel.com, ak@linux.intel.com,
	isaku.yamahata@intel.com, chao.gao@intel.com,
	sathyanarayanan.kuppuswamy@linux.intel.com, bagasdotme@gmail.com,
	sagis@google.com, imammedo@redhat.com
Subject: Re: [PATCH v8 07/16] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory
Date: Fri, 6 Jan 2023 10:18:22 -0800	[thread overview]
Message-ID: <e7b682a1-abdf-ce73-f262-8b7ce946e78e@intel.com> (raw)
In-Reply-To: <8aab33a7db7a408beb403950e21f693b0b0f1f2b.1670566861.git.kai.huang@intel.com>

On 12/8/22 22:52, Kai Huang wrote:
> As a step of initializing the TDX module, the kernel needs to tell the
> TDX module which memory regions can be used by the TDX module as TDX
> guest memory.
> 
> TDX reports a list of "Convertible Memory Region" (CMR) to tell the
> kernel which memory is TDX compatible.  The kernel needs to build a list
> of memory regions (out of CMRs) as "TDX-usable" memory and pass them to
> the TDX module.  Once this is done, those "TDX-usable" memory regions
> are fixed during module's lifetime.
> 
> The initial support of TDX guests will only allocate TDX guest memory
> from the global page allocator.  To keep things simple, just make sure
> all pages in the page allocator are TDX memory.

It's hard to tell what "The initial support of TDX guests" means.  I
*think* you mean "this series".  But, we try not to say "this blah" too
much, so just say this:

	To keep things simple, assume that all TDX-protected memory will
	come from the page allocator.  Make sure all pages in the page
	allocator *are* TDX-usable memory.

> To guarantee that, stash off the memblock memory regions at the time of
> initializing the TDX module as TDX's own usable memory regions, and in
> the meantime, register a TDX memory notifier to reject to online any new
> memory in memory hotplug.

First, this is a run-on sentence.  Second, it isn't really clear what
memblocks have to do with this or why you need to stash them off.
Please explain.

> This approach works as in practice all boot-time present DIMMs are TDX
> convertible memory.  However, if any non-TDX-convertible memory has been
> hot-added (i.e. CXL memory via kmem driver) before initializing the TDX
> module, the module initialization will fail.

I really don't know what this is trying to say.

*How* and *why* does this module initialization failure occur?  How do
you implement it and why is it necessary?

> This can also be enhanced in the future, i.e. by allowing adding non-TDX
> memory to a separate NUMA node.  In this case, the "TDX-capable" nodes
> and the "non-TDX-capable" nodes can co-exist, but the kernel/userspace
> needs to guarantee memory pages for TDX guests are always allocated from
> the "TDX-capable" nodes.

Why does it need to be enhanced?  What's the problem?

> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index dd333b46fafb..b36129183035 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1959,6 +1959,7 @@ config INTEL_TDX_HOST
>  	depends on X86_64
>  	depends on KVM_INTEL
>  	depends on X86_X2APIC
> +	select ARCH_KEEP_MEMBLOCK
>  	help
>  	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
>  	  host and certain physical attacks.  This option enables necessary TDX
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 216fee7144ee..3a841a77fda4 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -1174,6 +1174,8 @@ void __init setup_arch(char **cmdline_p)
>  	 *
>  	 * Moreover, on machines with SandyBridge graphics or in setups that use
>  	 * crashkernel the entire 1M is reserved anyway.
> +	 *
> +	 * Note the host kernel TDX also requires the first 1MB being reserved.
>  	 */
>  	reserve_real_mode();
>  
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index 6fe505c32599..f010402f443d 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -13,6 +13,13 @@
>  #include <linux/errno.h>
>  #include <linux/printk.h>
>  #include <linux/mutex.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/memblock.h>
> +#include <linux/memory.h>
> +#include <linux/minmax.h>
> +#include <linux/sizes.h>
> +#include <linux/pfn.h>
>  #include <asm/pgtable_types.h>
>  #include <asm/msr.h>
>  #include <asm/tdx.h>
> @@ -25,6 +32,12 @@ enum tdx_module_status_t {
>  	TDX_MODULE_ERROR
>  };
>  
> +struct tdx_memblock {
> +	struct list_head list;
> +	unsigned long start_pfn;
> +	unsigned long end_pfn;
> +};
> +
>  static u32 tdx_keyid_start __ro_after_init;
>  static u32 nr_tdx_keyids __ro_after_init;
>  
> @@ -32,6 +45,9 @@ static enum tdx_module_status_t tdx_module_status;
>  /* Prevent concurrent attempts on TDX detection and initialization */
>  static DEFINE_MUTEX(tdx_module_lock);
>  
> +/* All TDX-usable memory regions */
> +static LIST_HEAD(tdx_memlist);
> +
>  /*
>   * tdx_keyid_start and nr_tdx_keyids indicate that TDX is uninitialized.
>   * This is used in TDX initialization error paths to take it from
> @@ -69,6 +85,50 @@ static int __init record_keyid_partitioning(void)
>  	return 0;
>  }
>  
> +static bool is_tdx_memory(unsigned long start_pfn, unsigned long end_pfn)
> +{
> +	struct tdx_memblock *tmb;
> +
> +	/* Empty list means TDX isn't enabled. */
> +	if (list_empty(&tdx_memlist))
> +		return true;
> +
> +	list_for_each_entry(tmb, &tdx_memlist, list) {
> +		/*
> +		 * The new range is TDX memory if it is fully covered by
> +		 * any TDX memory block.
> +		 *
> +		 * Note TDX memory blocks are originated from memblock
> +		 * memory regions, which can only be contiguous when two
> +		 * regions have different NUMA nodes or flags.  Therefore
> +		 * the new range cannot cross multiple TDX memory blocks.
> +		 */
> +		if (start_pfn >= tmb->start_pfn && end_pfn <= tmb->end_pfn)
> +			return true;
> +	}
> +	return false;
> +}

I don't really like that comment.  It should first state its behavior
and assumptions, like:

	This check assumes that the start_pfn<->end_pfn range does not
	cross multiple tdx_memlist entries.

Only then should it describe why that is OK:

	A single memory hotplug event across multiple memblocks (from
	which tdx_memlist entries are derived) is impossible.  ... then
	actually explain
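
Roughly this shape, purely as a sketch of the ordering I mean (the
actual "why" still needs to be filled in, and the wording is up to you):

	/*
	 * This check assumes that the start_pfn<->end_pfn range does
	 * not cross multiple tdx_memlist entries.  A single memory
	 * hotplug event across multiple memblocks (from which
	 * tdx_memlist entries are derived) is impossible because
	 * <the actual reason goes here>.
	 */
	list_for_each_entry(tmb, &tdx_memlist, list) {
		if (start_pfn >= tmb->start_pfn && end_pfn <= tmb->end_pfn)
			return true;
	}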



> +static int tdx_memory_notifier(struct notifier_block *nb, unsigned long action,
> +			       void *v)
> +{
> +	struct memory_notify *mn = v;
> +
> +	if (action != MEM_GOING_ONLINE)
> +		return NOTIFY_OK;
> +
> +	/*
> +	 * Not all memory is compatible with TDX.  Reject
> +	 * to online any incompatible memory.
> +	 */

This comment isn't quite right either.  There might actually be totally
TDX *compatible* memory here.  It just wasn't configured for use with TDX.

Shouldn't this be something more like:

	/*
	 * The TDX memory configuration is static and cannot be
	 * changed.  Reject onlining any memory which is outside
	 * of the static configuration whether it supports TDX or not.
	 */

> +	return is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
> +		NOTIFY_OK : NOTIFY_BAD;
> +}
> +
> +static struct notifier_block tdx_memory_nb = {
> +	.notifier_call = tdx_memory_notifier,
> +};
> +
>  static int __init tdx_init(void)
>  {
>  	int err;
> @@ -89,6 +149,13 @@ static int __init tdx_init(void)
>  		goto no_tdx;
>  	}
>  
> +	err = register_memory_notifier(&tdx_memory_nb);
> +	if (err) {
> +		pr_info("initialization failed: register_memory_notifier() failed (%d)\n",
> +				err);
> +		goto no_tdx;
> +	}
> +
>  	return 0;
>  no_tdx:
>  	clear_tdx();
> @@ -209,6 +276,77 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *sysinfo,
>  	return 0;
>  }
>  
> +/*
> + * Add a memory region as a TDX memory block.  The caller must make sure
> + * all memory regions are added in address ascending order and don't
> + * overlap.
> + */
> +static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
> +			    unsigned long end_pfn)
> +{
> +	struct tdx_memblock *tmb;
> +
> +	tmb = kmalloc(sizeof(*tmb), GFP_KERNEL);
> +	if (!tmb)
> +		return -ENOMEM;
> +
> +	INIT_LIST_HEAD(&tmb->list);
> +	tmb->start_pfn = start_pfn;
> +	tmb->end_pfn = end_pfn;
> +
> +	list_add_tail(&tmb->list, tmb_list);
> +	return 0;
> +}
> +
> +static void free_tdx_memlist(struct list_head *tmb_list)
> +{
> +	while (!list_empty(tmb_list)) {
> +		struct tdx_memblock *tmb = list_first_entry(tmb_list,
> +				struct tdx_memblock, list);
> +
> +		list_del(&tmb->list);
> +		kfree(tmb);
> +	}
> +}

'tdx_memlist' is written only once at boot and then is read-only, right?

It might be nice to mention that so that the lack of locking doesn't
look problematic.
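
Assuming that is the case, even a short note on the declaration would
do it.  Something like this (just a sketch, wording up to you):

	/*
	 * All TDX-usable memory regions.  Built once during module
	 * initialization and never modified afterwards, so readers
	 * need no locking.
	 */
	static LIST_HEAD(tdx_memlist);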

> +/*
> + * Ensure that all memblock memory regions are convertible to TDX
> + * memory.  Once this has been established, stash the memblock
> + * ranges off in a secondary structure because memblock is modified
> + * in memory hotplug while TDX memory regions are fixed.
> + */

Ahh, that's why we need to "shadow" the memblocks.  Can you add a
sentence on this to the changelog, please?

> +static int build_tdx_memlist(struct list_head *tmb_list)
> +{
> +	unsigned long start_pfn, end_pfn;
> +	int i, ret;
> +
> +	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> +		/*
> +		 * The first 1MB is not reported as TDX convertible memory.
> +		 * Although the first 1MB is always reserved and won't end up
> +		 * to the page allocator, it is still in memblock's memory
> +		 * regions.  Skip them manually to exclude them as TDX memory.
> +		 */
> +		start_pfn = max(start_pfn, PHYS_PFN(SZ_1M));
> +		if (start_pfn >= end_pfn)
> +			continue;
> +
> +		/*
> +		 * Add the memory regions as TDX memory.  The regions in
> +		 * memblock has already guaranteed they are in address
> +		 * ascending order and don't overlap.
> +		 */
> +		ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn);
> +		if (ret)
> +			goto err;
> +	}
> +
> +	return 0;
> +err:
> +	free_tdx_memlist(tmb_list);
> +	return ret;
> +}
> +
>  static int init_tdx_module(void)
>  {
>  	/*
> @@ -226,10 +364,25 @@ static int init_tdx_module(void)
>  	if (ret)
>  		goto out;
>  
> +	/*
> +	 * The initial support of TDX guests only allocates memory from
> +	 * the global page allocator.  To keep things simple, just make
> +	 * sure all pages in the page allocator are TDX memory.

I didn't like this in the changelog either.  Try to make this "timeless"
rather than refer to what the support is today.  I gave you example text
above.

> +	 * Build the list of "TDX-usable" memory regions which cover all
> +	 * pages in the page allocator to guarantee that.  Do it while
> +	 * holding mem_hotplug_lock read-lock as the memory hotplug code
> +	 * path reads the @tdx_memlist to reject any new memory.
> +	 */
> +	get_online_mems();

Oh, it actually uses the memory hotplug locking for list protection.
That's at least a bit subtle.  Please document that somewhere in the
functions that actually manipulate the list.

I think it's also worth saying something here about the high-level
effects of what's going on:

	Take a snapshot of the memory configuration (memblocks).  This
	snapshot will be used to enable TDX support for *this* memory
	configuration only.  Use a memory hotplug notifier to ensure
	that no other RAM can be added outside of this configuration.

That's it, right?
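
Pulling those two things together, something along these lines at the
call site (just a sketch; exact wording and placement are up to you):

	/*
	 * Take a snapshot of the memory configuration (memblocks).
	 * This snapshot will be used to enable TDX support for *this*
	 * memory configuration only.  The memory hotplug notifier
	 * rejects onlining any RAM outside of the snapshot.
	 *
	 * Build the snapshot while holding mem_hotplug_lock (via
	 * get_online_mems()) since the hotplug notifier reads
	 * @tdx_memlist.
	 */
	get_online_mems();

	ret = build_tdx_memlist(&tdx_memlist);
	if (ret)
		goto out;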

> +	ret = build_tdx_memlist(&tdx_memlist);
> +	if (ret)
> +		goto out;
> +
>  	/*
>  	 * TODO:
>  	 *
> -	 *  - Build the list of TDX-usable memory regions.
>  	 *  - Construct a list of TDMRs to cover all TDX-usable memory
>  	 *    regions.
>  	 *  - Pick up one TDX private KeyID as the global KeyID.
> @@ -241,6 +394,11 @@ static int init_tdx_module(void)
>  	 */
>  	ret = -EINVAL;
>  out:
> +	/*
> +	 * @tdx_memlist is written here and read at memory hotplug time.
> +	 * Lock out memory hotplug code while building it.
> +	 */
> +	put_online_mems();
>  	return ret;
>  }

You would also be wise to have the folks who do a lot of memory hotplug
work today look at this sooner rather than later.  I _think_ what you
have here is OK, but I'm really rusty on the code itself.


