From: Paolo Bonzini <pbonzini@redhat.com>
To: Brijesh Singh <brijesh.singh@amd.com>, Borislav Petkov <bp@suse.de>
Cc: simon.guinot@sequanux.org, linux-efi@vger.kernel.org,
	kvm@vger.kernel.org, rkrcmar@redhat.com,
	matt@codeblueprint.co.uk, linux-pci@vger.kernel.org,
	linus.walleij@linaro.org, gary.hook@amd.com, linux-mm@kvack.org,
	paul.gortmaker@windriver.com, hpa@zytor.com, cl@linux.com,
	dan.j.williams@intel.com, aarcange@redhat.com,
	sfr@canb.auug.org.au, andriy.shevchenko@linux.intel.com,
	herbert@gondor.apana.org.au, bhe@redhat.com, xemul@parallels.com,
	joro@8bytes.org, x86@kernel.org, peterz@infradead.org,
	piotr.luc@intel.com, mingo@redhat.com, msalter@redhat.com,
	ross.zwisler@linux.intel.com, dyoung@redhat.com,
	thomas.lendacky@amd.com, jroedel@suse.de, keescook@chromium.org,
	arnd@arndb.de, toshi.kani@hpe.com,
	mathieu.desnoyers@efficios.com, luto@kernel.org,
	devel@linuxdriverproject.org, bhelgaas@google.com,
	tglx@linutronix.de, mchehab@kernel.org, iamjoonsoo.kim@lge.com,
	labbott@fedoraproject.org, tony.luck@intel.com,
	alexandre.bounine@idt.com, kuleshovmail@gmail.com,
	linux-kernel@vger.kernel.org, mcgrof@kernel.org, mst@redhat.com,
	linux-crypto@vger.kernel.org, tj@kernel.org,
	akpm@linux-foundation.org, davem@davemloft.net
Subject: Re: [RFC PATCH v2 14/32] x86: mm: Provide support to use memblock when splitting large pages
Date: Thu, 16 Mar 2017 14:15:19 +0100	[thread overview]
Message-ID: <c3b12452-9533-87c8-4543-8f12f5c90fe3@redhat.com> (raw)
In-Reply-To: <cb6a9a56-2c52-d98d-3ff6-3b61d0e5875e@amd.com>

On 10/03/2017 23:41, Brijesh Singh wrote:
>> Maybe there's a reason this fires:
>>
>> WARNING: modpost: Found 2 section mismatch(es).
>> To see full details build your kernel with:
>> 'make CONFIG_DEBUG_SECTION_MISMATCH=y'
>>
>> WARNING: vmlinux.o(.text+0x48edc): Section mismatch in reference from
>> the function __change_page_attr() to the function
>> .init.text:memblock_alloc()
>> The function __change_page_attr() references
>> the function __init memblock_alloc().
>> This is often because __change_page_attr lacks a __init
>> annotation or the annotation of memblock_alloc is wrong.
>>
>> WARNING: vmlinux.o(.text+0x491d1): Section mismatch in reference from
>> the function __change_page_attr() to the function
>> .meminit.text:memblock_free()
>> The function __change_page_attr() references
>> the function __meminit memblock_free().
>> This is often because __change_page_attr lacks a __meminit
>> annotation or the annotation of memblock_free is wrong.
>> 
>> But maybe Paolo might have an even better idea...
> 
> I am sure he will have a better idea :)
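
For reference, the mismatch is the generic pattern of resident code
calling code that is discarded after init; a minimal illustration, with
made-up function names:

	void __init early_only(void)	/* .init.text, freed after boot */
	{
	}

	void resident(void)		/* .text, can run at any time */
	{
		early_only();		/* -> modpost section mismatch */
	}

modpost flags it because resident() could still run after early_only()
has been freed.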

Not sure if it's better or worse, but an alternative idea is to turn
__change_page_attr and __change_page_attr_set_clr inside out, so that:
1) the alloc_pages/__free_page calls happen in __change_page_attr_set_clr;
2) __change_page_attr_set_clr overall does not become more complex.
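
In flow terms the loop ends up shaped like this (just the attached
patch, simplified):

	ret = __change_page_attr(cpa, &kpte, vaddr, checkalias);
	if (ret < 0)
		return ret;
	if (ret) {			/* large page must be split */
		base = alloc_page(GFP_KERNEL | __GFP_NOTRACK);
		if (!base)
			return -ENOMEM;
		if (__split_large_page(cpa, kpte, vaddr, base))
			__free_page(base);	/* raced with another CPU */
		goto repeat;
	}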

Then you can introduce __early_change_page_attr_set_clr and/or
early_kernel_map_pages_in_pgd, for use in your next patches.  They would
use the memblock allocator instead of alloc_pages/__free_page.
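
An untested sketch of the allocation leg of such an early variant (the
helper name is made up; it assumes the current memblock API, where
memblock_alloc() hands back a physical address, 0 on failure):

	static int __init __early_cpa_split_page(struct cpa_data *cpa,
						 pte_t *kpte, unsigned long vaddr)
	{
		phys_addr_t pa = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
		struct page *base;

		if (!pa)
			return -ENOMEM;

		base = pfn_to_page(pa >> PAGE_SHIFT);
		if (__split_large_page(cpa, kpte, vaddr, base))
			memblock_free(pa, PAGE_SIZE);	/* raced, caller retries */

		return 0;
	}

Everything else in the loop stays shared with the normal path, and the
early variant can be __init throughout, so modpost stops complaining.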

The attached patch is compile-tested only and almost certainly has some
thinko in it.  But it even shaves a few lines off the code, so the idea
might have some merit.

Paolo

[-- Attachment #2: alloc-in-cpa-set-clr.patch --]

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 28d42130243c..953c8e697562 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -490,11 +490,12 @@ static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
 }
 
 static int
-try_preserve_large_page(pte_t *kpte, unsigned long address,
+try_preserve_large_page(pte_t **p_kpte, unsigned long address,
 			struct cpa_data *cpa)
 {
 	unsigned long nextpage_addr, numpages, pmask, psize, addr, pfn, old_pfn;
-	pte_t new_pte, old_pte, *tmp;
+	pte_t *kpte = *p_kpte;
+	pte_t new_pte, old_pte;
 	pgprot_t old_prot, new_prot, req_prot;
 	int i, do_split = 1;
 	enum pg_level level;
@@ -507,8 +508,8 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	 * Check for races, another CPU might have split this page
 	 * up already:
 	 */
-	tmp = _lookup_address_cpa(cpa, address, &level);
-	if (tmp != kpte)
+	*p_kpte = _lookup_address_cpa(cpa, address, &level);
+	if (*p_kpte != kpte)
 		goto out_unlock;
 
 	switch (level) {
@@ -634,17 +635,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	unsigned int i, level;
 	pte_t *tmp;
 	pgprot_t ref_prot;
+	int retry = 1;
 
+	if (!debug_pagealloc_enabled())
+		spin_lock(&cpa_lock);
 	spin_lock(&pgd_lock);
 	/*
 	 * Check for races, another CPU might have split this page
 	 * up for us already:
 	 */
 	tmp = _lookup_address_cpa(cpa, address, &level);
-	if (tmp != kpte) {
-		spin_unlock(&pgd_lock);
-		return 1;
-	}
+	if (tmp != kpte)
+		goto out;
 
 	paravirt_alloc_pte(&init_mm, page_to_pfn(base));
 
@@ -671,10 +673,11 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 		break;
 
 	default:
-		spin_unlock(&pgd_lock);
-		return 1;
+		goto out;
 	}
 
+	retry = 0;
+
 	/*
 	 * Set the GLOBAL flags only if the PRESENT flag is set
 	 * otherwise pmd/pte_present will return true even on a non
@@ -718,28 +721,34 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	 * going on.
 	 */
 	__flush_tlb_all();
-	spin_unlock(&pgd_lock);
 
-	return 0;
-}
-
-static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
-			    unsigned long address)
-{
-	struct page *base;
+out:
+	spin_unlock(&pgd_lock);
 
+	/*
+	 * Do a global flush tlb after splitting the large page
+ 	 * and before we do the actual change page attribute in the PTE.
+ 	 *
+ 	 * With out this, we violate the TLB application note, that says
+ 	 * "The TLBs may contain both ordinary and large-page
+	 *  translations for a 4-KByte range of linear addresses. This
+	 *  may occur if software modifies the paging structures so that
+	 *  the page size used for the address range changes. If the two
+	 *  translations differ with respect to page frame or attributes
+	 *  (e.g., permissions), processor behavior is undefined and may
+	 *  be implementation-specific."
+ 	 *
+ 	 * We do this global tlb flush inside the cpa_lock, so that we
+	 * don't allow any other cpu, with stale tlb entries change the
+	 * page attribute in parallel, that also falls into the
+	 * just split large page entry.
+ 	 */
+	if (!retry)
+		flush_tlb_all();
 	if (!debug_pagealloc_enabled())
 		spin_unlock(&cpa_lock);
-	base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
-	if (!debug_pagealloc_enabled())
-		spin_lock(&cpa_lock);
-	if (!base)
-		return -ENOMEM;
-
-	if (__split_large_page(cpa, kpte, address, base))
-		__free_page(base);
 
-	return 0;
+	return retry;
 }
 
 static bool try_to_free_pte_page(pte_t *pte)
@@ -1166,30 +1175,26 @@ static int __cpa_process_fault(struct cpa_data *cpa, unsigned long vaddr,
 	}
 }
 
-static int __change_page_attr(struct cpa_data *cpa, int primary)
+static int __change_page_attr(struct cpa_data *cpa, pte_t **p_kpte, unsigned long address,
+			      int primary)
 {
-	unsigned long address;
-	int do_split, err;
 	unsigned int level;
 	pte_t *kpte, old_pte;
+	int err = 0;
 
-	if (cpa->flags & CPA_PAGES_ARRAY) {
-		struct page *page = cpa->pages[cpa->curpage];
-		if (unlikely(PageHighMem(page)))
-			return 0;
-		address = (unsigned long)page_address(page);
-	} else if (cpa->flags & CPA_ARRAY)
-		address = cpa->vaddr[cpa->curpage];
-	else
-		address = *cpa->vaddr;
-repeat:
-	kpte = _lookup_address_cpa(cpa, address, &level);
-	if (!kpte)
-		return __cpa_process_fault(cpa, address, primary);
+	if (!debug_pagealloc_enabled())
+		spin_lock(&cpa_lock);
+	*p_kpte = kpte = _lookup_address_cpa(cpa, address, &level);
+	if (!kpte) {
+		err = __cpa_process_fault(cpa, address, primary);
+		goto out;
+	}
 
 	old_pte = *kpte;
-	if (pte_none(old_pte))
-		return __cpa_process_fault(cpa, address, primary);
+	if (pte_none(old_pte)) {
+		err = __cpa_process_fault(cpa, address, primary);
+		goto out;
+	}
 
 	if (level == PG_LEVEL_4K) {
 		pte_t new_pte;
@@ -1228,59 +1233,27 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
 			cpa->flags |= CPA_FLUSHTLB;
 		}
 		cpa->numpages = 1;
-		return 0;
+		goto out;
 	}
 
 	/*
 	 * Check, whether we can keep the large page intact
 	 * and just change the pte:
 	 */
-	do_split = try_preserve_large_page(kpte, address, cpa);
-	/*
-	 * When the range fits into the existing large page,
-	 * return. cp->numpages and cpa->tlbflush have been updated in
-	 * try_large_page:
-	 */
-	if (do_split <= 0)
-		return do_split;
-
-	/*
-	 * We have to split the large page:
-	 */
-	err = split_large_page(cpa, kpte, address);
-	if (!err) {
-		/*
-	 	 * Do a global flush tlb after splitting the large page
-	 	 * and before we do the actual change page attribute in the PTE.
-	 	 *
-	 	 * With out this, we violate the TLB application note, that says
-	 	 * "The TLBs may contain both ordinary and large-page
-		 *  translations for a 4-KByte range of linear addresses. This
-		 *  may occur if software modifies the paging structures so that
-		 *  the page size used for the address range changes. If the two
-		 *  translations differ with respect to page frame or attributes
-		 *  (e.g., permissions), processor behavior is undefined and may
-		 *  be implementation-specific."
-	 	 *
-	 	 * We do this global tlb flush inside the cpa_lock, so that we
-		 * don't allow any other cpu, with stale tlb entries change the
-		 * page attribute in parallel, that also falls into the
-		 * just split large page entry.
-	 	 */
-		flush_tlb_all();
-		goto repeat;
-	}
+	err = try_preserve_large_page(p_kpte, address, cpa);
 
+out:
+	if (!debug_pagealloc_enabled())
+		spin_unlock(&cpa_lock);
 	return err;
 }
 
 static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias);
 
-static int cpa_process_alias(struct cpa_data *cpa)
+static int cpa_process_alias(struct cpa_data *cpa, unsigned long vaddr)
 {
 	struct cpa_data alias_cpa;
 	unsigned long laddr = (unsigned long)__va(cpa->pfn << PAGE_SHIFT);
-	unsigned long vaddr;
 	int ret;
 
 	if (!pfn_range_is_mapped(cpa->pfn, cpa->pfn + 1))
@@ -1290,16 +1263,6 @@ static int cpa_process_alias(struct cpa_data *cpa)
 	 * No need to redo, when the primary call touched the direct
 	 * mapping already:
 	 */
-	if (cpa->flags & CPA_PAGES_ARRAY) {
-		struct page *page = cpa->pages[cpa->curpage];
-		if (unlikely(PageHighMem(page)))
-			return 0;
-		vaddr = (unsigned long)page_address(page);
-	} else if (cpa->flags & CPA_ARRAY)
-		vaddr = cpa->vaddr[cpa->curpage];
-	else
-		vaddr = *cpa->vaddr;
-
 	if (!(within(vaddr, PAGE_OFFSET,
 		    PAGE_OFFSET + (max_pfn_mapped << PAGE_SHIFT)))) {
 
@@ -1338,33 +1301,64 @@ static int cpa_process_alias(struct cpa_data *cpa)
 	return 0;
 }
 
+static unsigned long cpa_address(struct cpa_data *cpa, unsigned long numpages)
+{
+	/*
+	 * Store the remaining nr of pages for the large page
+	 * preservation check.
+	 */
+	/* for array changes, we can't use large page */
+	cpa->numpages = 1;
+	if (cpa->flags & CPA_PAGES_ARRAY) {
+		struct page *page = cpa->pages[cpa->curpage];
+		if (unlikely(PageHighMem(page)))
+			return -EINVAL;
+		return (unsigned long)page_address(page);
+	} else if (cpa->flags & CPA_ARRAY) {
+		return cpa->vaddr[cpa->curpage];
+	} else {
+		cpa->numpages = numpages;
+		return *cpa->vaddr;
+	}
+}
+
+static void cpa_advance(struct cpa_data *cpa)
+{
+	if (cpa->flags & (CPA_PAGES_ARRAY | CPA_ARRAY))
+		cpa->curpage++;
+	else
+		*cpa->vaddr += cpa->numpages * PAGE_SIZE;
+}
+
 static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 {
 	unsigned long numpages = cpa->numpages;
+	unsigned long vaddr;
+	struct page *base;
+	pte_t *kpte;
 	int ret;
 
 	while (numpages) {
-		/*
-		 * Store the remaining nr of pages for the large page
-		 * preservation check.
-		 */
-		cpa->numpages = numpages;
-		/* for array changes, we can't use large page */
-		if (cpa->flags & (CPA_ARRAY | CPA_PAGES_ARRAY))
-			cpa->numpages = 1;
-
-		if (!debug_pagealloc_enabled())
-			spin_lock(&cpa_lock);
-		ret = __change_page_attr(cpa, checkalias);
-		if (!debug_pagealloc_enabled())
-			spin_unlock(&cpa_lock);
-		if (ret)
-			return ret;
-
-		if (checkalias) {
-			ret = cpa_process_alias(cpa);
-			if (ret)
+		vaddr = cpa_address(cpa, numpages);
+		if (!IS_ERR_VALUE(vaddr)) {
+repeat:
+			ret = __change_page_attr(cpa, &kpte, vaddr, checkalias);
+			if (ret < 0)
 				return ret;
+			if (ret) {
+				base = alloc_page(GFP_KERNEL|__GFP_NOTRACK);
+				if (!base)
+					return -ENOMEM;
+				if (__split_large_page(cpa, kpte, vaddr, base))
+					__free_page(base);
+				goto repeat;
+			}
+
+			if (checkalias) {
+				ret = cpa_process_alias(cpa, vaddr);
+				if (ret < 0)
+					return ret;
+			}
 		}
 
 		/*
@@ -1374,11 +1368,7 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 		 */
 		BUG_ON(cpa->numpages > numpages || !cpa->numpages);
 		numpages -= cpa->numpages;
-		if (cpa->flags & (CPA_PAGES_ARRAY | CPA_ARRAY))
-			cpa->curpage++;
-		else
-			*cpa->vaddr += cpa->numpages * PAGE_SIZE;
-
+		cpa_advance(cpa);
 	}
 	return 0;
 }
