Subject: Re: [PATCH v9 04/38] x86/CPU/AMD: Add the Secure Memory Encryption CPU feature
From: Tom Lendacky
Date: Tue, 11 Jul 2017 10:12:41 -0500
Message-ID: <602d1182-6f18-5954-c1d9-5f28e7b447b5@amd.com>
References: <20170707133804.29711.1616.stgit@tlendack-t1.amdoffice.net>
 <20170707133850.29711.29549.stgit@tlendack-t1.amdoffice.net>
To: Brian Gerst
Cc: linux-arch, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, the arch/x86 maintainers,
 kexec@lists.infradead.org, Linux Kernel Mailing List,
 kasan-dev@googlegroups.com, xen-devel@lists.xen.org, Linux-MM,
 iommu@lists.linux-foundation.org, Brijesh Singh, Toshimitsu Kani,
 Radim Krčmář, Matt Fleming, Alexander Potapenko, "H. Peter Anvin",
 Larry Woodman, Jonathan Corbet, Joerg Roedel, "Michael S. Tsirkin",
 Ingo Molnar, Andrey Ryabinin, Dave Young, Rik van Riel, Arnd Bergmann,
 Konrad Rzeszutek Wilk, Borislav Petkov, Andy Lutomirski,
 Boris Ostrovsky, Dmitry Vyukov, Juergen Gross, Thomas Gleixner,
 Paolo Bonzini

On 7/11/2017 12:07 AM, Brian Gerst wrote:
> On Mon, Jul 10, 2017 at 3:41 PM, Tom Lendacky wrote:
>> On 7/8/2017 7:50 AM, Brian Gerst wrote:
>>> On Fri, Jul 7, 2017 at 9:38 AM, Tom Lendacky wrote:
>>>>
>>>> Update the CPU features to include identifying and reporting on the
>>>> Secure Memory Encryption (SME) feature.  SME is identified by CPUID
>>>> 0x8000001f, but requires BIOS support to enable it (set bit 23 of
>>>> MSR_K8_SYSCFG).  Only show the SME feature as available if reported by
>>>> CPUID and enabled by BIOS.
>>>>
>>>> Reviewed-by: Borislav Petkov
>>>> Signed-off-by: Tom Lendacky
>>>> ---
>>>>  arch/x86/include/asm/cpufeatures.h |  1 +
>>>>  arch/x86/include/asm/msr-index.h   |  2 ++
>>>>  arch/x86/kernel/cpu/amd.c          | 13 +++++++++++++
>>>>  arch/x86/kernel/cpu/scattered.c    |  1 +
>>>>  4 files changed, 17 insertions(+)
>>>>
>>>> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>>>> index 2701e5f..2b692df 100644
>>>> --- a/arch/x86/include/asm/cpufeatures.h
>>>> +++ b/arch/x86/include/asm/cpufeatures.h
>>>> @@ -196,6 +196,7 @@
>>>>
>>>>  #define X86_FEATURE_HW_PSTATE          ( 7*32+ 8) /* AMD HW-PState */
>>>>  #define X86_FEATURE_PROC_FEEDBACK      ( 7*32+ 9) /* AMD ProcFeedbackInterface */
>>>> +#define X86_FEATURE_SME                ( 7*32+10) /* AMD Secure Memory Encryption */
>>>
>>> Given that this feature is available only in long mode, this should be
>>> added to disabled-features.h as disabled for 32-bit builds.
>>
>> I can add that.  If the series needs a re-spin then I'll include this
>> change in the series, otherwise I can send a follow-on patch to handle
>> the feature for 32-bit builds if that works.
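For illustration, a follow-on disabled-features.h change could look
something like the sketch below.  The DISABLE_SME name, the CONFIG_X86_64
gating, and the assumption that the word 7 disabled mask is currently
empty are modeled on the existing DISABLE_* pattern rather than taken
from the posted series, so the actual patch may differ:

#ifdef CONFIG_X86_64
# define DISABLE_SME		0
#else
# define DISABLE_SME		(1 << (X86_FEATURE_SME & 31))
#endif

/* ... */

/* Word 7: assumes no other disabled feature occupies this mask yet */
#define DISABLED_MASK7		(DISABLE_SME)

With the bit in the disabled mask, cpu_feature_enabled(X86_FEATURE_SME)
evaluates to a compile-time zero on 32-bit builds, which is what would
let gcc drop the dependent code.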
>>
>>>
>>>>  #define X86_FEATURE_INTEL_PPIN         ( 7*32+14) /* Intel Processor Inventory Number */
>>>>  #define X86_FEATURE_INTEL_PT           ( 7*32+15) /* Intel Processor Trace */
>>>> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
>>>> index 18b1623..460ac01 100644
>>>> --- a/arch/x86/include/asm/msr-index.h
>>>> +++ b/arch/x86/include/asm/msr-index.h
>>>> @@ -352,6 +352,8 @@
>>>>  #define MSR_K8_TOP_MEM1                 0xc001001a
>>>>  #define MSR_K8_TOP_MEM2                 0xc001001d
>>>>  #define MSR_K8_SYSCFG                   0xc0010010
>>>> +#define MSR_K8_SYSCFG_MEM_ENCRYPT_BIT   23
>>>> +#define MSR_K8_SYSCFG_MEM_ENCRYPT       BIT_ULL(MSR_K8_SYSCFG_MEM_ENCRYPT_BIT)
>>>>  #define MSR_K8_INT_PENDING_MSG          0xc0010055
>>>>  /* C1E active bits in int pending message */
>>>>  #define K8_INTP_C1E_ACTIVE_MASK         0x18000000
>>>> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
>>>> index bb5abe8..c47ceee 100644
>>>> --- a/arch/x86/kernel/cpu/amd.c
>>>> +++ b/arch/x86/kernel/cpu/amd.c
>>>> @@ -611,6 +611,19 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>>>>  	 */
>>>>  	if (cpu_has_amd_erratum(c, amd_erratum_400))
>>>>  		set_cpu_bug(c, X86_BUG_AMD_E400);
>>>> +
>>>> +	/*
>>>> +	 * BIOS support is required for SME. If BIOS has not enabled SME
>>>> +	 * then don't advertise the feature (set in scattered.c)
>>>> +	 */
>>>> +	if (cpu_has(c, X86_FEATURE_SME)) {
>>>> +		u64 msr;
>>>> +
>>>> +		/* Check if SME is enabled */
>>>> +		rdmsrl(MSR_K8_SYSCFG, msr);
>>>> +		if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
>>>> +			clear_cpu_cap(c, X86_FEATURE_SME);
>>>> +	}
>>>
>>> This should be conditional on CONFIG_X86_64.
>>
>> If I make the scattered feature support conditional on CONFIG_X86_64
>> (based on comment below) then cpu_has() will always be false unless
>> CONFIG_X86_64 is enabled.  So this won't need to be wrapped by the
>> #ifdef.
>
> If you change it to use cpu_feature_enabled(), gcc will see that it is
> disabled and eliminate the dead code at compile time.
>
>>>>  }
>>>>
>>>>  static void init_amd_k8(struct cpuinfo_x86 *c)
>>>> diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
>>>> index 23c2350..05459ad 100644
>>>> --- a/arch/x86/kernel/cpu/scattered.c
>>>> +++ b/arch/x86/kernel/cpu/scattered.c
>>>> @@ -31,6 +31,7 @@ struct cpuid_bit {
>>>>  	{ X86_FEATURE_HW_PSTATE,	CPUID_EDX,  7, 0x80000007, 0 },
>>>>  	{ X86_FEATURE_CPB,		CPUID_EDX,  9, 0x80000007, 0 },
>>>>  	{ X86_FEATURE_PROC_FEEDBACK,	CPUID_EDX, 11, 0x80000007, 0 },
>>>> +	{ X86_FEATURE_SME,		CPUID_EAX,  0, 0x8000001f, 0 },
>>>
>>> This should also be conditional.  We don't want to set this feature on
>>> 32-bit, even if the processor has support.
>>
>> Can do.  See comment above about re-spin vs. follow-on patch.
>>
>> Thanks,
>> Tom
>
> A followup patch will be OK if there is no code that will get confused
> by the SME bit being present but not active.

The feature bit is mainly there for /proc/cpuinfo.  The code uses
sme_active() in order to determine how to behave.  Under CONFIG_X86_32,
sme_active() is always 0.

Based on the comment related to patch 7 (ioremap() of ISA range) I may
need to re-spin the patchset.  I'll include this change following the
recommendation from Boris to use the IS_ENABLED(CONFIG_X86_32) check to
clear the feature bit.

Thanks,
Tom

> --
> Brian Gerst
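For reference, a minimal sketch of what the IS_ENABLED(CONFIG_X86_32)
check in early_init_amd() might look like.  This only illustrates the
approach discussed above and is not the re-spun patch itself; the final
hunk may differ:

	/*
	 * BIOS support is required for SME. If BIOS has not enabled SME
	 * then don't advertise the feature (set in scattered.c). SME is
	 * only usable in long mode, so don't advertise it on 32-bit
	 * builds either.
	 */
	if (cpu_has(c, X86_FEATURE_SME)) {
		if (IS_ENABLED(CONFIG_X86_32)) {
			clear_cpu_cap(c, X86_FEATURE_SME);
		} else {
			u64 msr;

			/* Check if memory encryption is enabled by BIOS */
			rdmsrl(MSR_K8_SYSCFG, msr);
			if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
				clear_cpu_cap(c, X86_FEATURE_SME);
		}
	}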