Subject: Re: [PATCH v9 04/38] x86/CPU/AMD: Add the Secure Memory Encryption CPU feature
From: Tom Lendacky
Date: Tue, 11 Jul 2017 10:14:34 -0500
To: Borislav Petkov, Brian Gerst
Cc: linux-arch, linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, the arch/x86 maintainers, kexec@lists.infradead.org, Linux Kernel Mailing List, kasan-dev@googlegroups.com, xen-devel@lists.xen.org, Linux-MM, iommu@lists.linux-foundation.org, Brijesh Singh, Toshimitsu Kani, Radim Krčmář, Matt Fleming, Alexander Potapenko, "H. Peter Anvin", Larry Woodman, Jonathan Corbet, Joerg Roedel, "Michael S. Tsirkin", Ingo Molnar, Andrey Ryabinin, Dave Young, Rik van Riel, Arnd Bergmann, Konrad Rzeszutek Wilk, Andy Lutomirski, Boris Ostrovsky, Dmitry Vyukov, Juergen Gross, Thomas Gleixner, Paolo Bonzini
In-Reply-To: <20170711055659.GA4554@nazgul.tnic>
References: <20170707133804.29711.1616.stgit@tlendack-t1.amdoffice.net> <20170707133850.29711.29549.stgit@tlendack-t1.amdoffice.net> <20170711055659.GA4554@nazgul.tnic>

On 7/11/2017 12:56 AM, Borislav Petkov wrote:
> On Tue, Jul 11, 2017 at 01:07:46AM -0400, Brian Gerst wrote:
>>> If I make the scattered feature support conditional on CONFIG_X86_64
>>> (based on comment below) then cpu_has() will always be false unless
>>> CONFIG_X86_64 is enabled. So this won't need to be wrapped by the
>>> #ifdef.
>>
>> If you change it to use cpu_feature_enabled(), gcc will see that it is
>> disabled and eliminate the dead code at compile time.
>
> Just do this:
>
> 	if (cpu_has(c, X86_FEATURE_SME)) {
> 		if (IS_ENABLED(CONFIG_X86_32)) {
> 			clear_cpu_cap(c, X86_FEATURE_SME);
> 		} else {
> 			u64 msr;
>
> 			/* Check if SME is enabled */
> 			rdmsrl(MSR_K8_SYSCFG, msr);
> 			if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
> 				clear_cpu_cap(c, X86_FEATURE_SME);
> 		}
> 	}
>
> so that it is explicit that we disable it on 32-bit and we can save us
> the ifdeffery elsewhere.

I'll use this method for the change and avoid the #ifdefs.

Thanks,
Tom

> Thanks.