From mboxrd@z Thu Jan  1 00:00:00 1970
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	kirill.shutemov@linux.intel.com, kys@microsoft.com,
	haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
	luto@kernel.org, peterz@infradead.org, akpm@linux-foundation.org,
	urezki@gmail.com, hch@infradead.org, lstoakes@gmail.com,
	thomas.lendacky@amd.com, ardb@kernel.org, jroedel@suse.de,
	seanjc@google.com, rick.p.edgecombe@intel.com,
	sathyanarayanan.kuppuswamy@linux.intel.com,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
	linux-hyperv@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 5/8] x86/mm: Mark CoCo VM pages not present while changing encrypted state
Date: Tue, 21 Nov 2023 13:20:13 -0800
Message-Id: <20231121212016.1154303-6-mhklinux@outlook.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231121212016.1154303-1-mhklinux@outlook.com>
References: <20231121212016.1154303-1-mhklinux@outlook.com>
Reply-To: mhklinux@outlook.com
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Michael Kelley <mhklinux@outlook.com>

In a CoCo VM when a page transitions from encrypted to decrypted, or
vice versa, attributes in the PTE must be updated *and* the hypervisor
must be notified of the change. Because there are two separate steps,
there's a window where the settings are inconsistent. Normally the code
that initiates the transition (via set_memory_decrypted() or
set_memory_encrypted()) ensures that the memory is not being accessed
during a transition, so the window of inconsistency is not a problem.
However, the load_unaligned_zeropad() function can read arbitrary memory
pages at arbitrary times, which could access a transitioning page during
the window. In such a case, CoCo VM specific exceptions are taken
(depending on the CoCo architecture in use). Current code in those
exception handlers recovers and does "fixup" on the result returned by
load_unaligned_zeropad().

Unfortunately, this exception handling can't work in paravisor scenarios
(TDX Partitioning and SEV-SNP in vTOM mode) if the exceptions are routed
to the paravisor. The paravisor can't do the load_unaligned_zeropad()
fixup, so the exceptions would need to be forwarded from the paravisor
to the Linux guest, but there are no architectural specs for how to do
that.

Fortunately, there's a simpler way to solve the problem by changing the
core transition code in __set_memory_enc_pgtable() to do the following:

1. Remove aliasing mappings
2. Flush the data cache if needed
3. Remove the PRESENT bit from the PTEs of all transitioning pages
4. Notify the hypervisor of the impending encryption status change
5. Set/clear the encryption attribute as appropriate
6. Flush the TLB so the changed encryption attribute isn't visible
7. Notify the hypervisor after the encryption status change
8. Add back the PRESENT bit, making the changed attribute visible

With this approach, load_unaligned_zeropad() just takes its normal
page-fault-based fixup path if it touches a page that is transitioning.
As a result, load_unaligned_zeropad() and CoCo VM page transitioning are
completely decoupled. CoCo VM page transitions can proceed without
needing to handle architecture-specific exceptions and fix things up.
This decoupling reduces the complexity due to separate TDX and SEV-SNP
fixup paths, and gives more freedom to revise and introduce new
capabilities in future versions of the TDX and SEV-SNP architectures.
Paravisor scenarios work properly without needing to forward exceptions.

Because Step 3 always does a TLB flush, the separate TLB flush callback
is no longer required and is removed.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
---
 arch/x86/coco/tdx/tdx.c         | 20 --------------
 arch/x86/hyperv/ivm.c           |  6 -----
 arch/x86/include/asm/x86_init.h |  2 --
 arch/x86/kernel/x86_init.c      |  2 --
 arch/x86/mm/mem_encrypt_amd.c   |  6 -----
 arch/x86/mm/pat/set_memory.c    | 48 ++++++++++++++++++++++++---------
 6 files changed, 35 insertions(+), 49 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 1b5d17a9f70d..39ead21bcba6 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -697,24 +697,6 @@ bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve)
 	return true;
 }
 
-static bool tdx_tlb_flush_required(bool private)
-{
-	/*
-	 * TDX guest is responsible for flushing TLB on private->shared
-	 * transition. VMM is responsible for flushing on shared->private.
-	 *
-	 * The VMM _can't_ flush private addresses as it can't generate PAs
-	 * with the guest's HKID. Shared memory isn't subject to integrity
-	 * checking, i.e. the VMM doesn't need to flush for its own protection.
-	 *
-	 * There's no need to flush when converting from shared to private,
-	 * as flushing is the VMM's responsibility in this case, e.g. it must
-	 * flush to avoid integrity failures in the face of a buggy or
-	 * malicious guest.
-	 */
-	return !private;
-}
-
 static bool tdx_cache_flush_required(void)
 {
 	/*
@@ -876,9 +858,7 @@ void __init tdx_early_init(void)
 	 */
 	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
 	x86_platform.guest.enc_status_change_finish = tdx_enc_status_change_finish;
-
 	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
-	x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;
 
 	/*
 	 * TDX intercepts the RDMSR to read the X2APIC ID in the parallel
diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 8ba18635e338..4005c573e00c 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -550,11 +550,6 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
 	return result;
 }
 
-static bool hv_vtom_tlb_flush_required(bool private)
-{
-	return true;
-}
-
 static bool hv_vtom_cache_flush_required(void)
 {
 	return false;
@@ -614,7 +609,6 @@ void __init hv_vtom_init(void)
 	x86_platform.hyper.is_private_mmio = hv_is_private_mmio;
 	x86_platform.guest.enc_cache_flush_required = hv_vtom_cache_flush_required;
-	x86_platform.guest.enc_tlb_flush_required = hv_vtom_tlb_flush_required;
 	x86_platform.guest.enc_status_change_finish = hv_vtom_set_host_visibility;
 
 	/* Set WB as the default cache mode. */
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index c878616a18b8..5b3a9a214815 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -146,13 +146,11 @@ struct x86_init_acpi {
  *
  * @enc_status_change_prepare	Notify HV before the encryption status of a range is changed
  * @enc_status_change_finish	Notify HV after the encryption status of a range is changed
- * @enc_tlb_flush_required	Returns true if a TLB flush is needed before changing page encryption status
  * @enc_cache_flush_required	Returns true if a cache flush is needed before changing page encryption status
  */
 struct x86_guest {
 	bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
 	bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
-	bool (*enc_tlb_flush_required)(bool enc);
 	bool (*enc_cache_flush_required)(void);
 };
 
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index a37ebd3b4773..1c0d23a2b6cf 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -133,7 +133,6 @@ static void default_nmi_init(void) { };
 
 static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return true; }
 static bool enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return true; }
-static bool enc_tlb_flush_required_noop(bool enc) { return false; }
 static bool enc_cache_flush_required_noop(void) { return false; }
 static bool is_private_mmio_noop(u64 addr) {return false; }
 
@@ -156,7 +155,6 @@ struct x86_platform_ops x86_platform __ro_after_init = {
 	.guest = {
 		.enc_status_change_prepare = enc_status_change_prepare_noop,
 		.enc_status_change_finish = enc_status_change_finish_noop,
-		.enc_tlb_flush_required = enc_tlb_flush_required_noop,
 		.enc_cache_flush_required = enc_cache_flush_required_noop,
 	},
 };
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index a68f2dda0948..652cc61b89b6 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -242,11 +242,6 @@ static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
 	return pfn;
 }
 
-static bool amd_enc_tlb_flush_required(bool enc)
-{
-	return true;
-}
-
 static bool amd_enc_cache_flush_required(void)
 {
 	return !cpu_feature_enabled(X86_FEATURE_SME_COHERENT);
@@ -464,7 +459,6 @@ void __init sme_early_init(void)
 	x86_platform.guest.enc_status_change_prepare = amd_enc_status_change_prepare;
 	x86_platform.guest.enc_status_change_finish = amd_enc_status_change_finish;
-	x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required;
 	x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required;
 
 	/*
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index d7ef8d312a47..b125035608d5 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2019,6 +2019,11 @@ int set_memory_wb(unsigned long addr, int numpages)
 }
 EXPORT_SYMBOL(set_memory_wb);
 
+static int set_memory_p(unsigned long *addr, int numpages)
+{
+	return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 0);
+}
+
 /* Prevent speculative access to a page by marking it not-present */
 #ifdef CONFIG_X86_64
 int set_mce_nospec(unsigned long pfn)
@@ -2049,11 +2054,6 @@ int set_mce_nospec(unsigned long pfn)
 	return rc;
 }
 
-static int set_memory_p(unsigned long *addr, int numpages)
-{
-	return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 0);
-}
-
 /* Restore full speculative operation to the pfn. */
 int clear_mce_nospec(unsigned long pfn)
 {
@@ -2144,6 +2144,23 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
 		addr &= PAGE_MASK;
 
+	/*
+	 * The caller must ensure that the memory being transitioned between
+	 * encrypted and decrypted is not being accessed.  But if
+	 * load_unaligned_zeropad() touches the "next" page, it may generate a
+	 * read access the caller has no control over. To ensure such accesses
+	 * cause a normal page fault for the load_unaligned_zeropad() handler,
+	 * mark the pages not present until the transition is complete.  We
+	 * don't want a #VE or #VC fault due to a mismatch in the memory
+	 * encryption status, since paravisor configurations can't cleanly do
+	 * the load_unaligned_zeropad() handling in the paravisor.
+	 *
+	 * set_memory_np() flushes the TLB.
+	 */
+	ret = set_memory_np(addr, numpages);
+	if (ret)
+		return ret;
+
 	memset(&cpa, 0, sizeof(cpa));
 	cpa.vaddr = &addr;
 	cpa.numpages = numpages;
@@ -2156,14 +2173,16 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	vm_unmap_aliases();
 
 	/* Flush the caches as needed before changing the encryption attribute. */
-	if (x86_platform.guest.enc_tlb_flush_required(enc))
-		cpa_flush(&cpa, x86_platform.guest.enc_cache_flush_required());
+	if (x86_platform.guest.enc_cache_flush_required())
+		cpa_flush(&cpa, 1);
 
 	/* Notify hypervisor that we are about to set/clr encryption attribute. */
 	if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
 		return -EIO;
 
 	ret = __change_page_attr_set_clr(&cpa, 1);
+	if (ret)
+		return ret;
 
 	/*
 	 * After changing the encryption attribute, we need to flush TLBs again
@@ -2174,13 +2193,16 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	 */
 	cpa_flush(&cpa, 0);
 
-	/* Notify hypervisor that we have successfully set/clr encryption attribute. */
-	if (!ret) {
-		if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
-			ret = -EIO;
-	}
+	/* Notify hypervisor that we have successfully set/clr encryption attr. */
+	if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
+		return -EIO;
 
-	return ret;
+	/*
+	 * Now that the hypervisor is sync'ed with the page table changes
+	 * made here, add back _PAGE_PRESENT.  set_memory_p() does not flush
+	 * the TLB.
+	 */
+	return set_memory_p(&addr, numpages);
 }
 
 static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
-- 
2.25.1