From: Marc Orr <marcorr@google.com>
Date: Thu, 23 Jun 2022 17:06:16 -0700
Subject: Re: [PATCH Part2 v6 07/49] x86/sev: Invalid pages from direct map when adding it to RMP table
To: Ashish Kalra
Cc: x86, LKML, kvm list, linux-coco@lists.linux.dev, linux-mm@kvack.org,
 Linux Crypto Mailing List, Thomas Gleixner, Ingo Molnar, Joerg Roedel,
 "Lendacky, Thomas", "H. Peter Anvin", Ard Biesheuvel, Paolo Bonzini,
 Sean Christopherson, Vitaly Kuznetsov, Jim Mattson, Andy Lutomirski,
 Dave Hansen, Sergio Lopez, Peter Gonda, Peter Zijlstra,
 Srinivas Pandruvada, David Rientjes, Dov Murik, Tobin Feldman-Fitzthum,
 Borislav Petkov, "Roth, Michael", Vlastimil Babka, "Kirill A. Shutemov",
 Andi Kleen, Tony Luck, Sathyanarayanan Kuppuswamy, Alper Gun,
 "Dr. David Alan Gilbert", jarkko@kernel.org
In-Reply-To: <243778c282cd55a554af9c11d2ecd3ff9ea6820f.1655761627.git.ashish.kalra@amd.com>
References: <243778c282cd55a554af9c11d2ecd3ff9ea6820f.1655761627.git.ashish.kalra@amd.com>
Peter Anvin" , Ard Biesheuvel , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Jim Mattson , Andy Lutomirski , Dave Hansen , Sergio Lopez , Peter Gonda , Peter Zijlstra , Srinivas Pandruvada , David Rientjes , Dov Murik , Tobin Feldman-Fitzthum , Borislav Petkov , "Roth, Michael" , Vlastimil Babka , "Kirill A . Shutemov" , Andi Kleen , Tony Luck , Sathyanarayanan Kuppuswamy , Alper Gun , "Dr . David Alan Gilbert" , jarkko@kernel.org Content-Type: text/plain; charset="UTF-8" ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=sE1nWpmu; spf=pass (imf01.hostedemail.com: domain of marcorr@google.com designates 209.85.161.51 as permitted sender) smtp.mailfrom=marcorr@google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1656029188; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=5ztn/U14y4+bLEVfHDiR279Ws4kSCGIatUDLg6ETdSA=; b=v4/62k5HK5UO+xyqHNMLjB97DeMf9Ldk4aaTNLjB+zwOZW/LqjPWZi97L8UksYaJcXziq2 T3YvLLziZH7mxXqi+mK/Jg89JcgWYOCSKY0z6F61GYnLOOnOpujQjGxwp0pYSplHN6CSe1 HefpifSkH6JiWq6Lj/LHO7Q6SN4yHFI= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1656029188; a=rsa-sha256; cv=none; b=jDd2ajER2uSxDkCNImjCWsfSamU04oX0QoLSCPvfrBBPpwIMFRBaflkTLb2x41t++vVHQB dTGfbZniCIMkPdqJECVub7EDSxDZypfAuFV3eGGVkLh9VG4TnAYuA+dCa8GlKMJk9pyv8T hJKe6CV1sOjs/kv8Vj/mmIN73rbflCk= X-Stat-Signature: bjzn8uh5yuup9atag19855uwdsmh8fp6 X-Rspamd-Queue-Id: 7153640018 X-Rspam-User: Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=sE1nWpmu; spf=pass (imf01.hostedemail.com: domain of marcorr@google.com designates 209.85.161.51 as permitted sender) smtp.mailfrom=marcorr@google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam02 X-HE-Tag: 1656029188-610505 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, Jun 20, 2022 at 4:03 PM Ashish Kalra wrote: > > From: Brijesh Singh > > The integrity guarantee of SEV-SNP is enforced through the RMP table. > The RMP is used with standard x86 and IOMMU page tables to enforce memory > restrictions and page access rights. The RMP check is enforced as soon as > SEV-SNP is enabled globally in the system. When hardware encounters an > RMP checks failure, it raises a page-fault exception. nit: "RMP checks ..." -> "RMP-check ..." > > The rmp_make_private() and rmp_make_shared() helpers are used to add > or remove the pages from the RMP table. Improve the rmp_make_private() to > invalid state so that pages cannot be used in the direct-map after its nit: "invalid state ..." -> "invalidate state ..." nit: "... after its" -> "... after they're" (Here, and in the patch subject too.) > added in the RMP table, and restore to its default valid permission after nit: "... restore to its ..." -> "... restored to their ..." > the pages are removed from the RMP table. 
>
> Signed-off-by: Brijesh Singh
> ---
>  arch/x86/kernel/sev.c | 61 ++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 60 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index f6c64a722e94..734cddd837f5 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -2451,10 +2451,42 @@ int psmash(u64 pfn)
>  }
>  EXPORT_SYMBOL_GPL(psmash);
>
> +static int restore_direct_map(u64 pfn, int npages)
> +{
> +        int i, ret = 0;
> +
> +        for (i = 0; i < npages; i++) {
> +                ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));
> +                if (ret)
> +                        goto cleanup;
> +        }
> +
> +cleanup:
> +        WARN(ret > 0, "Failed to restore direct map for pfn 0x%llx\n", pfn + i);
> +        return ret;
> +}
> +
> +static int invalid_direct_map(unsigned long pfn, int npages)

I think we should rename this function to "invalidate_direct_map()".

> +{
> +        int i, ret = 0;
> +
> +        for (i = 0; i < npages; i++) {
> +                ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
> +                if (ret)
> +                        goto cleanup;
> +        }
> +
> +        return 0;
> +
> +cleanup:
> +        restore_direct_map(pfn, i);
> +        return ret;
> +}
> +
>  static int rmpupdate(u64 pfn, struct rmpupdate *val)
>  {
>          unsigned long paddr = pfn << PAGE_SHIFT;
> -        int ret;
> +        int ret, level, npages;
>
>          if (!pfn_valid(pfn))
>                  return -EINVAL;
> @@ -2462,11 +2494,38 @@ static int rmpupdate(u64 pfn, struct rmpupdate *val)
>          if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
>                  return -ENXIO;
>
> +        level = RMP_TO_X86_PG_LEVEL(val->pagesize);
> +        npages = page_level_size(level) / PAGE_SIZE;
> +
> +        /*
> +         * If page is getting assigned in the RMP table then unmap it from the
> +         * direct map.
> +         */
> +        if (val->assigned) {
> +                if (invalid_direct_map(pfn, npages)) {
> +                        pr_err("Failed to unmap pfn 0x%llx pages %d from direct_map\n",
> +                               pfn, npages);
> +                        return -EFAULT;
> +                }
> +        }
> +
>          /* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
>          asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
>                       : "=a"(ret)
>                       : "a"(paddr), "c"((unsigned long)val)
>                       : "memory", "cc");
> +
> +        /*
> +         * Restore the direct map after the page is removed from the RMP table.
> +         */
> +        if (!ret && !val->assigned) {
> +                if (restore_direct_map(pfn, npages)) {
> +                        pr_err("Failed to map pfn 0x%llx pages %d in direct_map\n",
> +                                pfn, npages);
> +                        return -EFAULT;
> +                }
> +        }
> +
>          return ret;
>  }
>
> --
> 2.25.1
>
>
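
Also, if you pick up the invalidate_direct_map() rename: completely
untested sketch on my end (same helpers as used in this patch), but the
loop reads a bit simpler without the cleanup label:

static int invalidate_direct_map(unsigned long pfn, int npages)
{
        int i, ret;

        for (i = 0; i < npages; i++) {
                ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
                if (ret) {
                        /* Re-validate whatever was already invalidated. */
                        restore_direct_map(pfn, i);
                        return ret;
                }
        }

        return 0;
}

Behavior should be the same as the version above; feel free to ignore if
you prefer keeping the goto style.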