From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alper Gun <alpergun@google.com>
Date: Wed, 7 Sep 2022 10:45:28 -0700
Subject: Re: [PATCH Part2 v6 31/49] KVM: x86: Introduce kvm_mmu_get_tdp_walk() for SEV-SNP use
To: Ashish Kalra
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-coco@lists.linux.dev, linux-mm@kvack.org, linux-crypto@vger.kernel.org,
	tglx@linutronix.de, mingo@redhat.com, jroedel@suse.de, thomas.lendacky@amd.com,
	hpa@zytor.com, ardb@kernel.org, pbonzini@redhat.com, seanjc@google.com,
	vkuznets@redhat.com, jmattson@google.com, luto@kernel.org,
	dave.hansen@linux.intel.com, slp@redhat.com, pgonda@google.com,
	peterz@infradead.org, srinivas.pandruvada@linux.intel.com, rientjes@google.com,
	dovmurik@linux.ibm.com, tobin@ibm.com, bp@alien8.de, michael.roth@amd.com,
	vbabka@suse.cz, kirill@shutemov.name, ak@linux.intel.com, tony.luck@intel.com,
	marcorr@google.com, sathyanarayanan.kuppuswamy@linux.intel.com,
	dgilbert@redhat.com, jarkko@kernel.org
In-Reply-To: <9151e79d4f5af888242b9589c0a106a49a97837c.1655761627.git.ashish.kalra@amd.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Jun 20, 2022 at
4:09 PM Ashish Kalra wrote:
>
> From: Brijesh Singh
>
> SEV-SNP VMs may call the page state change VMGEXIT to add a GPA as
> private or shared in the RMP table. The page state change VMGEXIT
> carries the RMP page level to be used in the RMP entry. If the page
> level in the TDP does not match the page level in the RMP, the result
> is a nested page fault (RMP violation).
>
> The SEV-SNP VMGEXIT handler will use kvm_mmu_get_tdp_walk() to get
> the current page level in the TDP for the given GPA and calculate a
> workable page level. If a GPA is mapped as a 4K page in the TDP, but
> the guest requested to add the GPA as 2M in the RMP entry, then the
> 2M request will be broken into 4K pages to keep the RMP and TDP
> page levels in sync.
>
> TDP SPTEs are RCU protected, so kvm_mmu_get_tdp_walk() must run in an
> RCU read-side critical section, bracketed by
> walk_shadow_page_lockless_begin() and walk_shadow_page_lockless_end().
> This fixes the "suspicious RCU usage" message seen with a lockdep
> kernel build.
>
> Signed-off-by: Brijesh Singh
> Signed-off-by: Ashish Kalra
> ---
>  arch/x86/kvm/mmu.h     |  2 ++
>  arch/x86/kvm/mmu/mmu.c | 33 +++++++++++++++++++++++++++++++++
>  2 files changed, 35 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index c99b15e97a0a..d55b5166389a 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -178,6 +178,8 @@ static inline bool is_nx_huge_page_enabled(void)
>  	return READ_ONCE(nx_huge_pages);
>  }
>
> +bool kvm_mmu_get_tdp_walk(struct kvm_vcpu *vcpu, gpa_t gpa, kvm_pfn_t *pfn, int *level);
> +
>  static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  					u32 err, bool prefetch)
>  {
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 569021af349a..c1ac486e096e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4151,6 +4151,39 @@ kvm_pfn_t kvm_mmu_map_tdp_page(struct kvm_vcpu *vcpu, gpa_t gpa,
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_map_tdp_page);
>
> +bool kvm_mmu_get_tdp_walk(struct kvm_vcpu *vcpu, gpa_t gpa, kvm_pfn_t *pfn, int *level)
> +{
> +	u64 sptes[PT64_ROOT_MAX_LEVEL + 1];
> +	int leaf, root;
> +
> +	walk_shadow_page_lockless_begin(vcpu);
> +
> +	if (is_tdp_mmu(vcpu->arch.mmu))
> +		leaf = kvm_tdp_mmu_get_walk(vcpu, gpa, sptes, &root);
> +	else
> +		leaf = get_walk(vcpu, gpa, sptes, &root);
> +
> +	walk_shadow_page_lockless_end(vcpu);
> +
> +	if (unlikely(leaf < 0))
> +		return false;
> +
> +	/* Check if the leaf SPTE is present */
> +	if (!is_shadow_present_pte(sptes[leaf]))
> +		return false;
> +
> +	*pfn = spte_to_pfn(sptes[leaf]);
> +	if (leaf > PG_LEVEL_4K) {
> +		u64 page_mask = KVM_PAGES_PER_HPAGE(leaf) - KVM_PAGES_PER_HPAGE(leaf - 1);
> +		*pfn |= (gpa_to_gfn(gpa) & page_mask);

Similar to the discussion in the other patch, I believe you should apply the same fix here as well.
It should be:

    u64 page_mask = KVM_PAGES_PER_HPAGE(leaf) - 1;

> +	}
> +
> +	*level = leaf;
> +
> +	return true;
> +}
> +EXPORT_SYMBOL_GPL(kvm_mmu_get_tdp_walk);
> +
>  static void nonpaging_init_context(struct kvm_mmu *context)
>  {
>  	context->page_fault = nonpaging_page_fault;
> --
> 2.25.1
>