From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 13 Jun 2025 15:08:39 -0700
In-Reply-To: <20250611133330.1514028-11-tabba@google.com>
Mime-Version: 1.0
References: <20250611133330.1514028-1-tabba@google.com> <20250611133330.1514028-11-tabba@google.com>
Message-ID: 
Subject: Re: [PATCH v12 10/18] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory
From: Sean Christopherson
To: Fuad Tabba
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev, pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com
Content-Type: text/plain; charset="us-ascii"

On Wed, Jun 11, 2025, Fuad Tabba wrote:
> From: Ackerley Tng
> 
> For memslots backed by guest_memfd with shared mem support, the KVM MMU
> must always fault in pages from guest_memfd, and not from the host
> userspace_addr. Update the fault handler to do so.

And with a KVM_MEMSLOT_GUEST_MEMFD_ONLY flag, this becomes super obvious.

> This patch also refactors related function names for accuracy:

This patch. And phrase changelogs as commands.

> kvm_mem_is_private() returns true only when the current private/shared
> state (in the CoCo sense) of the memory is private, and returns false if
> the current state is shared explicitly or implicitly, e.g., belongs to a
> non-CoCo VM.

Again, state changes as commands. For the above, it's not obvious if you're
talking about the existing code versus the state of things after "this patch".

> kvm_mmu_faultin_pfn_gmem() is updated to indicate that it can be used to
> fault in not just private memory, but more generally, from guest_memfd.

> +static inline u8 kvm_max_level_for_order(int order)

Do not use "inline" for functions that are visible only to the local
compilation unit. "inline" is just a hint, and modern compilers are smart
enough to inline functions when appropriate without a hint.

A longer explanation/rant here:
https://lore.kernel.org/all/ZAdfX+S323JVWNZC@google.com
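For illustration only, a standalone sketch of that shape without the hint; the
order-to-level thresholds here are assumptions for the example, not taken from
the patch, and the compiler is still free to inline the helper on its own:

	#include <stdint.h>

	/* File-local helper: "static" alone is enough, no "inline" needed. */
	static uint8_t max_level_for_order(int order)
	{
		/* Hypothetical thresholds, for illustration only. */
		if (order >= 18)
			return 3;	/* 1G */
		if (order >= 9)
			return 2;	/* 2M */
		return 1;		/* 4K */
	}

	int main(void)
	{
		return max_level_for_order(9) == 2 ? 0 : 1;
	}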
> +static inline int kvm_gmem_max_mapping_level(const struct kvm_memory_slot *slot,
> +					      gfn_t gfn, int max_level)
> +{
> +	int max_order;
> 
>  	if (max_level == PG_LEVEL_4K)
>  		return PG_LEVEL_4K;

This is dead code, the one and only caller has *just* checked for this condition.

> 
> -	host_level = host_pfn_mapping_level(kvm, gfn, slot);
> -	return min(host_level, max_level);
> +	max_order = kvm_gmem_mapping_order(slot, gfn);
> +	return min(max_level, kvm_max_level_for_order(max_order));
>  }

...

> -static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
> -					u8 max_level, int gmem_order)
> +static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,

This is comically verbose. C ain't Java. And having two separate helpers makes
it *really* hard to (a) even see there are TWO helpers in the first place, and
(b) understand how they differ.

Gah, and not your bug, but completely ignoring the RMP in
kvm_mmu_max_mapping_level() is wrong. It "works" because guest_memfd doesn't
(yet) support dirty logging, and no one enables the NX hugepage mitigation on
AMD hosts.

We could plumb in the pfn and private info, but I don't really see the point,
at least not at this time.

> +					    struct kvm_page_fault *fault,
> +					    int order)
>  {
> -	u8 req_max_level;
> +	u8 max_level = fault->max_level;
> 
>  	if (max_level == PG_LEVEL_4K)
>  		return PG_LEVEL_4K;
> 
> -	max_level = min(kvm_max_level_for_order(gmem_order), max_level);
> +	max_level = min(kvm_max_level_for_order(order), max_level);
>  	if (max_level == PG_LEVEL_4K)
>  		return PG_LEVEL_4K;
> 
> -	req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
> -	if (req_max_level)
> -		max_level = min(max_level, req_max_level);
> +	if (fault->is_private) {
> +		u8 level = kvm_x86_call(private_max_mapping_level)(kvm, fault->pfn);

Hmm, so the interesting thing here is that (IIRC) the RMP restrictions aren't
just on the private pages, they also apply to the HYPERVISOR/SHARED pages.
(Don't quote me on that).

Regardless, I'm leaning toward dropping the "private" part, and making SNP deal
with the intricacies of the RMP:

	/* Some VM types have additional restrictions, e.g. SNP's RMP. */
	req_max_level = kvm_x86_call(max_mapping_level)(kvm, fault);
	if (req_max_level)
		max_level = min(max_level, req_max_level);

Then we can get to something like:

static int kvm_gmem_max_mapping_level(struct kvm *kvm, int order,
				      struct kvm_page_fault *fault)
{
	int max_level, req_max_level;

	max_level = kvm_max_level_for_order(order);
	if (max_level == PG_LEVEL_4K)
		return PG_LEVEL_4K;

	req_max_level = kvm_x86_call(max_mapping_level)(kvm, fault);
	if (req_max_level)
		max_level = min(max_level, req_max_level);

	return max_level;
}

int kvm_mmu_max_mapping_level(struct kvm *kvm,
			      const struct kvm_memory_slot *slot, gfn_t gfn)
{
	int max_level;

	max_level = kvm_lpage_info_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM);
	if (max_level == PG_LEVEL_4K)
		return PG_LEVEL_4K;

	/* TODO: Comment goes here about KVM not supporting this path (yet). */
	if (kvm_mem_is_private(kvm, gfn))
		return PG_LEVEL_4K;

	if (kvm_is_memslot_gmem_only(slot)) {
		int order = kvm_gmem_mapping_order(slot, gfn);

		return min(max_level, kvm_gmem_max_mapping_level(kvm, order, NULL));
	}

	return min(max_level, host_pfn_mapping_level(kvm, gfn, slot));
}

static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
				    struct kvm_page_fault *fault)
{
	struct kvm *kvm = vcpu->kvm;
	int order, r;

	if (!kvm_slot_has_gmem(fault->slot)) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return -EFAULT;
	}

	r = kvm_gmem_get_pfn(kvm, fault->slot, fault->gfn, &fault->pfn,
			     &fault->refcounted_page, &order);
	if (r) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return r;
	}

	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
	fault->max_level = kvm_gmem_max_mapping_level(kvm, order, fault);

	return RET_PF_CONTINUE;
}

int sev_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
{
	int level, rc;
	bool assigned;

	if (!sev_snp_guest(kvm))
		return 0;

	if (WARN_ON_ONCE(!fault) || !fault->is_private)
		return 0;

	rc = snp_lookup_rmpentry(fault->pfn, &assigned, &level);
	if (rc || !assigned)
		return PG_LEVEL_4K;

	return level;
}

> +/*
> + * Returns true if the given gfn's private/shared status (in the CoCo sense) is
> + * private.
> + *
> + * A return value of false indicates that the gfn is explicitly or implicitly
> + * shared (i.e., non-CoCo VMs).
> + */
>  static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
>  {
> -	return IS_ENABLED(CONFIG_KVM_GMEM) &&
> -	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
> +	struct kvm_memory_slot *slot;
> +
> +	if (!IS_ENABLED(CONFIG_KVM_GMEM))
> +		return false;
> +
> +	slot = gfn_to_memslot(kvm, gfn);
> +	if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
> +		/*
> +		 * Without in-place conversion support, if a guest_memfd memslot
> +		 * supports shared memory, then all the slot's memory is
> +		 * considered not private, i.e., implicitly shared.
> +		 */
> +		return false;

Why!?!? Just make sure KVM_MEMORY_ATTRIBUTE_PRIVATE is mutually exclusive with
mappable guest_memfd. You need to do that no matter what. Then you don't need
to sprinkle special case code all over the place.

> +	}
> +
> +	return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
>  }
>  #else
>  static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
> -- 
> 2.50.0.rc0.642.g800a2b2222-goog
> 
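One possible shape for that mutual exclusion, as a standalone sketch rather
than the series' actual code (the stub types and helper names below are
assumptions for illustration): reject KVM_MEMORY_ATTRIBUTE_PRIVATE for any gfn
backed by a shared-capable guest_memfd slot, so kvm_mem_is_private() never
needs the special case quoted above.

	#include <stdbool.h>
	#include <errno.h>

	#define KVM_MEMORY_ATTRIBUTE_PRIVATE	(1ULL << 3)

	/* Stub types and lookups standing in for the real KVM ones. */
	struct kvm { int unused; };
	struct kvm_memory_slot { bool supports_shared; };

	static const struct kvm_memory_slot *gfn_to_memslot_stub(struct kvm *kvm,
								  unsigned long gfn)
	{
		(void)kvm; (void)gfn;
		return NULL;
	}

	static bool slot_supports_shared_stub(const struct kvm_memory_slot *slot)
	{
		return slot && slot->supports_shared;
	}

	/*
	 * Refuse to set the PRIVATE attribute on a gfn whose memslot is a
	 * shared-capable (mappable) guest_memfd slot, keeping the two states
	 * mutually exclusive.
	 */
	static int check_private_attr(struct kvm *kvm, unsigned long gfn,
				      unsigned long long attrs)
	{
		if ((attrs & KVM_MEMORY_ATTRIBUTE_PRIVATE) &&
		    slot_supports_shared_stub(gfn_to_memslot_stub(kvm, gfn)))
			return -EINVAL;

		return 0;
	}

	int main(void)
	{
		struct kvm kvm = { 0 };

		return check_private_attr(&kvm, 0, KVM_MEMORY_ATTRIBUTE_PRIVATE);
	}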