Date: Wed, 30 Apr 2025 17:56:48 +0100
In-Reply-To: <20250430165655.605595-1-tabba@google.com>
Mime-Version: 1.0
References: <20250430165655.605595-1-tabba@google.com>
X-Mailer: git-send-email 2.49.0.967.g6a0df3ecc3-goog
Message-ID: <20250430165655.605595-7-tabba@google.com>
Subject: [PATCH v8 06/13] KVM: x86: Generalize private fault lookups to guest_memfd fault lookups
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com
Content-Type: text/plain; charset="UTF-8"

Until now, faults to private memory backed by guest_memfd have always been
consumed from guest_memfd, whereas faults to shared memory are consumed from
anonymous memory. Subsequent patches will allow guest_memfd-backed memory to
be shared in place and mapped by the host; faults to such in-place shared
memory should be consumed from guest_memfd as well.

To facilitate that, generalize the fault lookups. Since only private memory
is currently consumed from guest_memfd, this patch does not change the
behavior.
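An aside for reviewers (not part of the patch): a rough sketch of where this
generalization is headed once in-place sharing lands in later patches. The
kvm_gmem_shared_in_place() helper below is purely hypothetical and only
illustrates the distinction between "the fault is private" and "the fault is
consumed from guest_memfd":

	/* Hypothetical future shape of the helper; not what this patch adds. */
	static inline bool kvm_mem_from_gmem(struct kvm *kvm, gfn_t gfn)
	{
		/* Private memory is always consumed from guest_memfd. */
		if (kvm_mem_is_private(kvm, gfn))
			return true;

		/*
		 * With in-place sharing, shared memory that lives in
		 * guest_memfd would be consumed from it as well
		 * (kvm_gmem_shared_in_place() is a made-up name here).
		 */
		return kvm_gmem_shared_in_place(kvm, gfn);
	}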
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/x86/kvm/mmu/mmu.c   | 19 +++++++++----------
 include/linux/kvm_host.h |  6 ++++++
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d5dd869c890..08eebd24a0e1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3258,7 +3258,7 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 
 static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+				       gfn_t gfn, int max_level, bool is_gmem)
 {
 	struct kvm_lpage_info *linfo;
 	int host_level;
@@ -3270,7 +3270,7 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
+	if (is_gmem)
 		return max_level;
 
 	if (max_level == PG_LEVEL_4K)
@@ -3283,10 +3283,9 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_has_gmem(slot) &&
-			  kvm_mem_is_private(kvm, gfn);
+	bool is_gmem = kvm_slot_has_gmem(slot) && kvm_mem_from_gmem(kvm, gfn);
 
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
+	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_gmem);
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -4465,7 +4464,7 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
+static u8 kvm_max_gmem_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 					u8 max_level, int gmem_order)
 {
 	u8 req_max_level;
@@ -4491,7 +4490,7 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 			      r == RET_PF_RETRY, fault->map_writable);
 }
 
-static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
+static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
 				       struct kvm_page_fault *fault)
 {
 	int max_order, r;
@@ -4509,8 +4508,8 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level = kvm_max_gmem_mapping_level(vcpu->kvm, fault->pfn,
+						      fault->max_level, max_order);
 
 	return RET_PF_CONTINUE;
 }
@@ -4521,7 +4520,7 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
 	if (fault->is_private)
-		return kvm_mmu_faultin_pfn_private(vcpu, fault);
+		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
 	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9616ee6acc7..cdcd7ac091b5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2514,6 +2514,12 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 }
 #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */
 
+static inline bool kvm_mem_from_gmem(struct kvm *kvm, gfn_t gfn)
+{
+	/* For now, only private memory gets consumed from guest_memfd. */
+	return kvm_mem_is_private(kvm, gfn);
+}
+
 #ifdef CONFIG_KVM_GMEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
-- 
2.49.0.901.g37484f566f-goog