From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 Jun 2025 14:33:23 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
Mime-Version: 1.0
References: <20250611133330.1514028-1-tabba@google.com>
X-Mailer: git-send-email 2.50.0.rc0.642.g800a2b2222-goog
Message-ID: <20250611133330.1514028-12-tabba@google.com>
Subject: [PATCH v12 11/18] KVM: x86: Consult guest_memfd when computing max_mapping_level
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
	vannapurve@google.com, ackerleytng@google.com,
	mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
	wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com,
	quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
	quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
	quic_pderrin@quicinc.com, quic_pheragu@quicinc.com,
	catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com,
	oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
	qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
	shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
	rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
	hughd@google.com, jthoughton@google.com, peterx@redhat.com,
	pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com
Content-Type: text/plain; charset="UTF-8"
From: Ackerley Tng <ackerleytng@google.com>

Add kvm_gmem_max_mapping_level(), which always returns PG_LEVEL_4K
since guest_memfd only supports 4K pages for now.

Once guest_memfd supports shared memory, max_mapping_level (especially
when recovering huge pages - see the call to
__kvm_mmu_max_mapping_level() from recover_huge_pages_range()) should
take input from guest_memfd.

Input from guest_memfd should be taken in these cases:

+ if the memslot supports shared memory (guest_memfd is used for shared
  memory, or in the future for both shared and private memory), or

+ if the memslot is only used for private memory and that gfn is
  private.

If the memslot doesn't use guest_memfd, compute max_mapping_level from
the host page tables as before. The resulting decision flow is sketched
below.

Also refactor and inline the other call to
__kvm_mmu_max_mapping_level(): in kvm_mmu_hugepage_adjust(),
guest_memfd's input is already provided (if applicable) in
fault->max_level, so there is no need to query guest_memfd. lpage_info
is queried as before, and then, if the fault is not from guest_memfd,
fault->req_level is adjusted based on input from the host page tables.
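The new kvm_mmu_max_mapping_level() reads as a three-step decision. The
annotated rendering below is an illustrative sketch that mirrors the
diff (helper names are taken from the patch; it is not additional code
to apply). As a worked example of the order-to-level translation used
in step 2: on x86 with 4KiB base pages, KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M)
is 9 and KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) is 18, so
kvm_max_level_for_order() maps an order-9 (2MiB) folio to PG_LEVEL_2M,
an order-18 (1GiB) folio to PG_LEVEL_1G, and anything smaller to
PG_LEVEL_4K.

int kvm_mmu_max_mapping_level(struct kvm *kvm,
			      const struct kvm_memory_slot *slot, gfn_t gfn)
{
	/*
	 * Step 1: lpage_info caps the level for ranges where huge pages
	 * are disallowed (e.g. mixed private/shared attributes).
	 */
	int max_level = kvm_lpage_info_max_mapping_level(kvm, slot, gfn,
							 PG_LEVEL_NUM);

	if (max_level == PG_LEVEL_4K)
		return PG_LEVEL_4K;

	/*
	 * Step 2: consult guest_memfd if the memslot supports shared
	 * memory (first case above), or if the memslot is private-only
	 * and this gfn is currently private (second case above).
	 */
	if (kvm_slot_has_gmem(slot) &&
	    (kvm_gmem_memslot_supports_shared(slot) ||
	     kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE))
		return kvm_gmem_max_mapping_level(slot, gfn, max_level);

	/* Step 3: no guest_memfd involvement, use the host page tables. */
	return min(max_level, host_pfn_mapping_level(kvm, gfn, slot));
}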
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/x86/kvm/mmu/mmu.c   | 87 +++++++++++++++++++++++++---------------
 include/linux/kvm_host.h | 11 +++++
 virt/kvm/guest_memfd.c   | 12 ++++++
 3 files changed, 78 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2aab5a00caee..b31c4750d02e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3258,12 +3258,11 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	return level;
 }
 
-static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+static int kvm_lpage_info_max_mapping_level(struct kvm *kvm,
+					    const struct kvm_memory_slot *slot,
+					    gfn_t gfn, int max_level)
 {
 	struct kvm_lpage_info *linfo;
-	int host_level;
 
 	max_level = min(max_level, max_huge_page_level);
 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3272,28 +3271,61 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 		break;
 	}
 
-	if (is_private)
-		return max_level;
+	return max_level;
+}
+
+static inline u8 kvm_max_level_for_order(int order)
+{
+	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+		return PG_LEVEL_1G;
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+		return PG_LEVEL_2M;
+
+	return PG_LEVEL_4K;
+}
+
+static inline int kvm_gmem_max_mapping_level(const struct kvm_memory_slot *slot,
+					     gfn_t gfn, int max_level)
+{
+	int max_order;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, slot);
-	return min(host_level, max_level);
+	max_order = kvm_gmem_mapping_order(slot, gfn);
+	return min(max_level, kvm_max_level_for_order(max_order));
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_has_gmem(slot) &&
-			  kvm_mem_is_private(kvm, gfn);
+	int max_level;
 
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
+	max_level = kvm_lpage_info_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM);
+	if (max_level == PG_LEVEL_4K)
+		return PG_LEVEL_4K;
+
+	if (kvm_slot_has_gmem(slot) &&
+	    (kvm_gmem_memslot_supports_shared(slot) ||
+	     kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
+		return kvm_gmem_max_mapping_level(slot, gfn, max_level);
+	}
+
+	return min(max_level, host_pfn_mapping_level(kvm, gfn, slot));
 }
 
 static inline bool fault_from_gmem(struct kvm_page_fault *fault)
 {
-	return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot);
+	return fault->is_private ||
+	       (kvm_slot_has_gmem(fault->slot) &&
+		kvm_gmem_memslot_supports_shared(fault->slot));
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -3316,12 +3348,20 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						       fault->gfn, fault->max_level,
-						       fault->is_private);
+	fault->req_level = kvm_lpage_info_max_mapping_level(vcpu->kvm, slot,
+							    fault->gfn, fault->max_level);
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
 
+	if (!fault_from_gmem(fault)) {
+		int host_level;
+
+		host_level = host_pfn_mapping_level(vcpu->kvm, fault->gfn, slot);
+		fault->req_level = min(fault->req_level, host_level);
+		if (fault->req_level == PG_LEVEL_4K)
+			return;
+	}
+
 	/*
 	 * mmu_invalidate_retry() was successful and mmu_lock is held, so
 	 * the pmd can't be split from under us.
@@ -4455,23 +4495,6 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	vcpu->stat.pf_fixed++;
 }
 
-static inline u8 kvm_max_level_for_order(int order)
-{
-	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
-
-	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
-		return PG_LEVEL_1G;
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
-		return PG_LEVEL_2M;
-
-	return PG_LEVEL_4K;
-}
-
 static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
 					    struct kvm_page_fault *fault,
 					    int order)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8f7069385189..58d7761c2a90 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2574,6 +2574,10 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 	return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
+static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
+{
+	return 0;
+}
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
 	return false;
@@ -2584,6 +2588,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
 		     int *max_order);
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2593,6 +2598,12 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
+static inline int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot,
+					 gfn_t gfn)
+{
+	BUILD_BUG();
+	return 0;
+}
 #endif /* CONFIG_KVM_GMEM */
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 73b0aa2bc45f..ebdb2d8bf57a 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -713,6 +713,18 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
+/*
+ * Returns the mapping order for this @gfn in @slot.
+ *
+ * This is equal to max_order that would be returned if kvm_gmem_get_pfn() were
+ * called now.
+ */
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_mapping_order);
+
 #ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
-- 
2.50.0.rc0.642.g800a2b2222-goog