From: Fuad Tabba
Date: Wed, 14 May 2025 10:45:45 +0100
Subject: Re: [PATCH v9 07/17] KVM: guest_memfd: Allow host to map guest_memfd() pages
To: Shivank Garg
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
    david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
    liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
    ira.weiny@intel.com
References: <20250513163438.3942405-1-tabba@google.com> <20250513163438.3942405-8-tabba@google.com>
Content-Type: text/plain; charset="UTF-8"

Thanks Shivank,

On Wed, 14 May 2025 at 09:03, Shivank Garg wrote:
>
> On 5/13/2025 10:04 PM, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng
> > Signed-off-by: Ackerley Tng
> > Signed-off-by: Fuad Tabba
> > ---
> >  arch/x86/include/asm/kvm_host.h | 10 ++++
> >  include/linux/kvm_host.h        | 13 +++++
> >  include/uapi/linux/kvm.h        |  1 +
> >  virt/kvm/Kconfig                |  5 ++
> >  virt/kvm/guest_memfd.c          | 88 +++++++++++++++++++++++++++++++++
> >  5 files changed, 117 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 709cc2a7ba66..f72722949cae 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> >
> >  #ifdef CONFIG_KVM_GMEM
> >  #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> > +
> > +/*
> > + * CoCo VMs with hardware support that use guest_memfd only for backing private
> > + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> > + */
> > +#define kvm_arch_vm_supports_gmem_shared_mem(kvm)                      \
> > +        (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&                     \
> > +         ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM ||            \
> > +          (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> >  #else
> >  #define kvm_arch_supports_gmem(kvm) false
> > +#define kvm_arch_vm_supports_gmem_shared_mem(kvm) false
> >  #endif
> >
> >  #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index ae70e4e19700..2ec89c214978 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> >  }
> >  #endif
> >
> > +/*
> > + * Returns true if this VM supports shared mem in guest_memfd.
> > + *
> > + * Arch code must define kvm_arch_vm_supports_gmem_shared_mem if support for
> > + * guest_memfd is enabled.
> > + */
> > +#if !defined(kvm_arch_vm_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> > +static inline bool kvm_arch_vm_supports_gmem_shared_mem(struct kvm *kvm)
> > +{
> > +        return false;
> > +}
> > +#endif
> > +
> >  #ifndef kvm_arch_has_readonly_mem
> >  static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> >  {
> > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > index b6ae8ad8934b..9857022a0f0c 100644
> > --- a/include/uapi/linux/kvm.h
> > +++ b/include/uapi/linux/kvm.h
> > @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> >  #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
> >
> >  #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO,  0xd4, struct kvm_create_guest_memfd)
> > +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1UL << 0)
> >
> >  struct kvm_create_guest_memfd {
> >  	__u64 size;
> > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > index 559c93ad90be..f4e469a62a60 100644
> > --- a/virt/kvm/Kconfig
> > +++ b/virt/kvm/Kconfig
> > @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> >  config HAVE_KVM_ARCH_GMEM_INVALIDATE
> >         bool
> >         depends on KVM_GMEM
> > +
> > +config KVM_GMEM_SHARED_MEM
> > +       select KVM_GMEM
> > +       bool
> > +       prompt "Enables in-place shared memory for guest_memfd"
>
> Hi,
>
> I noticed following warnings with checkpatch.pl:
>
> WARNING: Argument 'kvm' is not used in function-like macro
> #42: FILE: arch/x86/include/asm/kvm_host.h:2269:
> +#define kvm_arch_vm_supports_gmem_shared_mem(kvm) false
>
> WARNING: please write a help paragraph that fully describes the config symbol with at least 4 lines
> #91: FILE: virt/kvm/Kconfig:132:
> +config KVM_GMEM_SHARED_MEM
> +       select KVM_GMEM
> +       bool
> +       prompt "Enables in-place shared memory for guest_memfd"
>
> 0003-KVM-Rename-kvm_arch_has_private_mem-to-kvm_arch_supp.patch
> -----------------------------------------------------------------------------
> WARNING: Argument 'kvm' is not used in function-like macro
> #35: FILE: arch/x86/include/asm/kvm_host.h:2259:
> +#define kvm_arch_supports_gmem(kvm) false
>
> total: 0 errors, 1 warnings, 91 lines checked
>
> Please let me know if these are ignored intentionally - if so, sorry for the noise.

Yes, I did intentionally ignore these. kvm_arch_vm_supports_gmem_shared_mem()
follows the same pattern as kvm_arch_supports_gmem().
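For readers following the checkpatch discussion, here is a minimal sketch of the
pattern in question (hypothetical CONFIG_HAS_FOO/foo names, not code from this
series): the compiled-out variant is a function-like macro that simply discards
its argument, which is what triggers the warning, while a static inline stub
would keep checkpatch quiet at the cost of diverging from the existing style.

    #include <stdbool.h>

    struct kvm;     /* opaque here; fields are only referenced when the option is on */

    #ifdef CONFIG_HAS_FOO
    #define kvm_arch_supports_foo(kvm)      ((kvm)->arch.supports_foo)
    #else
    /* Argument 'kvm' is unused here -> checkpatch's function-like macro warning. */
    #define kvm_arch_supports_foo(kvm)      false
    #endif

    /* Checkpatch-quiet alternative: a static inline stub with the same semantics. */
    static inline bool kvm_arch_supports_foo_stub(struct kvm *kvm)
    {
            return false;
    }

The macro form is the established style for these per-arch capability checks in
KVM, which is the consistency argument being made above.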
As for the comment, I couldn't think of four lines to describe the config
option that's not just fluff :)

Cheers,
/fuad

> Best Regards,
> Shivank
> >
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index 6db515833f61..8e6d1866b55e 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -312,7 +312,88 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> >  	return gfn - slot->base_gfn + slot->gmem.pgoff;
> >  }
> >
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +
> > +static bool kvm_gmem_supports_shared(struct inode *inode)
> > +{
> > +	uint64_t flags = (uint64_t)inode->i_private;
> > +
> > +	return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +}
> > +
> > +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> > +{
> > +	struct inode *inode = file_inode(vmf->vma->vm_file);
> > +	struct folio *folio;
> > +	vm_fault_t ret = VM_FAULT_LOCKED;
> > +
> > +	filemap_invalidate_lock_shared(inode->i_mapping);
> > +
> > +	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> > +	if (IS_ERR(folio)) {
> > +		int err = PTR_ERR(folio);
> > +
> > +		if (err == -EAGAIN)
> > +			ret = VM_FAULT_RETRY;
> > +		else
> > +			ret = vmf_error(err);
> > +
> > +		goto out_filemap;
> > +	}
> > +
> > +	if (folio_test_hwpoison(folio)) {
> > +		ret = VM_FAULT_HWPOISON;
> > +		goto out_folio;
> > +	}
> > +
> > +	if (WARN_ON_ONCE(folio_test_large(folio))) {
> > +		ret = VM_FAULT_SIGBUS;
> > +		goto out_folio;
> > +	}
> > +
> > +	if (!folio_test_uptodate(folio)) {
> > +		clear_highpage(folio_page(folio, 0));
> > +		kvm_gmem_mark_prepared(folio);
> > +	}
> > +
> > +	vmf->page = folio_file_page(folio, vmf->pgoff);
> > +
> > +out_folio:
> > +	if (ret != VM_FAULT_LOCKED) {
> > +		folio_unlock(folio);
> > +		folio_put(folio);
> > +	}
> > +
> > +out_filemap:
> > +	filemap_invalidate_unlock_shared(inode->i_mapping);
> > +
> > +	return ret;
> > +}
> > +
> > +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> > +	.fault = kvm_gmem_fault_shared,
> > +};
> > +
> > +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> > +{
> > +	if (!kvm_gmem_supports_shared(file_inode(file)))
> > +		return -ENODEV;
> > +
> > +	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> > +	    (VM_SHARED | VM_MAYSHARE)) {
> > +		return -EINVAL;
> > +	}
> > +
> > +	vma->vm_ops = &kvm_gmem_vm_ops;
> > +
> > +	return 0;
> > +}
> > +#else
> > +#define kvm_gmem_mmap NULL
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > +
> >  static struct file_operations kvm_gmem_fops = {
> > +	.mmap		= kvm_gmem_mmap,
> >  	.open		= generic_file_open,
> >  	.release	= kvm_gmem_release,
> >  	.fallocate	= kvm_gmem_fallocate,
> > @@ -463,6 +544,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> >  	u64 flags = args->flags;
> >  	u64 valid_flags = 0;
> >
> > +	if (kvm_arch_vm_supports_gmem_shared_mem(kvm))
> > +		valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> >  	if (flags & ~valid_flags)
> >  		return -EINVAL;
> >
> > @@ -501,6 +585,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> >  	    offset + size > i_size_read(inode))
> >  		goto err;
> >
> > +	if (kvm_gmem_supports_shared(inode) &&
> > +	    !kvm_arch_vm_supports_gmem_shared_mem(kvm))
> > +		goto err;
> > +
> >  	filemap_invalidate_lock(inode->i_mapping);
> >
> >  	start = offset >> PAGE_SHIFT;
>
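To make the flow above concrete, a minimal userspace sketch of how the new flag
and the mmap path fit together (illustrative only: the helper name is made up,
and it assumes kernel headers that carry the UAPI additions from this series):

    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    /* Create a guest_memfd that allows shared (host-mappable) memory and map it. */
    static int create_and_map_gmem(int vm_fd, size_t size, void **va)
    {
            struct kvm_create_guest_memfd args = {
                    .size  = size,
                    .flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
            };
            int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);

            if (gmem_fd < 0)
                    return -1;

            /* kvm_gmem_mmap() only accepts shared mappings (VM_SHARED | VM_MAYSHARE). */
            *va = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
            if (*va == MAP_FAILED)
                    return -1;

            return gmem_fd;
    }

Per the quoted kvm_gmem_create() and kvm_gmem_mmap(), the flag is only accepted
when kvm_arch_vm_supports_gmem_shared_mem() is true for the VM, mmap() on a
guest_memfd created without the flag fails with ENODEV, and non-MAP_SHARED
mappings are rejected with EINVAL.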