From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20250328153133.3504118-5-tabba@google.com>
In-Reply-To:
From: Fuad Tabba
Date: Thu, 3 Apr 2025 09:58:47 +0100
Message-ID:
Subject: Re: [PATCH v7 4/7] KVM: guest_memfd: Folio sharing states and
 functions that manage their transition
To: Ackerley Tng
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
 pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, mail@maciej.szmigiero.name, david@redhat.com,
 michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com,
 isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com,
 suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com,
 quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
 quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
 quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com,
 james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
 maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com,
 roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
 rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
 jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com
Content-Type: text/plain; charset="UTF-8"
Hi Ackerley,

On Thu, 3 Apr 2025 at 00:48, Ackerley Tng wrote:
>
> Fuad Tabba writes:
>
> > To allow in-place sharing of guest_memfd folios with the host,
> > guest_memfd needs to track their sharing state, because mapping of
> > shared folios will only be allowed where it is safe to access these
> > folios. It is safe to map and access these folios when explicitly
> > shared with the host, or potentially if not yet exposed to the guest
> > (e.g., at initialization).
> >
> > This patch introduces sharing states for guest_memfd folios as well as
> > the functions that manage transitioning between those states.
> >
> > Signed-off-by: Fuad Tabba
> > ---
> >  include/linux/kvm_host.h |  39 +++++++-
> >  virt/kvm/guest_memfd.c   | 208 ++++++++++++++++++++++++++++++++++++---
> >  virt/kvm/kvm_main.c      |  62 ++++++++++++
> >  3 files changed, 295 insertions(+), 14 deletions(-)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index bc73d7426363..bf82faf16c53 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -2600,7 +2600,44 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
> >  #endif
> >
> >  #ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end);
> > +int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start, gfn_t end);
> > +int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start,
> > +			     gfn_t end);
> > +int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start,
> > +			       gfn_t end);
> > +bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn);
> >  void kvm_gmem_handle_folio_put(struct folio *folio);
> > -#endif
> > +#else
> > +static inline int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end)
> > +{
> > +	WARN_ON_ONCE(1);
> > +	return -EINVAL;
> > +}
> > +static inline int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start,
> > +					gfn_t end)
> > +{
> > +	WARN_ON_ONCE(1);
> > +	return -EINVAL;
> > +}
> > +static inline int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot,
> > +					   gfn_t start, gfn_t end)
> > +{
> > +	WARN_ON_ONCE(1);
> > +	return -EINVAL;
> > +}
> > +static inline int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot,
> > +					     gfn_t start, gfn_t end)
> > +{
> > +	WARN_ON_ONCE(1);
> > +	return -EINVAL;
> > +}
> > +static inline bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot,
> > +						 gfn_t gfn)
> > +{
> > +	WARN_ON_ONCE(1);
> > +	return false;
> > +}
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> >
> >  #endif
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index cde16ed3b230..3b4d724084a8 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -29,14 +29,6 @@ static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode)
> >  	return inode->i_mapping->i_private_data;
> >  }
> >
> > -#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > -void kvm_gmem_handle_folio_put(struct folio *folio)
> > -{
> > -	WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
> > -}
> > -EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
> > -#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > -
> >  /**
> >   * folio_file_pfn - like folio_file_page, but return a pfn.
> >   * @folio: The folio which contains this index.
> > @@ -389,22 +381,211 @@ static void kvm_gmem_init_mount(void)
> >  }
> >
> >  #ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > -static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index)
> > +/*
> > + * An enum of the valid folio sharing states:
> > + * Bit 0: set if not shared with the guest (guest cannot fault it in)
> > + * Bit 1: set if not shared with the host (host cannot fault it in)
> > + */
> > +enum folio_shareability {
> > +	KVM_GMEM_ALL_SHARED	= 0b00,	/* Shared with the host and the guest. */
> > +	KVM_GMEM_GUEST_SHARED	= 0b10,	/* Shared only with the guest. */
> > +	KVM_GMEM_NONE_SHARED	= 0b11,	/* Not shared, transient state. */
> > +};
> > +
> > +static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index)
> >  {
> > -	struct kvm_gmem *gmem = file->private_data;
> > +	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> > +	void *xval = xa_mk_value(KVM_GMEM_ALL_SHARED);
> > +
> > +	lockdep_assert_held_write(offsets_lock);
> > +
> > +	return xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL));
> > +}
> > +
> > +/*
> > + * Marks the range [start, end) as shared with both the host and the guest.
> > + * Called when guest shares memory with the host.
> > + */
> > +static int kvm_gmem_offset_range_set_shared(struct inode *inode,
> > +					    pgoff_t start, pgoff_t end)
> > +{
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> > +	pgoff_t i;
> > +	int r = 0;
> > +
> > +	write_lock(offsets_lock);
> > +	for (i = start; i < end; i++) {
> > +		r = kvm_gmem_offset_set_shared(inode, i);
> > +		if (WARN_ON_ONCE(r))
> > +			break;
> > +	}
> > +	write_unlock(offsets_lock);
> > +
> > +	return r;
> > +}
> > +
> > +static int kvm_gmem_offset_clear_shared(struct inode *inode, pgoff_t index)
> > +{
> > +	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> > +	void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_SHARED);
> > +	void *xval_none = xa_mk_value(KVM_GMEM_NONE_SHARED);
> > +	struct folio *folio;
> > +	int refcount;
> > +	int r;
> > +
> > +	lockdep_assert_held_write(offsets_lock);
> > +
> > +	folio = filemap_lock_folio(inode->i_mapping, index);
> > +	if (!IS_ERR(folio)) {
> > +		/* +1 references are expected because of filemap_lock_folio(). */
> > +		refcount = folio_nr_pages(folio) + 1;
> > +	} else {
> > +		r = PTR_ERR(folio);
> > +		if (WARN_ON_ONCE(r != -ENOENT))
> > +			return r;
> > +
> > +		folio = NULL;
> > +	}
> > +
> > +	if (!folio || folio_ref_freeze(folio, refcount)) {
> > +		/*
> > +		 * No outstanding references: transition to guest shared.
> > +		 */
> > +		r = xa_err(xa_store(shared_offsets, index, xval_guest, GFP_KERNEL));
> > +
> > +		if (folio)
> > +			folio_ref_unfreeze(folio, refcount);
> > +	} else {
> > +		/*
> > +		 * Outstanding references: the folio cannot be faulted in by
> > +		 * anyone until they're dropped.
> > +		 */
> > +		r = xa_err(xa_store(shared_offsets, index, xval_none, GFP_KERNEL));
>
> Once we do this on elevated refcounts, truncate needs to be updated to
> handle the case where some folio is still in a KVM_GMEM_NONE_SHARED
> state.
>
> When a folio is found in a KVM_GMEM_NONE_SHARED state, the shareability
> should be fast-forwarded to KVM_GMEM_GUEST_SHARED, and the filemap's
> refcounts restored. The folio can then be truncated from the filemap as
> usual (which will drop the filemap's refcounts).

Ack.

Thanks,
/fuad
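A minimal sketch of that fast-forward, assuming a hypothetical helper called
from the truncation path with offsets_lock held for write; the helper name and
the exact way the filemap's references are given back are assumptions, not
part of this patch:

/*
 * Illustration only: before truncating @index, fold a KVM_GMEM_NONE_SHARED
 * entry back to KVM_GMEM_GUEST_SHARED and hand the folio back the references
 * the filemap is expected to hold, so the normal truncation path can drop
 * them as usual.
 */
static void kvm_gmem_fast_forward_for_truncate(struct inode *inode,
					       pgoff_t index,
					       struct folio *folio)
{
	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
	void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_SHARED);
	unsigned long state;

	lockdep_assert_held_write(offsets_lock);

	state = xa_to_value(xa_load(shared_offsets, index));
	if (state != KVM_GMEM_NONE_SHARED)
		return;

	/* Shareability goes straight back to guest-shared... */
	WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval_guest,
				     GFP_KERNEL)));

	/* ...and the filemap's references are restored (assumed: one per page). */
	folio_ref_add(folio, folio_nr_pages(folio));
}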
> > +	}
> > +
> > +	if (folio) {
> > +		folio_unlock(folio);
> > +		folio_put(folio);
> > +	}
> > +
> > +	return r;
> > +}
> >
> > +/*
> > + * Marks the range [start, end) as not shared with the host. If the host doesn't
> > + * have any references to a particular folio, then that folio is marked as
> > + * shared with the guest.
> > + *
> > + * However, if the host still has references to the folio, then the folio is
> > + * marked as not shared with anyone. Marking it as not shared allows draining
> > + * all references from the host, and ensures that the hypervisor does not
> > + * transition the folio to private, since the host still might access it.
> > + *
> > + * Called when guest unshares memory with the host.
> > + */
> > +static int kvm_gmem_offset_range_clear_shared(struct inode *inode,
> > +					      pgoff_t start, pgoff_t end)
> > +{
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> > +	pgoff_t i;
> > +	int r = 0;
> > +
> > +	write_lock(offsets_lock);
> > +	for (i = start; i < end; i++) {
> > +		r = kvm_gmem_offset_clear_shared(inode, i);
> > +		if (WARN_ON_ONCE(r))
> > +			break;
> > +	}
> > +	write_unlock(offsets_lock);
> > +
> > +	return r;
> > +}
> > +
> > +void kvm_gmem_handle_folio_put(struct folio *folio)
> > +{
> > +	WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
> > +
> > +/*
> > + * Returns true if the folio is shared with the host and the guest.
> > + *
> > + * Must be called with the offsets_lock lock held.
> > + */
> > +static bool kvm_gmem_offset_is_shared(struct inode *inode, pgoff_t index)
> > +{
> > +	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> > +	unsigned long r;
> > +
> > +	lockdep_assert_held(offsets_lock);
> >
> > -	/* For now, VMs that support shared memory share all their memory. */
> > -	return kvm_arch_gmem_supports_shared_mem(gmem->kvm);
> > +	r = xa_to_value(xa_load(shared_offsets, index));
> > +
> > +	return r == KVM_GMEM_ALL_SHARED;
> > +}
> > +
> > +/*
> > + * Returns true if the folio is shared with the guest (not transitioning).
> > + *
> > + * Must be called with the offsets_lock lock held.
> > + */
> > +static bool kvm_gmem_offset_is_guest_shared(struct inode *inode, pgoff_t index)
> > +{
> > +	struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> > +	unsigned long r;
> > +
> > +	lockdep_assert_held(offsets_lock);
> > +
> > +	r = xa_to_value(xa_load(shared_offsets, index));
> > +
> > +	return (r == KVM_GMEM_ALL_SHARED || r == KVM_GMEM_GUEST_SHARED);
> > +}
> > +
> > +int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
> > +{
> > +	struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
> > +	pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn;
> > +	pgoff_t end_off = start_off + end - start;
> > +
> > +	return kvm_gmem_offset_range_set_shared(inode, start_off, end_off);
> > +}
> > +
> > +int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
> > +{
> > +	struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
> > +	pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn;
> > +	pgoff_t end_off = start_off + end - start;
> > +
> > +	return kvm_gmem_offset_range_clear_shared(inode, start_off, end_off);
> > +}
> > +
> > +bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn)
> > +{
> > +	struct inode *inode = file_inode(READ_ONCE(slot->gmem.file));
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> > +	unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
> > +	bool r;
> > +
> > +	read_lock(offsets_lock);
> > +	r = kvm_gmem_offset_is_guest_shared(inode, pgoff);
> > +	read_unlock(offsets_lock);
> > +
> > +	return r;
> >  }
> >
> >  static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
> >  {
> >  	struct inode *inode = file_inode(vmf->vma->vm_file);
> > +	rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock;
> >  	struct folio *folio;
> >  	vm_fault_t ret = VM_FAULT_LOCKED;
> >
> >  	filemap_invalidate_lock_shared(inode->i_mapping);
> > +	read_lock(offsets_lock);
> >
> >  	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> >  	if (IS_ERR(folio)) {
> > @@ -423,7 +604,7 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
> >  		goto out_folio;
> >  	}
> >
> > -	if (!kvm_gmem_offset_is_shared(vmf->vma->vm_file, vmf->pgoff)) {
> > +	if (!kvm_gmem_offset_is_shared(inode, vmf->pgoff)) {
> >  		ret = VM_FAULT_SIGBUS;
> >  		goto out_folio;
> >  	}
> > @@ -457,6 +638,7 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
> >  	}
> >
> >  out_filemap:
> > +	read_unlock(offsets_lock);
> >  	filemap_invalidate_unlock_shared(inode->i_mapping);
> >
> >  	return ret;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 3e40acb9f5c0..90762252381c 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -3091,6 +3091,68 @@ static int next_segment(unsigned long len, int offset)
> >  	return len;
> >  }
> >
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end)
> > +{
> > +	struct kvm_memslot_iter iter;
> > +	int r = 0;
> > +
> > +	mutex_lock(&kvm->slots_lock);
> > +
> > +	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
> > +		struct kvm_memory_slot *memslot = iter.slot;
> > +		gfn_t gfn_start, gfn_end;
> > +
> > +		if (!kvm_slot_can_be_private(memslot))
> > +			continue;
> > +
> > +		gfn_start = max(start, memslot->base_gfn);
> > +		gfn_end = min(end, memslot->base_gfn + memslot->npages);
> > +		if (WARN_ON_ONCE(start >= end))
> > +			continue;
> > +
> > +		r = kvm_gmem_slot_set_shared(memslot, gfn_start, gfn_end);
> > +		if (WARN_ON_ONCE(r))
> > +			break;
> > +	}
> > +
> > +	mutex_unlock(&kvm->slots_lock);
> > +
> > +	return r;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_gmem_set_shared);
> > +
> > +int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start, gfn_t end)
> > +{
> > +	struct kvm_memslot_iter iter;
> > +	int r = 0;
> > +
> > +	mutex_lock(&kvm->slots_lock);
> > +
> > +	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
> > +		struct kvm_memory_slot *memslot = iter.slot;
> > +		gfn_t gfn_start, gfn_end;
> > +
> > +		if (!kvm_slot_can_be_private(memslot))
> > +			continue;
> > +
> > +		gfn_start = max(start, memslot->base_gfn);
> > +		gfn_end = min(end, memslot->base_gfn + memslot->npages);
> > +		if (WARN_ON_ONCE(start >= end))
> > +			continue;
> > +
> > +		r = kvm_gmem_slot_clear_shared(memslot, gfn_start, gfn_end);
> > +		if (WARN_ON_ONCE(r))
> > +			break;
> > +	}
> > +
> > +	mutex_unlock(&kvm->slots_lock);
> > +
> > +	return r;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_gmem_clear_shared);
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > +
> >  /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
> >  static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
> >  				 void *data, int offset, int len)
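For context, a hedged sketch of how the range API exported above might be
driven from an arch-specific handler for a guest "share memory" request; the
handler name and calling convention are illustrative assumptions, not part of
the patch:

/*
 * Illustration only: convert a guest-physical range to GFNs and mark it as
 * shared with the host via kvm_gmem_set_shared(); an unshare request would
 * call kvm_gmem_clear_shared() over the same range instead.
 */
static int example_handle_guest_share(struct kvm *kvm, gpa_t gpa, u64 size)
{
	gfn_t start = gpa >> PAGE_SHIFT;
	gfn_t end = start + (size >> PAGE_SHIFT);

	return kvm_gmem_set_shared(kvm, start, end);
}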