From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fuad Tabba <tabba@google.com>
Date: Thu, 6 Feb 2025 09:49:01 +0000
Subject: Re: [RFC PATCH v5 06/15] KVM: guest_memfd: Handle final folio_put() of guestmem pages
References: <20250117163001.2326672-7-tabba@google.com>
To: Ackerley Tng
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
	vbabka@suse.cz, vannapurve@google.com, mail@maciej.szmigiero.name,
	david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
	liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
	yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
	will@kernel.org, qperret@google.com, keirf@google.com,
	roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org,
	jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
	fvdl@google.com, hughd@google.com, jthoughton@google.com
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Thu, 6 Feb 2025 at 03:37, Ackerley Tng wrote:
>
> Fuad Tabba writes:
>
> > Before transitioning a guest_memfd folio to unshared, thereby
> > disallowing access by the host and allowing the hypervisor to
> > transition its view of the guest page as private, we need to be
> > sure that the host doesn't have any references to the folio.
> >
> > This patch introduces a new type for guest_memfd folios, and uses
> > that to register a callback that informs the guest_memfd
> > subsystem when the last reference is dropped, therefore knowing
> > that the host doesn't have any remaining references.
> >
> > Signed-off-by: Fuad Tabba
> > ---
> > The function kvm_slot_gmem_register_callback() isn't used in this
> > series. It will be used later in code that performs unsharing of
> > memory. I have tested it with pKVM, based on downstream code [*].
> > It's included in this RFC since it demonstrates the plan to
> > handle unsharing of private folios.
> >
> > [*] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/guestmem-6.13-v5-pkvm
> > ---
> >  include/linux/kvm_host.h   |  11 +++
> >  include/linux/page-flags.h |   7 ++
> >  mm/debug.c                 |   1 +
> >  mm/swap.c                  |   4 +
> >  virt/kvm/guest_memfd.c     | 145 +++++++++++++++++++++++++++++++++++++
> >  5 files changed, 168 insertions(+)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 84aa7908a5dd..63e6d6dd98b3 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -2574,6 +2574,8 @@ int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start,
> >  				 gfn_t end);
> >  bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
> >  bool kvm_slot_gmem_is_guest_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
> > +int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn);
> > +void kvm_gmem_handle_folio_put(struct folio *folio);
> >  #else
> >  static inline bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end)
> >  {
> > @@ -2615,6 +2617,15 @@ static inline bool kvm_slot_gmem_is_guest_mappable(struct kvm_memory_slot *slot,
> >  	WARN_ON_ONCE(1);
> >  	return false;
> >  }
> > +static inline int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
> > +{
> > +	WARN_ON_ONCE(1);
> > +	return -EINVAL;
> > +}
> > +static inline void kvm_gmem_handle_folio_put(struct folio *folio)
> > +{
> > +	WARN_ON_ONCE(1);
> > +}
> >  #endif /* CONFIG_KVM_GMEM_MAPPABLE */
> >
> >  #endif
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index 6615f2f59144..bab3cac1f93b 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -942,6 +942,7 @@ enum pagetype {
> >  	PGTY_slab	= 0xf5,
> >  	PGTY_zsmalloc	= 0xf6,
> >  	PGTY_unaccepted	= 0xf7,
> > +	PGTY_guestmem	= 0xf8,
> >
> >  	PGTY_mapcount_underflow = 0xff
> >  };
> > @@ -1091,6 +1092,12 @@ FOLIO_TYPE_OPS(hugetlb, hugetlb)
> >  FOLIO_TEST_FLAG_FALSE(hugetlb)
> >  #endif
> >
> > +#ifdef CONFIG_KVM_GMEM_MAPPABLE
> > +FOLIO_TYPE_OPS(guestmem, guestmem)
> > +#else
> > +FOLIO_TEST_FLAG_FALSE(guestmem)
> > +#endif
> > +
> >  PAGE_TYPE_OPS(Zsmalloc, zsmalloc, zsmalloc)
> >
> >  /*
> > diff --git a/mm/debug.c b/mm/debug.c
> > index 95b6ab809c0e..db93be385ed9 100644
> > --- a/mm/debug.c
> > +++ b/mm/debug.c
> > @@ -56,6 +56,7 @@ static const char *page_type_names[] = {
> >  	DEF_PAGETYPE_NAME(table),
> >  	DEF_PAGETYPE_NAME(buddy),
> >  	DEF_PAGETYPE_NAME(unaccepted),
> > +	DEF_PAGETYPE_NAME(guestmem),
> >  };
> >
> >  static const char *page_type_name(unsigned int page_type)
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 6f01b56bce13..15220eaabc86 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -37,6 +37,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >
> >  #include "internal.h"
> >
> > @@ -103,6 +104,9 @@ static void free_typed_folio(struct folio *folio)
> >  	case PGTY_offline:
> >  		/* Nothing to do, it's offline. */
> >  		return;
> > +	case PGTY_guestmem:
> > +		kvm_gmem_handle_folio_put(folio);
> > +		return;
> >  	default:
> >  		WARN_ON_ONCE(1);
> >  	}
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index d1c192927cf7..722afd9f8742 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -387,6 +387,28 @@ enum folio_mappability {
> >  	KVM_GMEM_NONE_MAPPABLE	= 0b11, /* Not mappable, transient state. */
> >  };
> >
> > +/*
> > + * Unregisters the __folio_put() callback from the folio.
> > + *
> > + * Restores a folio's refcount after all pending references have been released,
> > + * and removes the folio type, thereby removing the callback. Now the folio can
> > + * be freed normally once all actual references have been dropped.
> > + *
> > + * Must be called with the filemap (inode->i_mapping) invalidate_lock held.
> > + * Must also have exclusive access to the folio: folio must be either locked, or
> > + * gmem holds the only reference.
> > + */
> > +static void __kvm_gmem_restore_pending_folio(struct folio *folio)
> > +{
> > +	if (WARN_ON_ONCE(folio_mapped(folio) || !folio_test_guestmem(folio)))
> > +		return;
> > +
> > +	WARN_ON_ONCE(!folio_test_locked(folio) && folio_ref_count(folio) > 1);
> > +
> > +	__folio_clear_guestmem(folio);
> > +	folio_ref_add(folio, folio_nr_pages(folio));
> > +}
> > +
> >  /*
> >   * Marks the range [start, end) as mappable by both the host and the guest.
> >   * Usually called when guest shares memory with the host.
> > @@ -400,7 +422,31 @@ static int gmem_set_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
> >
> >  	filemap_invalidate_lock(inode->i_mapping);
> >  	for (i = start; i < end; i++) {
> > +		struct folio *folio = NULL;
> > +
> > +		/*
> > +		 * If the folio is NONE_MAPPABLE, it indicates that it is
> > +		 * transitioning to private (GUEST_MAPPABLE). Transition it to
> > +		 * shared (ALL_MAPPABLE) immediately, and remove the callback.
> > +		 */
> > +		if (xa_to_value(xa_load(mappable_offsets, i)) == KVM_GMEM_NONE_MAPPABLE) {
> > +			folio = filemap_lock_folio(inode->i_mapping, i);
> > +			if (WARN_ON_ONCE(IS_ERR(folio))) {
> > +				r = PTR_ERR(folio);
> > +				break;
> > +			}
> > +
> > +			if (folio_test_guestmem(folio))
> > +				__kvm_gmem_restore_pending_folio(folio);
> > +		}
> > +
> >  		r = xa_err(xa_store(mappable_offsets, i, xval, GFP_KERNEL));
> > +
> > +		if (folio) {
> > +			folio_unlock(folio);
> > +			folio_put(folio);
> > +		}
> > +
> >  		if (r)
> >  			break;
> >  	}
> > @@ -473,6 +519,105 @@ static int gmem_clear_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
> >  	return r;
> >  }
> >
>
> I think one of these functions to restore mappability needs to be called
> to restore the refcounts on truncation. Without doing this, the
> refcounts on the folios at truncation time would only be the
> transient/speculative ones, and truncating will take off the filemap
> refcounts which were already taken off to set up the folio_put()
> callback.

Good point.
> Should mappability be restored according to
> GUEST_MEMFD_FLAG_INIT_MAPPABLE? Or should mappability of NONE be
> restored to GUEST and mappability of ALL left as ALL?

Not sure I follow :)

Thanks,
/fuad

> > +/*
> > + * Registers a callback to __folio_put(), so that gmem knows that the host does
> > + * not have any references to the folio. It does that by setting the folio type
> > + * to guestmem.
> > + *
> > + * Returns 0 if the host doesn't have any references, or -EAGAIN if the host
> > + * has references, and the callback has been registered.
> > + *
> > + * Must be called with the following locks held:
> > + * - filemap (inode->i_mapping) invalidate_lock
> > + * - folio lock
> > + */
> > +static int __gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t idx)
> > +{
> > +	struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> > +	void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> > +	int refcount;
> > +
> > +	rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
> > +	WARN_ON_ONCE(!folio_test_locked(folio));
> > +
> > +	if (folio_mapped(folio) || folio_test_guestmem(folio))
> > +		return -EAGAIN;
> > +
> > +	/* Register a callback first. */
> > +	__folio_set_guestmem(folio);
> > +
> > +	/*
> > +	 * Check for references after setting the type to guestmem, to guard
> > +	 * against potential races with the refcount being decremented later.
> > +	 *
> > +	 * At least one reference is expected because the folio is locked.
> > +	 */
> > +
> > +	refcount = folio_ref_sub_return(folio, folio_nr_pages(folio));
> > +	if (refcount == 1) {
> > +		int r;
> > +
> > +		/* refcount isn't elevated, it's now faultable by the guest. */
> > +		r = WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, idx, xval_guest, GFP_KERNEL)));
> > +		if (!r)
> > +			__kvm_gmem_restore_pending_folio(folio);
> > +
> > +		return r;
> > +	}
> > +
> > +	return -EAGAIN;
> > +}
> > +
> > +int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
> > +{
> > +	unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
> > +	struct inode *inode = file_inode(slot->gmem.file);
> > +	struct folio *folio;
> > +	int r;
> > +
> > +	filemap_invalidate_lock(inode->i_mapping);
> > +
> > +	folio = filemap_lock_folio(inode->i_mapping, pgoff);
> > +	if (WARN_ON_ONCE(IS_ERR(folio))) {
> > +		r = PTR_ERR(folio);
> > +		goto out;
> > +	}
> > +
> > +	r = __gmem_register_callback(folio, inode, pgoff);
> > +
> > +	folio_unlock(folio);
> > +	folio_put(folio);
> > +out:
> > +	filemap_invalidate_unlock(inode->i_mapping);
> > +
> > +	return r;
> > +}
> > +
> > +/*
> > + * Callback function for __folio_put(), i.e., called when all references by the
> > + * host to the folio have been dropped. This allows gmem to transition the state
> > + * of the folio to mappable by the guest, and allows the hypervisor to continue
> > + * transitioning its state to private, since the host cannot attempt to access
> > + * it anymore.
> > + */
> > +void kvm_gmem_handle_folio_put(struct folio *folio)
> > +{
> > +	struct xarray *mappable_offsets;
> > +	struct inode *inode;
> > +	pgoff_t index;
> > +	void *xval;
> > +
> > +	inode = folio->mapping->host;
> > +	index = folio->index;
> > +	mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> > +	xval = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> > +
> > +	filemap_invalidate_lock(inode->i_mapping);
> > +	__kvm_gmem_restore_pending_folio(folio);
> > +	WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, index, xval, GFP_KERNEL)));
> > +	filemap_invalidate_unlock(inode->i_mapping);
> > +}
> > +
> >  static bool gmem_is_mappable(struct inode *inode, pgoff_t pgoff)
> >  {
> >  	struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;