From: Fuad Tabba <tabba@google.com>
Date: Mon, 20 Jan 2025 12:14:50 +0000
To: Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com
Subject: Re: [RFC PATCH v5 06/15] KVM: guest_memfd: Handle final folio_put() of guestmem pages
In-Reply-To: <417ca32d-b7f3-4dc9-8d3f-dc0ba67214ad@suse.cz>
References: <20250117163001.2326672-1-tabba@google.com> <20250117163001.2326672-7-tabba@google.com> <417ca32d-b7f3-4dc9-8d3f-dc0ba67214ad@suse.cz>
Hi Vlastimil,

On Mon, 20 Jan 2025 at 11:37, Vlastimil Babka wrote:
>
> On 1/17/25 17:29, Fuad Tabba wrote:
> > Before transitioning a guest_memfd folio to unshared, thereby
> > disallowing access by the host and allowing the hypervisor to
> > transition its view of the guest page as private, we need to be
> > sure that the host doesn't have any references to the folio.
> >
> > This patch introduces a new type for guest_memfd folios, and uses
> > that to register a callback that informs the guest_memfd
> > subsystem when the last reference is dropped, therefore knowing
> > that the host doesn't have any remaining references.
> >
> > Signed-off-by: Fuad Tabba
> > ---
> > The function kvm_slot_gmem_register_callback() isn't used in this
> > series. It will be used later in code that performs unsharing of
> > memory. I have tested it with pKVM, based on downstream code [*].
> > It's included in this RFC since it demonstrates the plan to
> > handle unsharing of private folios.
> >
> > [*] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/guestmem-6.13-v5-pkvm
> >
>
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -387,6 +387,28 @@ enum folio_mappability {
> >          KVM_GMEM_NONE_MAPPABLE = 0b11, /* Not mappable, transient state. */
> >  };
> >
> > +/*
> > + * Unregisters the __folio_put() callback from the folio.
> > + *
> > + * Restores a folio's refcount after all pending references have been released,
> > + * and removes the folio type, thereby removing the callback. Now the folio can
> > + * be freed normaly once all actual references have been dropped.
> > + *
> > + * Must be called with the filemap (inode->i_mapping) invalidate_lock held.
> > + * Must also have exclusive access to the folio: folio must be either locked, or
> > + * gmem holds the only reference.
> > + */
> > +static void __kvm_gmem_restore_pending_folio(struct folio *folio)
> > +{
> > +        if (WARN_ON_ONCE(folio_mapped(folio) || !folio_test_guestmem(folio)))
> > +                return;
> > +
> > +        WARN_ON_ONCE(!folio_test_locked(folio) && folio_ref_count(folio) > 1);
> Similar to Kirill's objection on the other patch, I think there might be a
> speculative refcount increase (i.e. from a pfn scanner) as long as we have
> refcount over 1. Probably not a problem here if we want to restore refcount
> anyway? But the warning would be spurious.
>
> > +
> > +        __folio_clear_guestmem(folio);
> > +        folio_ref_add(folio, folio_nr_pages(folio));
> > +}
> > +
> >  /*
> >   * Marks the range [start, end) as mappable by both the host and the guest.
> >   * Usually called when guest shares memory with the host.
> > @@ -400,7 +422,31 @@ static int gmem_set_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
> >
> >          filemap_invalidate_lock(inode->i_mapping);
> >          for (i = start; i < end; i++) {
> > +                struct folio *folio = NULL;
> > +
> > +                /*
> > +                 * If the folio is NONE_MAPPABLE, it indicates that it is
> > +                 * transitioning to private (GUEST_MAPPABLE). Transition it to
> > +                 * shared (ALL_MAPPABLE) immediately, and remove the callback.
> > +                 */
> > +                if (xa_to_value(xa_load(mappable_offsets, i)) == KVM_GMEM_NONE_MAPPABLE) {
> > +                        folio = filemap_lock_folio(inode->i_mapping, i);
> > +                        if (WARN_ON_ONCE(IS_ERR(folio))) {
> > +                                r = PTR_ERR(folio);
> > +                                break;
> > +                        }
> > +
> > +                        if (folio_test_guestmem(folio))
> > +                                __kvm_gmem_restore_pending_folio(folio);
> > +                }
> > +
> >                  r = xa_err(xa_store(mappable_offsets, i, xval, GFP_KERNEL));
> > +
> > +                if (folio) {
> > +                        folio_unlock(folio);
> > +                        folio_put(folio);
> > +                }
> > +
> >                  if (r)
> >                          break;
> >          }
> > @@ -473,6 +519,105 @@ static int gmem_clear_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
> >          return r;
> >  }
> >
> > +/*
> > + * Registers a callback to __folio_put(), so that gmem knows that the host does
> > + * not have any references to the folio. It does that by setting the folio type
> > + * to guestmem.
> > + *
> > + * Returns 0 if the host doesn't have any references, or -EAGAIN if the host
> > + * has references, and the callback has been registered.
> Note this comment.
>
> > + *
> > + * Must be called with the following locks held:
> > + *   - filemap (inode->i_mapping) invalidate_lock
> > + *   - folio lock
> > + */
> > +static int __gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t idx)
> > +{
> > +        struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> > +        void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> > +        int refcount;
> > +
> > +        rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
> > +        WARN_ON_ONCE(!folio_test_locked(folio));
> > +
> > +        if (folio_mapped(folio) || folio_test_guestmem(folio))
> > +                return -EAGAIN;
> But here we return -EAGAIN and no callback was registered?

This is intentional. If the folio is still mapped (i.e., its mapcount is
elevated), then we cannot register the callback yet, so the host/vmm needs
to unmap first, then try again.

That said, I see the problem with the comment above, and I will clarify this.

> > +
> > +        /* Register a callback first. */
> > +        __folio_set_guestmem(folio);
> > +
> > +        /*
> > +         * Check for references after setting the type to guestmem, to guard
> > +         * against potential races with the refcount being decremented later.
> > +         *
> > +         * At least one reference is expected because the folio is locked.
> > +         */
> > +        refcount = folio_ref_sub_return(folio, folio_nr_pages(folio));
> > +        if (refcount == 1) {
> > +                int r;
> > +
> > +                /* refcount isn't elevated, it's now faultable by the guest. */
> Again this seems racy, somebody could have just speculatively increased it.
> Maybe we need to freeze here as well?

A speculative increase here is ok I think (famous last words). The callback
was registered before the check, therefore, such an increase would trigger
the callback.

Thanks,
/fuad

> > +                r = WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, idx, xval_guest, GFP_KERNEL)));
> > +                if (!r)
> > +                        __kvm_gmem_restore_pending_folio(folio);
> > +
> > +                return r;
> > +        }
> > +
> > +        return -EAGAIN;
> > +}
> > +
> > +int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
> > +{
> > +        unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
> > +        struct inode *inode = file_inode(slot->gmem.file);
> > +        struct folio *folio;
> > +        int r;
> > +
> > +        filemap_invalidate_lock(inode->i_mapping);
> > +
> > +        folio = filemap_lock_folio(inode->i_mapping, pgoff);
> > +        if (WARN_ON_ONCE(IS_ERR(folio))) {
> > +                r = PTR_ERR(folio);
> > +                goto out;
> > +        }
> > +
> > +        r = __gmem_register_callback(folio, inode, pgoff);
> > +
> > +        folio_unlock(folio);
> > +        folio_put(folio);
> > +out:
> > +        filemap_invalidate_unlock(inode->i_mapping);
> > +
> > +        return r;
> > +}
> > +
> > +/*
> > + * Callback function for __folio_put(), i.e., called when all references by the
> > + * host to the folio have been dropped. This allows gmem to transition the state
> > + * of the folio to mappable by the guest, and allows the hypervisor to continue
> > + * transitioning its state to private, since the host cannot attempt to access
> > + * it anymore.
> > + */
> > +void kvm_gmem_handle_folio_put(struct folio *folio)
> > +{
> > +        struct xarray *mappable_offsets;
> > +        struct inode *inode;
> > +        pgoff_t index;
> > +        void *xval;
> > +
> > +        inode = folio->mapping->host;
> > +        index = folio->index;
> > +        mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> > +        xval = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> > +
> > +        filemap_invalidate_lock(inode->i_mapping);
> > +        __kvm_gmem_restore_pending_folio(folio);
> > +        WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, index, xval, GFP_KERNEL)));
> > +        filemap_invalidate_unlock(inode->i_mapping);
> > +}
> > +
> >  static bool gmem_is_mappable(struct inode *inode, pgoff_t pgoff)
> >  {
> >          struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
>
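
For illustration only (not from the patch or this thread): a rough sketch of
how a caller might act on the -EAGAIN contract discussed above, i.e. unmap on
the host side and retry kvm_slot_gmem_register_callback(). Here
kvm_gmem_unmap_from_host() is a hypothetical placeholder for whatever path the
host/VMM actually uses to drop its mappings.

static int gmem_make_guest_mappable(struct kvm_memory_slot *slot, gfn_t gfn)
{
        int r;

        /* Try to register the __folio_put() callback for this gfn. */
        r = kvm_slot_gmem_register_callback(slot, gfn);
        if (r != -EAGAIN)
                return r;

        /*
         * -EAGAIN with the folio still mapped means no callback was
         * registered; drop the host mappings (hypothetical helper), then
         * try again.
         */
        r = kvm_gmem_unmap_from_host(slot, gfn);
        if (r)
                return r;

        /*
         * A second -EAGAIN now means the callback is registered and the
         * transition to GUEST_MAPPABLE completes from
         * kvm_gmem_handle_folio_put() once the last host reference drops.
         */
        return kvm_slot_gmem_register_callback(slot, gfn);
}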