From mboxrd@z Thu Jan 1 00:00:00 1970
From: Fuad Tabba <tabba@google.com>
Date: Thu, 6 Feb 2025 09:47:19 +0000
Subject: Re: [RFC PATCH v5 06/15] KVM: guest_memfd: Handle final folio_put() of guestmem pages
To: Ackerley Tng
Cc: vbabka@suse.cz, kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-mm@kvack.org, pbonzini@redhat.com, chenhuacai@kernel.org,
	mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com,
	palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org,
	akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com,
	chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com,
	dmatlack@google.com, yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com,
	mic@digikod.net, vannapurve@google.com, mail@maciej.szmigiero.name,
	david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
	liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
	yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
	will@kernel.org, qperret@google.com, keirf@google.com,
	roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
	rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
	hughd@google.com, jthoughton@google.com
Content-Type: text/plain; charset="UTF-8"

On Thu, 6 Feb 2025 at 03:28, Ackerley Tng wrote:
>
> Fuad Tabba writes:
>
> > On Wed, 22 Jan 2025 at 22:24, Ackerley Tng wrote:
> >>
> >> Fuad Tabba writes:
> >>
> >> >> >
> >> >> >
> >> >> > +/*
> >> >> > + * Registers a callback to __folio_put(), so that gmem knows that the host does
> >> >> > + * not have any references to the folio. It does that by setting the folio type
> >> >> > + * to guestmem.
> >> >> > + *
> >> >> > + * Returns 0 if the host doesn't have any references, or -EAGAIN if the host
> >> >> > + * has references, and the callback has been registered.
> >> >>
> >> >> Note this comment.
> >> >>
> >> >> > + *
> >> >> > + * Must be called with the following locks held:
> >> >> > + * - filemap (inode->i_mapping) invalidate_lock
> >> >> > + * - folio lock
> >> >> > + */
> >> >> > +static int __gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t idx)
> >> >> > +{
> >> >> > +        struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> >> >> > +        void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> >> >> > +        int refcount;
> >> >> > +
> >> >> > +        rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
> >> >> > +        WARN_ON_ONCE(!folio_test_locked(folio));
> >> >> > +
> >> >> > +        if (folio_mapped(folio) || folio_test_guestmem(folio))
> >> >> > +                return -EAGAIN;
> >> >>
> >> >> But here we return -EAGAIN and no callback was registered?
> >> >
> >> > This is intentional. If the folio is still mapped (i.e., its mapcount
> >> > is elevated), then we cannot register the callback yet, so the
> >> > host/vmm needs to unmap first, then try again. That said, I see the
> >> > problem with the comment above, and I will clarify this.
> >> >
> >> >> > +
> >> >> > +        /* Register a callback first. */
> >> >> > +        __folio_set_guestmem(folio);
> >> >> > +
> >> >> > +        /*
> >> >> > +         * Check for references after setting the type to guestmem, to guard
> >> >> > +         * against potential races with the refcount being decremented later.
> >> >> > +         *
> >> >> > +         * At least one reference is expected because the folio is locked.
> >> >> > +         */
> >> >> > +
> >> >> > +        refcount = folio_ref_sub_return(folio, folio_nr_pages(folio));
> >> >> > +        if (refcount == 1) {
> >> >> > +                int r;
> >> >> > +
> >> >> > +                /* refcount isn't elevated, it's now faultable by the guest. */
> >> >>
> >> >> Again this seems racy, somebody could have just speculatively increased it.
> >> >> Maybe we need to freeze here as well?
> >> >
> >> > A speculative increase here is ok I think (famous last words). The
> >> > callback was registered before the check, therefore, such an increase
> >> > would trigger the callback.
> >> >
> >> > Thanks,
> >> > /fuad
> >> >
> >> >
> >>
> >> I checked the callback (kvm_gmem_handle_folio_put()) and agree with you
> >> that the mappability reset to KVM_GMEM_GUEST_MAPPABLE is handled
> >> correctly (since kvm_gmem_handle_folio_put() doesn't assume anything
> >> about the mappability state at callback-time).
> >>
> >> However, what if the new speculative reference writes to the page and
> >> the guest goes on to fault/use the page?
> >
> > I don't think that's a problem. At this point the page is in a
> > transient state, but still shared from the guest's point of view.
> > Moreover, no one can fault-in the page at the host at this point (we
> > check in kvm_gmem_fault()).
> >
> > Let's have a look at the code:
> >
> > +static int __gmem_register_callback(struct folio *folio, struct inode
> > *inode, pgoff_t idx)
> > +{
> > +        struct xarray *mappable_offsets =
> > &kvm_gmem_private(inode)->mappable_offsets;
> > +        void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> > +        int refcount;
> >
> > At this point the guest still perceives the page as shared, and the state
> > of the page is KVM_GMEM_NONE_MAPPABLE (a transient state). This means
> > that kvm_gmem_fault() doesn't fault-in the page at the host anymore.
> >
> > +        rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
> > +        WARN_ON_ONCE(!folio_test_locked(folio));
> > +
> > +        if (folio_mapped(folio) || folio_test_guestmem(folio))
> > +                return -EAGAIN;
> > +
> > +        /* Register a callback first. */
> > +        __folio_set_guestmem(folio);
> >
> > This (in addition to the NONE_MAPPABLE state) also ensures that
> > kvm_gmem_fault() doesn't fault-in the page at the host anymore.
> >
> > +        /*
> > +         * Check for references after setting the type to guestmem, to guard
> > +         * against potential races with the refcount being decremented later.
> > +         *
> > +         * At least one reference is expected because the folio is locked.
> > +         */
> > +
> > +        refcount = folio_ref_sub_return(folio, folio_nr_pages(folio));
> > +        if (refcount == 1) {
> > +                int r;
> >
> > At this point we know that guest_memfd has the only real reference.
> > Speculative references AFAIK do not access the page itself.
> >
> > +
> > +                /* refcount isn't elevated, it's now faultable by the guest. */
> > +                r = WARN_ON_ONCE(xa_err(xa_store(mappable_offsets,
> > idx, xval_guest, GFP_KERNEL)));
> >
> > Now it's safe, so let the guest know that it can map the page.
> >
> > +                if (!r)
> > +                        __kvm_gmem_restore_pending_folio(folio);
> > +
> > +                return r;
> > +        }
> > +
> > +        return -EAGAIN;
> > +}
> >
> > Does this make sense, or did I miss something?
>
> Thanks for explaining! I don't know enough to confirm/deny this but I agree
> that if speculative references don't access the page itself, this works.
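
To make that ordering concrete, here is a minimal, self-contained userspace
sketch of the same "publish the marker first, then drop and check" pattern,
using C11 atomics rather than the kernel's folio API. All names here
(fake_page, register_callback, put_page_ref, last_put_callback) are
illustrative stand-ins, not the guest_memfd code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
        atomic_int refcount;        /* stands in for the folio refcount */
        atomic_bool callback_set;   /* stands in for the guestmem folio type */
};

/* Fires on the final reference drop, like kvm_gmem_handle_folio_put(). */
static void last_put_callback(struct fake_page *p)
{
        (void)p;
        printf("last host reference dropped; page may become guest-mappable\n");
}

static void put_page_ref(struct fake_page *p)
{
        /* atomic_fetch_sub() returns the value *before* the subtraction. */
        if (atomic_fetch_sub(&p->refcount, 1) == 1 &&
            atomic_load(&p->callback_set))
                last_put_callback(p);
}

/*
 * Mirrors the ordering in __gmem_register_callback(): publish the marker
 * *before* dropping our reference, so a racing put_page_ref() cannot miss
 * the callback. Returns 0 if we held the last reference (direct path),
 * -1 if other references remain (the caller would see -EAGAIN).
 */
static int register_callback(struct fake_page *p)
{
        atomic_store(&p->callback_set, true);   /* register first */

        if (atomic_fetch_sub(&p->refcount, 1) == 1)
                return 0;   /* no other holders: flip mappability directly */

        return -1;  /* remaining holders: last_put_callback() will finish */
}

int main(void)
{
        struct fake_page p = { .refcount = 1, .callback_set = false };

        if (register_callback(&p) == 0)
                printf("direct path: no elevated refcount\n");
        return 0;
}

Whichever side performs the final decrement observes the already-published
marker and completes the transition, which is why registering before checking
closes the race being discussed here.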
>
> What if over here, we just drop the refcount, and let setting mappability to
> GUEST happen in the folio_put() callback?

Similar to what I mentioned in the other thread, the common case should
be that the mapcount and refcount are not elevated, so I think it's
better not to go through the callback route unless it's necessary for
correctness. (The state transitions this implies are summarised in the
sketch at the end of this mail.)

Cheers,
/fuad

> >
> > Thanks!
> > /fuad
> >
> >> >> > +                r = WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, idx, xval_guest, GFP_KERNEL)));
> >> >> > +                if (!r)
> >> >> > +                        __kvm_gmem_restore_pending_folio(folio);
> >> >> > +
> >> >> > +                return r;
> >> >> > +        }
> >> >> > +
> >> >> > +        return -EAGAIN;
> >> >> > +}
> >> >> > +
> >> >> > +int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
> >> >> > +{
> >> >> > +        unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
> >> >> > +        struct inode *inode = file_inode(slot->gmem.file);
> >> >> > +        struct folio *folio;
> >> >> > +        int r;
> >> >> > +
> >> >> > +        filemap_invalidate_lock(inode->i_mapping);
> >> >> > +
> >> >> > +        folio = filemap_lock_folio(inode->i_mapping, pgoff);
> >> >> > +        if (WARN_ON_ONCE(IS_ERR(folio))) {
> >> >> > +                r = PTR_ERR(folio);
> >> >> > +                goto out;
> >> >> > +        }
> >> >> > +
> >> >> > +        r = __gmem_register_callback(folio, inode, pgoff);
> >> >> > +
> >> >> > +        folio_unlock(folio);
> >> >> > +        folio_put(folio);
> >> >> > +out:
> >> >> > +        filemap_invalidate_unlock(inode->i_mapping);
> >> >> > +
> >> >> > +        return r;
> >> >> > +}
> >> >> > +
> >> >> > +/*
> >> >> > + * Callback function for __folio_put(), i.e., called when all references by the
> >> >> > + * host to the folio have been dropped. This allows gmem to transition the state
> >> >> > + * of the folio to mappable by the guest, and allows the hypervisor to continue
> >> >> > + * transitioning its state to private, since the host cannot attempt to access
> >> >> > + * it anymore.
> >> >> > + */
> >> >> > +void kvm_gmem_handle_folio_put(struct folio *folio)
> >> >> > +{
> >> >> > +        struct xarray *mappable_offsets;
> >> >> > +        struct inode *inode;
> >> >> > +        pgoff_t index;
> >> >> > +        void *xval;
> >> >> > +
> >> >> > +        inode = folio->mapping->host;
> >> >> > +        index = folio->index;
> >> >> > +        mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> >> >> > +        xval = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> >> >> > +
> >> >> > +        filemap_invalidate_lock(inode->i_mapping);
> >> >> > +        __kvm_gmem_restore_pending_folio(folio);
> >> >> > +        WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, index, xval, GFP_KERNEL)));
> >> >> > +        filemap_invalidate_unlock(inode->i_mapping);
> >> >> > +}
> >> >> > +
> >> >> > static bool gmem_is_mappable(struct inode *inode, pgoff_t pgoff)
> >> >> > {
> >> >> >         struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> >> >>
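
As a closing note for readers following along: the conversion flow in this
thread can be summarised with a toy state model. The enum values below are
illustrative stand-ins for the KVM_GMEM_*_MAPPABLE xarray entries mentioned
above, not the kernel's actual definitions:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the mappability xarray values. */
enum mappability {
        MAP_HOST,   /* shared: kvm_gmem_fault() may fault the page in  */
        MAP_NONE,   /* transient: conversion to private is in flight   */
        MAP_GUEST,  /* only the guest may map the page                 */
};

/* Analogue of the fault-time check described in the thread. */
static bool host_may_fault(enum mappability s)
{
        return s == MAP_HOST;
}

/*
 * The transient state is resolved either directly in
 * __gmem_register_callback() (the common case described above) or later,
 * when the final reference drop runs kvm_gmem_handle_folio_put().
 */
static enum mappability on_last_put(enum mappability s)
{
        return s == MAP_NONE ? MAP_GUEST : s;
}

int main(void)
{
        enum mappability s = MAP_HOST;

        s = MAP_NONE;                           /* conversion requested   */
        printf("host faultable: %d\n", host_may_fault(s));     /* 0 */

        s = on_last_put(s);                     /* final reference drop   */
        printf("guest mappable: %d\n", s == MAP_GUEST);        /* 1 */
        return 0;
}

The key property is that the transient state blocks host faults for the
whole conversion window, regardless of whether the final transition happens
on the direct path or in the folio_put() callback.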