From: Ackerley Tng
To: Fuad Tabba, Yan Zhao
Cc: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, linux-fsdevel@vger.kernel.org, aik@amd.com,
    ajones@ventanamicro.com, akpm@linux-foundation.org, amoorthy@google.com,
    anthony.yznaga@oracle.com, anup@brainfault.org, aou@eecs.berkeley.edu,
    bfoster@redhat.com, binbin.wu@linux.intel.com, brauner@kernel.org,
    catalin.marinas@arm.com, chao.p.peng@intel.com, chenhuacai@kernel.org,
    dave.hansen@intel.com, david@redhat.com, dmatlack@google.com,
    dwmw@amazon.co.uk, erdemaktas@google.com, fan.du@intel.com,
    fvdl@google.com, graf@amazon.com, haibo1.xu@intel.com, hch@infradead.org,
    hughd@google.com, ira.weiny@intel.com, isaku.yamahata@intel.com,
    jack@suse.cz, james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca,
    jgowans@amazon.com, jhubbard@nvidia.com, jroedel@suse.de,
    jthoughton@google.com, jun.miao@intel.com, kai.huang@intel.com,
    keirf@google.com, kent.overstreet@linux.dev, kirill.shutemov@intel.com,
    liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
    mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
    michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
    nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
    palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
    pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
    pgonda@google.com, pvorel@suse.cz, qperret@google.com,
    quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
    quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
    quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
    rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
    rppt@kernel.org,
    seanjc@google.com, shuah@kernel.org, steven.price@arm.com,
    steven.sistare@oracle.com, suzuki.poulose@arm.com,
    thomas.lendacky@amd.com, usama.arif@bytedance.com,
    vannapurve@google.com, vbabka@suse.cz, viro@zeniv.linux.org.uk,
    vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
    willy@infradead.org, xiaoyao.li@intel.com, yilun.xu@intel.com,
    yuzenghui@huawei.com, zhiquan1.li@intel.com
Date: Fri, 30 May 2025 11:32:04 -0700
Subject: Re: [RFC PATCH v2 02/51] KVM: guest_memfd: Introduce and use shareability to guard faulting

Fuad Tabba writes:

> Hi,
>
> .. snip..
>
>> I noticed that in [1], kvm_gmem_mmap() does not check the range.
>> So the WARN() here can be hit when userspace mmap()s an area larger
>> than the inode size and then accesses the out-of-range HVA.
>>
>> Maybe limit the mmap() range?
>>
>> @@ -1609,6 +1620,10 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
>>  	if (!kvm_gmem_supports_shared(file_inode(file)))
>>  		return -ENODEV;
>>
>> +	if (vma->vm_end - vma->vm_start + (vma->vm_pgoff << PAGE_SHIFT) > i_size_read(file_inode(file)))
>> +		return -EINVAL;
>> +
>>  	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
>>  	    (VM_SHARED | VM_MAYSHARE)) {
>>  		return -EINVAL;
>>
>> [1] https://lore.kernel.org/all/20250513163438.3942405-8-tabba@google.com/
>
> I don't think we want to do that, for a couple of reasons. We catch
> such invalid accesses on faulting, and, by analogy, AFAICT, neither
> secretmem nor memfd performs a similar check on mmap() (nor do
> memory-mapped files in general).
>
> There are also valid reasons why a user would want to deliberately
> mmap more memory than the backing store, knowing that it's only going
> to fault in what it's going to use, e.g., for alignment.

This is a good point. I don't think there is any check against the
inode size on the faulting path right now, though: in v10 [1],
kvm_gmem_fault_shared() calls kvm_gmem_get_folio() straight away. We
should add a check like filemap_fault()'s size check [2] to
kvm_gmem_fault_shared().

[1] https://lore.kernel.org/all/20250513163438.3942405-8-tabba@google.com/
[2] https://github.com/torvalds/linux/blob/8477ab143069c6b05d6da4a8184ded8b969240f5/mm/filemap.c#L3373
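Something along these lines, perhaps (untested sketch mirroring the
filemap_fault() check in [2]; note that filemap_fault() also re-checks
the size under the folio lock to handle races with truncation, which a
complete fix would need as well):

static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	pgoff_t max_idx;

	/* Refuse faults beyond i_size, as filemap_fault() does. */
	max_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
	if (unlikely(vmf->pgoff >= max_idx))
		return VM_FAULT_SIGBUS;

	/* ... rest of the shared fault path as in v10 ... */
}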
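That would give shared faults the same userspace-visible behavior as
plain memfd today: the oversized mmap() succeeds, and only touching a
page past EOF fails. For reference, the memfd behavior looks like this
(userspace sketch against a regular memfd, error handling omitted):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = memfd_create("demo", 0);
	char *p;

	ftruncate(fd, page);	/* file size: one page */

	/* Mapping two pages over a one-page file succeeds... */
	p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	p[0] = 1;	/* ...and a fault within i_size is fine, */
	p[page] = 1;	/* but a fault past EOF raises SIGBUS. */
	return 0;
}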
> Cheers,
> /fuad

>> > +	return xa_to_value(entry);
>> > +}
>> > +
>> > +static struct folio *kvm_gmem_get_shared_folio(struct inode *inode, pgoff_t index)
>> > +{
>> > +	if (kvm_gmem_shareability_get(inode, index) != SHAREABILITY_ALL)
>> > +		return ERR_PTR(-EACCES);
>> > +
>> > +	return kvm_gmem_get_folio(inode, index);
>> > +}
>> > +
>> > +#else
>> > +
>> > +static int kvm_gmem_shareability_setup(struct maple_tree *mt, loff_t size, u64 flags)
>> > +{
>> > +	return 0;
>> > +}
>> > +
>> > +static inline struct folio *kvm_gmem_get_shared_folio(struct inode *inode, pgoff_t index)
>> > +{
>> > +	WARN_ONCE(1, "Unexpected call to get shared folio.");
>> > +	return NULL;
>> > +}
>> > +
>> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
>> > +
>> >  static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
>> >  				    pgoff_t index, struct folio *folio)
>> >  {
>> > @@ -333,7 +404,7 @@ static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
>> >
>> >  	filemap_invalidate_lock_shared(inode->i_mapping);
>> >
>> > -	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
>> > +	folio = kvm_gmem_get_shared_folio(inode, vmf->pgoff);
>> >  	if (IS_ERR(folio)) {
>> >  		int err = PTR_ERR(folio);
>> >
>> > @@ -420,8 +491,33 @@ static struct file_operations kvm_gmem_fops = {
>> >  	.fallocate = kvm_gmem_fallocate,
>> >  };
>> >
>> > +static void kvm_gmem_free_inode(struct inode *inode)
>> > +{
>> > +	struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
>> > +
>> > +	kfree(private);
>> > +
>> > +	free_inode_nonrcu(inode);
>> > +}
>> > +
>> > +static void kvm_gmem_destroy_inode(struct inode *inode)
>> > +{
>> > +	struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
>> > +
>> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
>> > +	/*
>> > +	 * mtree_destroy() can't be used within rcu callback, hence can't be
>> > +	 * done in ->free_inode().
>> > +	 */
>> > +	if (private)
>> > +		mtree_destroy(&private->shareability);
>> > +#endif
>> > +}
>> > +
>> >  static const struct super_operations kvm_gmem_super_operations = {
>> >  	.statfs		= simple_statfs,
>> > +	.destroy_inode	= kvm_gmem_destroy_inode,
>> > +	.free_inode	= kvm_gmem_free_inode,
>> >  };
>> >
>> >  static int kvm_gmem_init_fs_context(struct fs_context *fc)
>> > @@ -549,12 +645,26 @@ static const struct inode_operations kvm_gmem_iops = {
>> >  static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
>> >  						      loff_t size, u64 flags)
>> >  {
>> > +	struct kvm_gmem_inode_private *private;
>> >  	struct inode *inode;
>> > +	int err;
>> >
>> >  	inode = alloc_anon_secure_inode(kvm_gmem_mnt->mnt_sb, name);
>> >  	if (IS_ERR(inode))
>> >  		return inode;
>> >
>> > +	err = -ENOMEM;
>> > +	private = kzalloc(sizeof(*private), GFP_KERNEL);
>> > +	if (!private)
>> > +		goto out;
>> > +
>> > +	mt_init(&private->shareability);
>>
>> Wrap the mt_init() inside "#ifdef CONFIG_KVM_GMEM_SHARED_MEM"?
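One way to do that without an #ifdef at the call site would be a stub
helper (sketch only; kvm_gmem_shareability_init() is a hypothetical
name, not something in this series):

#ifdef CONFIG_KVM_GMEM_SHARED_MEM
static void kvm_gmem_shareability_init(struct kvm_gmem_inode_private *private)
{
	mt_init(&private->shareability);
}
#else
static void kvm_gmem_shareability_init(struct kvm_gmem_inode_private *private)
{
	/* No shareability tracking when shared gmem is compiled out. */
}
#endif

kvm_gmem_inode_make_secure_inode() could then call
kvm_gmem_shareability_init(private) unconditionally.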
>>
>> > +	inode->i_mapping->i_private_data = private;
>> > +
>> > +	err = kvm_gmem_shareability_setup(private, size, flags);
>> > +	if (err)
>> > +		goto out;
>> > +
>> >  	inode->i_private = (void *)(unsigned long)flags;
>> >  	inode->i_op = &kvm_gmem_iops;
>> >  	inode->i_mapping->a_ops = &kvm_gmem_aops;
>> > @@ -566,6 +676,11 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
>> >  	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
>> >
>> >  	return inode;
>> > +
>> > +out:
>> > +	iput(inode);
>> > +
>> > +	return ERR_PTR(err);
>> >  }
>> >
>> >  static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
>> > @@ -654,6 +769,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
>> >  	if (kvm_arch_vm_supports_gmem_shared_mem(kvm))
>> >  		valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
>> >
>> > +	if (flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED)
>> > +		valid_flags |= GUEST_MEMFD_FLAG_INIT_PRIVATE;
>> > +
>> >  	if (flags & ~valid_flags)
>> >  		return -EINVAL;
>> >
>> > @@ -842,6 +960,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>> >  	if (!file)
>> >  		return -EFAULT;
>> >
>> > +	filemap_invalidate_lock_shared(file_inode(file)->i_mapping);
>> > +
>> >  	folio = __kvm_gmem_get_pfn(file, slot, index, pfn, &is_prepared, max_order);
>> >  	if (IS_ERR(folio)) {
>> >  		r = PTR_ERR(folio);
>> > @@ -857,8 +977,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>> >  		*page = folio_file_page(folio, index);
>> >  	else
>> >  		folio_put(folio);
>> > -
>> >  out:
>> > +	filemap_invalidate_unlock_shared(file_inode(file)->i_mapping);
>> >  	fput(file);
>> >  	return r;
>> >  }
>> > --
>> > 2.49.0.1045.g170613ef41-goog
>> >
>>
>
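For reference, with the flag validation quoted above,
GUEST_MEMFD_FLAG_INIT_PRIVATE is only accepted when
GUEST_MEMFD_FLAG_SUPPORT_SHARED is also set, so userspace usage would
look roughly like this (sketch; both flags are introduced by this RFC
series and are not in upstream headers):

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_shared_gmem(int vm_fd, __u64 size)
{
	struct kvm_create_guest_memfd args = {
		.size = size,
		/* INIT_PRIVATE is valid only together with SUPPORT_SHARED. */
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED |
			 GUEST_MEMFD_FLAG_INIT_PRIVATE,
	};

	return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
}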