Date: Wed, 30 Nov 2022 17:39:31 +0800
From: Chao Peng
To: Michael Roth
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, linux-doc@vger.kernel.org,
	qemu-devel@nongnu.org, Paolo Bonzini, Jonathan Corbet,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
	"J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
	Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
	Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov", luto@kernel.org,
	jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
	david@redhat.com, aarcange@redhat.com, ddutile@redhat.com,
	dhildenb@redhat.com, Quentin Perret, tabba@google.com,
	mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
Message-ID: <20221130093931.GA945726@chaop.bj.intel.com>
Reply-To: Chao Peng
References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>
	<20221025151344.3784230-2-chao.p.peng@linux.intel.com>
	<20221129003725.l34qhx6n44mq2gtl@amd.com>
	<20221129140615.GC902164@chaop.bj.intel.com>
	<20221129190658.jefuep7nglp25ugt@amd.com>
	<20221129191815.atuv6arhodjbnvb2@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20221129191815.atuv6arhodjbnvb2@amd.com>
On Tue, Nov 29, 2022 at 01:18:15PM -0600, Michael Roth wrote:
> On Tue, Nov 29, 2022 at 01:06:58PM -0600, Michael Roth wrote:
> > On Tue, Nov 29, 2022 at 10:06:15PM +0800, Chao Peng wrote:
> > > On Mon, Nov 28, 2022 at 06:37:25PM -0600, Michael Roth wrote:
> > > > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > > ...
> > > > > +static long restrictedmem_fallocate(struct file *file, int mode,
> > > > > +				    loff_t offset, loff_t len)
> > > > > +{
> > > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > > +	struct file *memfd = data->memfd;
> > > > > +	int ret;
> > > > > +
> > > > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > > > +			return -EINVAL;
> > > > > +	}
> > > > > +
> > > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, true);
> > > >
> > > > The KVM restrictedmem ops seem to expect pgoff_t, but here we pass
> > > > loff_t. For SNP we've made this change as part of the following patch
> > > > and it seems to produce the expected behavior:
> > >
> > > That's correct. Thanks.
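To spell out the pgoff_t/loff_t point: the notifier callbacks want page offsets while fallocate() deals in byte offsets, so the conversion is just a shift by PAGE_SHIFT, with the exclusive end rounded up so a partial page is still invalidated. A minimal userspace sketch with stand-in types (byte_range_to_pgoff is a hypothetical helper, not something in the kernel):

```c
#include <stdint.h>

#define PAGE_SHIFT 12  /* assume 4 KiB pages for this sketch */

typedef int64_t  loff_t_s;   /* stand-in for the kernel's loff_t */
typedef uint64_t pgoff_t_s;  /* stand-in for the kernel's pgoff_t */

/* Convert a byte range (offset, len) into the half-open page-offset
 * range [*start, *end) that notifier-style callbacks would expect.
 * The end is rounded up so a partially covered page is included. */
static void byte_range_to_pgoff(loff_t_s offset, loff_t_s len,
				pgoff_t_s *start, pgoff_t_s *end)
{
	*start = (pgoff_t_s)(offset >> PAGE_SHIFT);
	*end   = (pgoff_t_s)((offset + len + (1 << PAGE_SHIFT) - 1)
			     >> PAGE_SHIFT);
}
```

With 4 KiB pages, a byte range of 0x1000..0x3000 maps to pages [1, 3), and an unaligned length spills into one extra page.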
> > > >
> > > > https://github.com/mdroth/linux/commit/d669c7d3003ff7a7a47e73e8c3b4eeadbd2c4eb6
> > > >
> > > > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, false);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +int restrictedmem_get_page(struct file *file, pgoff_t offset,
> > > > > +			   struct page **pagep, int *order)
> > > > > +{
> > > > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > > > +	struct file *memfd = data->memfd;
> > > > > +	struct page *page;
> > > > > +	int ret;
> > > > > +
> > > > > +	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
> > > >
> > > > This will result in KVM allocating pages that userspace hasn't necessarily
> > > > fallocate()'d. In the case of SNP we need to get the PFN so we can clean
> > > > up the RMP entries when restrictedmem invalidations are issued for a GFN
> > > > range.
> > >
> > > Yes, fallocate() is unnecessary unless someone wants to reserve some
> > > space (e.g. for determinism or performance purposes); this matches its
> > > semantics perfectly at:
> > > https://www.man7.org/linux/man-pages/man2/fallocate.2.html
> > > >
> > > > If the guest supports lazy-acceptance however, these pages may not have
> > > > been faulted in yet, and if the VMM defers actually fallocate()'ing space
> > > > until the guest actually tries to issue a shared->private for that GFN
> > > > (to support lazy-pinning), then there may never be a need to allocate
> > > > pages for these backends.
> > > >
> > > > However, the restrictedmem invalidations are for GFN ranges so there's
> > > > no way to know in advance whether it's been allocated yet or not. The
> > > > xarray is one option but currently it defaults to 'private' so that
> > > > doesn't help us here. It might if we introduced an 'uninitialized' state
> > > > or something along that line instead of just the binary
> > > > 'shared'/'private' though...
> > >
> > > How about if we change the default to 'shared' as we discussed at
> > > https://lore.kernel.org/all/Y35gI0L8GMt9+OkK@google.com/?
> >
> > Need to look at this a bit more, but I think that could work as well.
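A toy model of the tracking idea under discussion, with 'shared' as the default per-GFN state. The kernel would use an xarray; here a flat array and the helper names are purely illustrative. The point is that with a shared default, an invalidation over a GFN range can skip GFNs the guest never converted, which is what the lazy-acceptance case needs:

```c
#include <stdbool.h>

/* Illustrative per-GFN attribute tracking; not the kernel xarray API.
 * Zero-initialization makes 'shared' the default state. */
enum gfn_state { GFN_SHARED = 0, GFN_PRIVATE = 1 };

#define NR_GFNS 16
static enum gfn_state gfn_state[NR_GFNS]; /* all GFN_SHARED initially */

/* Guest-requested shared->private conversion for [start, end). */
static void convert_to_private(unsigned long start, unsigned long end)
{
	for (unsigned long g = start; g < end; g++)
		gfn_state[g] = GFN_PRIVATE;
}

/* Only GFNs that were actually converted have backing pages / RMP
 * entries, so only those need work during an invalidation. */
static bool needs_rmp_cleanup(unsigned long gfn)
{
	return gfn_state[gfn] == GFN_PRIVATE;
}
```

With a 'private' default, by contrast, every GFN in the range would look like it might need cleanup, which is exactly the problem Michael describes.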
> >
> > > > But for now we added a restrictedmem_get_page_noalloc() that uses
> > > > SGP_NONE instead of SGP_WRITE to avoid accidentally allocating a bunch
> > > > of memory as part of guest shutdown, and a
> > > > kvm_restrictedmem_get_pfn_noalloc() variant to go along with that. But
> > > > maybe a boolean param is better? Or maybe SGP_NOALLOC is the better
> > > > default, and we just propagate an error to userspace if they didn't
> > > > fallocate() in advance?
> > >
> > > This (making fallocate() a hard requirement) not only complicates the
> > > userspace but also forces the lazy-faulting to go through a long path of
> > > exiting to userspace. Unless we have no other options I would not go
> > > this way.
> >
> > Unless I'm missing something, it's already the case that userspace is
> > responsible for handling all the shared->private transitions in response
> > to KVM_EXIT_MEMORY_FAULT or (in our case) KVM_EXIT_VMGEXIT. So it only
> > places the additional requirement on the VMM that if they *don't*
> > preallocate, then they'll need to issue the fallocate() prior to issuing
> > the KVM_MEM_ENCRYPT_REG_REGION ioctl in response to these events.

Preallocating and memory conversion between shared<->private are two
different things. No doubt fallocate() and conversion can be called
together in response to KVM_EXIT_MEMORY_FAULT, but they don't have to be
paired, and the fallocate() does not have to operate on the same memory
range as the memory conversion does.

> > QEMU for example already has a separate 'prealloc' option for cases
> > where they want to prefault all the guest memory, so it makes sense to
> > continue making that an optional thing with regard to UPM.

Making 'prealloc' work for UPM in QEMU does sound reasonable. In any
case, it's just an option, so it does not change the assumption here.

> Although I guess what you're suggesting doesn't stop userspace from
> deciding whether they want to prefault or not. I know the Google folks
> had some concerns over unexpected allocations causing 2x memory usage
> though, so giving userspace full control of what is/isn't allocated in
> the restrictedmem backend seems to make it easier to guard against this,
> but I think checking the xarray and defaulting to 'shared' would work
> for us if that's the direction we end up going.

Yeah, that looks very likely to be the direction that satisfies everyone
here.

Chao
>
> -Mike
>
> > -Mike
> >
> > > Chao
> > >
> > > > -Mike
> > > >
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	*pagep = page;
> > > > > +	if (order)
> > > > > +		*order = thp_order(compound_head(page));
> > > > > +
> > > > > +	SetPageUptodate(page);
> > > > > +	unlock_page(page);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +EXPORT_SYMBOL_GPL(restrictedmem_get_page);
> > > > > --
> > > > > 2.25.1
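As a footnote on the VMM-side flow discussed in the thread, here is a hedged userspace sketch: preallocate backing space with fallocate() ahead of a shared->private conversion, and punch a page-aligned hole (the kernel-side check quoted in the patch rejects unaligned offset/len) on conversion back to shared. A plain memfd stands in for a restricted memfd, and the KVM_MEM_ENCRYPT_REG_REGION ioctl step is omitted:

```c
#define _GNU_SOURCE
#include <fcntl.h>     /* fallocate, FALLOC_FL_* */
#include <sys/mman.h>  /* memfd_create */
#include <sys/stat.h>
#include <unistd.h>

/* Returns 0 on success, -1 on any failure.
 * The memfd here is an ordinary one, standing in for the restricted
 * memfd a real VMM would get from memfd_restricted(). */
static int prealloc_then_punch(void)
{
	long page = sysconf(_SC_PAGESIZE);
	struct stat st;

	int fd = memfd_create("guest-mem", 0);
	if (fd < 0)
		return -1;

	/* Preallocate two pages before registering the range as private. */
	if (fallocate(fd, 0, 0, 2 * page))
		goto err;
	if (fstat(fd, &st) || st.st_size != 2 * page)
		goto err;

	/* Private->shared: free one page of backing store. Both offset and
	 * len are page-aligned; KEEP_SIZE leaves the file size untouched. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      0, page))
		goto err;
	if (fstat(fd, &st) || st.st_size != 2 * page)
		goto err;

	close(fd);
	return 0;
err:
	close(fd);
	return -1;
}
```

The deallocation half mirrors restrictedmem_fallocate() above: a punch-hole with unaligned offset or len would be refused with -EINVAL before any invalidation is issued.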