Date: Fri, 21 Oct 2022 21:54:34 +0800
From: Chao Peng <chao.p.peng@linux.intel.com>
To: Vishal Annapurve
Cc: "Kirill A . Shutemov", "Gupta, Pankaj", Vlastimil Babka, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
	Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H . Peter Anvin", Hugh Dickins,
	Jeff Layton, "J . Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
	Steven Price, "Maciej S . Szmigiero", Yu Zhang, luto@kernel.org,
	jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
	david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
	Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
Message-ID: <20221021135434.GB3607894@chaop.bj.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
	<20220915142913.2213336-2-chao.p.peng@linux.intel.com>
	<20221017161955.t4gditaztbwijgcn@box.shutemov.name>
	<20221017215640.hobzcz47es7dq2bi@box.shutemov.name>
	<20221019153225.njvg45glehlnjgc7@box.shutemov.name>

On Thu, Oct 20, 2022 at 04:20:58PM +0530, Vishal Annapurve wrote:
> On Wed, Oct 19, 2022 at 9:02 PM Kirill A . Shutemov
> wrote:
> >
> > On Tue, Oct 18, 2022 at 07:12:10PM +0530, Vishal Annapurve wrote:
> > > On Tue, Oct 18, 2022 at 3:27 AM Kirill A . Shutemov
> > > wrote:
> > > >
> > > > On Mon, Oct 17, 2022 at 06:39:06PM +0200, Gupta, Pankaj wrote:
> > > > > On 10/17/2022 6:19 PM, Kirill A . Shutemov wrote:
> > > > > > On Mon, Oct 17, 2022 at 03:00:21PM +0200, Vlastimil Babka wrote:
> > > > > > > On 9/15/22 16:29, Chao Peng wrote:
> > > > > > > > From: "Kirill A. Shutemov"
> > > > > > > >
> > > > > > > > KVM can use memfd-provided memory for guest memory. For normal userspace
> > > > > > > > accessible memory, KVM userspace (e.g. QEMU) mmaps the memfd into its
> > > > > > > > virtual address space and then tells KVM to use the virtual address to
> > > > > > > > setup the mapping in the secondary page table (e.g. EPT).
> > > > > > > >
> > > > > > > > With confidential computing technologies like Intel TDX, the
> > > > > > > > memfd-provided memory may be encrypted with special key for special
> > > > > > > > software domain (e.g. KVM guest) and is not expected to be directly
> > > > > > > > accessed by userspace. Precisely, userspace access to such encrypted
> > > > > > > > memory may lead to host crash so it should be prevented.
> > > > > > > >
> > > > > > > > This patch introduces userspace inaccessible memfd (created with
> > > > > > > > MFD_INACCESSIBLE). Its memory is inaccessible from userspace through
> > > > > > > > ordinary MMU access (e.g. read/write/mmap) but can be accessed via
> > > > > > > > in-kernel interface so KVM can directly interact with core-mm without
> > > > > > > > the need to map the memory into KVM userspace.
> > > > > > > >
> > > > > > > > It provides semantics required for KVM guest private(encrypted) memory
> > > > > > > > support that a file descriptor with this flag set is going to be used as
> > > > > > > > the source of guest memory in confidential computing environments such
> > > > > > > > as Intel TDX/AMD SEV.
> > > > > > > >
> > > > > > > > KVM userspace is still in charge of the lifecycle of the memfd. It
> > > > > > > > should pass the opened fd to KVM. KVM uses the kernel APIs newly added
> > > > > > > > in this patch to obtain the physical memory address and then populate
> > > > > > > > the secondary page table entries.
> > > > > > > >
> > > > > > > > The userspace inaccessible memfd can be fallocate-ed and hole-punched
> > > > > > > > from userspace. When hole-punching happens, KVM can get notified through
> > > > > > > > inaccessible_notifier it then gets chance to remove any mapped entries
> > > > > > > > of the range in the secondary page tables.
> > > > > > > >
> > > > > > > > The userspace inaccessible memfd itself is implemented as a shim layer
> > > > > > > > on top of real memory file systems like tmpfs/hugetlbfs but this patch
> > > > > > > > only implemented tmpfs. The allocated memory is currently marked as
> > > > > > > > unmovable and unevictable, this is required for current confidential
> > > > > > > > usage. But in future this might be changed.
> > > > > > > >
> > > > > > > > Signed-off-by: Kirill A. Shutemov
> > > > > > > > Signed-off-by: Chao Peng
> > > > > > > > ---
> > > > > > >
> > > > > > > ...
> > > > > > >
> > > > > > > > +static long inaccessible_fallocate(struct file *file, int mode,
> > > > > > > > +				   loff_t offset, loff_t len)
> > > > > > > > +{
> > > > > > > > +	struct inaccessible_data *data = file->f_mapping->private_data;
> > > > > > > > +	struct file *memfd = data->memfd;
> > > > > > > > +	int ret;
> > > > > > > > +
> > > > > > > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > > > > > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > > > > > > +			return -EINVAL;
> > > > > > > > +	}
> > > > > > > > +
> > > > > > > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > > > > > > +	inaccessible_notifier_invalidate(data, offset, offset + len);
> > > > > > >
> > > > > > > Wonder if invalidate should precede the actual hole punch, otherwise we open
> > > > > > > a window where the page tables point to memory no longer valid?
> > > > > >
> > > > > > Yes, you are right. Thanks for catching this.
> > > > > >
> > > > > I also noticed this. But then thought the memory would be anyways zeroed
> > > > > (hole punched) before this call?
> > > >
> > > > Hole punching can free pages, given that offset/len covers full page.
> > > >
> > > > --
> > > > Kiryl Shutsemau / Kirill A. Shutemov
> > >
> > > I think moving this notifier_invalidate before fallocate may not solve
> > > the problem completely. Is it possible that between invalidate and
> > > fallocate, KVM tries to handle the page fault for the guest VM from
> > > another vcpu and uses the pages to be freed to back gpa ranges? Should
> > > hole punching here also update mem_attr first to say that KVM should
> > > consider the corresponding gpa ranges to be no more backed by
> > > inaccessible memfd?
> >
> > We rely on external synchronization to prevent this. See code around
> > mmu_invalidate_retry_hva().
> >
> > --
> > Kiryl Shutsemau / Kirill A. Shutemov
>
> IIUC, mmu_invalidate_retry_hva/gfn ensures that page faults on gfn
> ranges that are being invalidated are retried till invalidation is
> complete. In this case, is it possible that KVM tries to serve the
> page fault after inaccessible_notifier_invalidate is complete but
> before fallocate could punch hole into the files?
> e.g.
> inaccessible_notifier_invalidate(...)
> ... (system event preempting this control flow, giving a window for
> the guest to retry accessing the gfn range which was invalidated)
> fallocate(.., PUNCH_HOLE..)

Looks like this is something that can happen. It sounds to me like the
solution just needs to follow the mmu_notifier's way of using an
invalidate_start/end pair:

  invalidate_start()  --> kvm->mmu_invalidate_in_progress++;
                          zap KVM page table entries;
  fallocate()
  invalidate_end()    --> kvm->mmu_invalidate_in_progress--;

Then, during the invalidate_start/end window, mmu_invalidate_retry_gfn
checks 'mmu_invalidate_in_progress' and prevents repopulating the same
page in the KVM page table:

  if (kvm->mmu_invalidate_in_progress)
          return 1; /* retry */

Thanks,
Chao
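
To make the intended ordering concrete, below is a minimal, compilable
sketch of the invalidate begin/end pairing described above. It is only an
illustration: the names kvm_like, mem_invalidate_begin/end and
mem_invalidate_retry are made up for this example and are not the actual
memfd/KVM symbols, and the real mmu_invalidate_retry_gfn() also consults a
sequence counter (mmu_invalidate_seq) that is omitted here.

/*
 * Hypothetical sketch only -- not the real memfd/KVM code.
 */
#include <stdbool.h>
#include <stdio.h>

struct kvm_like {
	int  mmu_invalidate_in_progress;   /* > 0 while a punch is in flight */
	long invalidate_start;             /* gfn range being invalidated    */
	long invalidate_end;
};

/* Called *before* any page is freed: raise the counter, record the range
 * and zap the secondary (EPT/NPT) page table entries covering it. */
static void mem_invalidate_begin(struct kvm_like *kvm, long start, long end)
{
	kvm->mmu_invalidate_in_progress++;
	kvm->invalidate_start = start;
	kvm->invalidate_end = end;
	/* zap secondary page table entries for [start, end) here */
}

/* Called only after fallocate(FALLOC_FL_PUNCH_HOLE) has freed the pages. */
static void mem_invalidate_end(struct kvm_like *kvm)
{
	kvm->mmu_invalidate_in_progress--;
}

/* Fault path: refuse to (re)map a gfn that overlaps an invalidation still
 * in progress, forcing the fault to be retried later. */
static bool mem_invalidate_retry(struct kvm_like *kvm, long gfn)
{
	return kvm->mmu_invalidate_in_progress &&
	       gfn >= kvm->invalidate_start && gfn < kvm->invalidate_end;
}

int main(void)
{
	struct kvm_like kvm = { 0 };

	mem_invalidate_begin(&kvm, 0x1000, 0x2000);
	/* A fault racing with the hole punch must be retried... */
	printf("retry while punching: %d\n", mem_invalidate_retry(&kvm, 0x1800));
	/* ...fallocate(memfd, FALLOC_FL_PUNCH_HOLE, ...) would run here... */
	mem_invalidate_end(&kvm);
	/* ...and can be served once the invalidation window is closed. */
	printf("retry after punch:    %d\n", mem_invalidate_retry(&kvm, 0x1800));
	return 0;
}

The essential property is that the counter is raised and the stage-2
entries are zapped before the hole punch frees any page, and lowered only
afterwards, so a concurrent fault on the affected gfn range is forced to
retry rather than remap a page that is about to be freed.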