Date: Tue, 18 Oct 2022 00:33:48 +0000
From: Sean Christopherson <seanjc@google.com>
To: Fuad Tabba
Cc: Chao Peng, David Hildenbrand, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
    linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
    Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
    "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
    Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
    Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov", luto@kernel.org,
    jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
    aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
    Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song,
    wei.w.wang@intel.com, Will Deacon, Marc Zyngier
Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
 <20220915142913.2213336-2-chao.p.peng@linux.intel.com>
 <20220926142330.GC2658254@chaop.bj.intel.com>

On Fri, Sep 30, 2022, Fuad Tabba wrote:
> > > > > pKVM would also need a way to make an fd accessible again when
> > > > > shared back, which I think isn't possible with this patch.
> > > >
> > > > But does pKVM really want to mmap/munmap a new region at the
> > > > page-level, that can cause VMA fragmentation if the conversion is
> > > > frequent as I see. Even with a KVM ioctl for mapping as mentioned
> > > > below, I think there will be the same issue.
> > >
> > > pKVM doesn't really need to unmap the memory. What is really
> > > important is that the memory is not GUP'able.
> >
> > Well, not entirely unguppable, just unguppable without a magic FOLL_*
> > flag, otherwise KVM wouldn't be able to get the PFN to map into guest
> > memory.
> >
> > The problem is that gup() and "mapped" are tied together. So yes, pKVM
> > doesn't strictly need to unmap memory _in the untrusted host_, but
> > since mapped==guppable, the end result is the same.
> >
> > Emphasis above because pKVM still needs to unmap the memory
> > _somewhere_. IIUC, the current approach is to do that only in the
> > stage-2 page tables, i.e. only in the context of the hypervisor.
> > Which is also the source of the gup() problems; the untrusted kernel
> > is blissfully unaware that the memory is inaccessible.
> >
> > Any approach that moves some of that information into the untrusted
> > kernel so that the kernel can protect itself will incur fragmentation
> > in the VMAs.
> > Well, unless all of guest memory becomes unguppable, but that's
> > likely not a viable option.
>
> Actually, for pKVM, there is no need for the guest memory to be
> GUP'able at all if we use the new inaccessible_get_pfn().

Ya, I was referring to pKVM without UPM / inaccessible memory.

Jumping back to blocking gup(), what about using the same tricks as
secretmem to block gup()? E.g. compare vm_ops to block regular gup() and
a_ops to block fast gup() on struct page? With a Kconfig that's selected
by pKVM (which would also need its own Kconfig), e.g.
CONFIG_INACCESSIBLE_MAPPABLE_MEM, there would be zero performance
overhead for non-pKVM kernels, i.e. hooking gup() shouldn't be
controversial.

I suspect the fast gup() path could even be optimized to avoid the
page_mapping() lookup by adding a PG_inaccessible flag that's defined iff
the TBD Kconfig is selected. I'm guessing pKVM isn't expected to be
deployed on massive NUMA systems anytime soon, so there should be plenty
of page flags to go around.

Blocking gup() instead of trying to play refcount games when converting
back to private would eliminate the need to put heavy restrictions on
mapping, as the goal of those restrictions was purely to simplify the KVM
implementation, e.g. the "one mapping per memslot" thing would go away
entirely.

> This of course goes back to what I'd mentioned before in v7; it seems
> that representing the memslot memory as a file descriptor should be
> orthogonal to whether the memory is shared or private, rather than a
> private_fd for private memory and the userspace_addr for shared memory.

I also explored the idea of backing any guest memory with an fd, but
came to the conclusion that private memory needs a separate handle[1],
at least on x86. For SNP and TDX, even though the GPA is the same
(ignoring the fact that SNP and TDX steal GPA bits to differentiate
private vs. shared), the two types need to be treated as separate
mappings[2].
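Circling back to the secretmem-style gup() blocking above, a completely
untested userspace model of the two checks (all names here, e.g.
inaccessible_vm_ops and PG_inaccessible, are hypothetical, not actual
kernel code): the slow path rejects VMAs by comparing vm_ops, and the
fast path tests a page flag so it never has to do the page_mapping()
lookup.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the kernel structures; fields trimmed to the essentials. */
struct vm_operations_struct { int unused; };

/* Hypothetical vm_ops installed by the inaccessible memfd. */
static const struct vm_operations_struct inaccessible_vm_ops;

struct vm_area_struct {
	const struct vm_operations_struct *vm_ops;
};

/* Hypothetical page flag, defined iff the TBD Kconfig is selected. */
#define PG_inaccessible (1UL << 0)

struct page {
	unsigned long flags;
};

/* Slow gup(): reject VMAs backed by the inaccessible memfd. */
static bool gup_allowed(const struct vm_area_struct *vma)
{
	return vma->vm_ops != &inaccessible_vm_ops;
}

/* Fast gup(): a single flag test, no page->mapping dereference needed. */
static bool gup_fast_allowed(const struct page *page)
{
	return !(page->flags & PG_inaccessible);
}
```

The point being that both checks are cheap and, behind a Kconfig, compile
away entirely on non-pKVM kernels.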
Post-boot, converting is lossy in both directions, so even conceptually
they are two distinct pages that just happen to share (some) GPA bits.

To allow conversions, i.e. changing which mapping to use, without memslot
updates, KVM needs to let userspace provide both mappings in a single
memslot. So while fd-based memory is an orthogonal concept, e.g. we
could add fd-based shared memory, KVM would still need a dedicated
private handle.

For pKVM, the fd doesn't strictly need to be mutually exclusive with the
existing userspace_addr, but since the private_fd is going to be added
for x86, I think it makes sense to use that instead of adding generic
fd-based memory for pKVM's use case (which is arguably still "private"
memory but with special semantics).

[1] https://lore.kernel.org/all/YulTH7bL4MwT5v5K@google.com
[2] https://lore.kernel.org/all/869622df-5bf6-0fbb-cac4-34c6ae7df119@kernel.org

> The host can then map or unmap the shared/private memory using the fd,
> which allows it more freedom in even choosing to unmap shared memory
> when not needed, for example.
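To make the "both mappings in a single memslot" idea above concrete, here
is a toy userspace model. Only userspace_addr and private_fd come from
the discussion; the per-GFN private bitmap and helper names are made up
purely for illustration (the real proposal tracks shared/private state
differently). The key property is that converting a GFN flips which
handle is consulted without touching the memslot itself.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy memslot carrying both a shared and a private mapping. */
struct memslot {
	uint64_t base_gfn;
	uint64_t userspace_addr;  /* shared mapping (host VA) */
	int      private_fd;      /* private mapping (fd-based) */
	uint8_t  private_bitmap;  /* illustrative: one bit per page */
};

/* Convert one GFN between shared and private; the memslot is unchanged. */
static void convert_gfn(struct memslot *slot, uint64_t gfn, bool to_private)
{
	uint8_t bit = (uint8_t)(1u << (gfn - slot->base_gfn));

	if (to_private)
		slot->private_bitmap |= bit;
	else
		slot->private_bitmap &= (uint8_t)~bit;
}

/* Returns true if faults on this GFN should be served from private_fd. */
static bool use_private_fd(const struct memslot *slot, uint64_t gfn)
{
	return slot->private_bitmap & (1u << (gfn - slot->base_gfn));
}
```

I.e. a conversion is just per-GFN state, not a memslot update, which is
exactly why the two handles need to coexist in one slot.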