Date: Mon, 19 Apr 2021 20:09:13 +0000
From: Sean Christopherson
To: "Kirill A. Shutemov"
Cc: "Kirill A. Shutemov", Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Jim Mattson, David Rientjes, "Edgecombe, Rick P", "Kleen, Andi",
	"Yamahata, Isaku", Erdem Aktas, Steve Rutherford, Peter Gonda,
	David Hildenbrand, x86@kernel.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFCv2 13/13] KVM: unmap guest memory using poisoned pages
In-Reply-To: <20210419185354.v3rgandtrel7bzjj@box>
References: <20210416154106.23721-1-kirill.shutemov@linux.intel.com>
	<20210416154106.23721-14-kirill.shutemov@linux.intel.com>
	<20210419142602.khjbzktk5tk5l6lk@box.shutemov.name>
	<20210419164027.dqiptkebhdt5cfmy@box.shutemov.name>
	<20210419185354.v3rgandtrel7bzjj@box>

On Mon, Apr 19, 2021, Kirill A. Shutemov wrote:
> On Mon, Apr 19, 2021 at 06:09:29PM +0000, Sean Christopherson wrote:
> > On Mon, Apr 19, 2021, Kirill A. Shutemov wrote:
> > > On Mon, Apr 19, 2021 at 04:01:46PM +0000, Sean Christopherson wrote:
> > > > But fundamentally the private pages are, well, private.  They can't be shared
> > > > across processes, so I think we could (should?) require the VMA to always be
> > > > MAP_PRIVATE.  Does that buy us enough to rely on the VMA alone?  I.e. is that
> > > > enough to prevent userspace and unaware kernel code from acquiring a reference
> > > > to the underlying page?
> > >
> > > Shared pages should be fine too (you folks wanted tmpfs support).
> >
> > Is that a conflict though?  If the private->shared conversion request is kicked
> > out to userspace, then userspace can re-mmap() the files as MAP_SHARED, no?
> >
> > Allowing MAP_SHARED for guest private memory feels wrong.  The data can't be
> > shared, and dirty data can't be written back to the file.
>
> It can be remapped, but faulting in the page would produce a hwpoison entry.

It sounds like you're thinking the whole tmpfs file is poisoned.  My thought is
that userspace would need to do something like the following for guest private
memory:

  mmap(NULL, guest_size, PROT_READ|PROT_WRITE, MAP_PRIVATE | MAP_GUEST_ONLY, fd, 0);

The MAP_GUEST_ONLY flag would be used by the kernel to ensure the resulting VMA
can only point at private/poisoned memory, e.g. on fault, the associated PFN
would be tagged with PG_hwpoison or whatever.  @fd in this case could point at
tmpfs, but I don't think it's a hard requirement.
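Fleshed out a bit, purely as a sketch (MAP_GUEST_ONLY is hypothetical, the flag
value below is a made-up placeholder, and current kernels obviously won't
enforce anything for it):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>

  /* Hypothetical flag from this thread; the value is only a placeholder. */
  #ifndef MAP_GUEST_ONLY
  #define MAP_GUEST_ONLY 0x8000000
  #endif

  /*
   * Map guest private memory.  The idea is that the kernel would refuse to let
   * this VMA point at anything but private/poisoned pages, so only the owning
   * KVM instance can consume the underlying PFNs.
   */
  static void *map_guest_private(int fd, size_t guest_size)
  {
          void *mem = mmap(NULL, guest_size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_GUEST_ONLY, fd, 0);

          if (mem == MAP_FAILED)
                  perror("mmap(MAP_GUEST_ONLY)");
          return mem;
  }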
On conversion to shared, userspace could then do:

  munmap(<addr>, <size>);
  mmap(<addr>, <size>, PROT_READ|PROT_WRITE, MAP_SHARED | MAP_FIXED_NOREPLACE, fd, <offset>);

or

  mmap(<addr>, <size>, PROT_READ|PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, <offset>);

or

  ioctl(kvm, KVM_SET_USER_MEMORY_REGION, <delete private memslot>);
  mmap(NULL, <size>, PROT_READ|PROT_WRITE, MAP_SHARED, fd, <offset>);
  ioctl(kvm, KVM_SET_USER_MEMORY_REGION, <create shared memslot>);

Combinations would also work, e.g. unmap the private range and move the
memslot.  The private and shared memory regions could also be backed
differently, e.g. tmpfs for shared memory, anonymous for private memory.
(A fleshed-out sketch of the first and third flows is at the bottom of this
mail.)

> I don't see other way to make Google's use-case with tmpfs-backed guest
> memory work.

The underlying use-case is to be able to access guest memory from more than one
process, e.g. so that communication with the guest isn't limited to the VMM
process associated with the KVM instance.  By definition, guest private memory
can't be accessed by the host; I don't see how anyone, Google included, can
have any real requirements about how guest private memory is backed on the
host.

> > > The poisoned pages must be useless outside of the process with the blessed
> > > struct kvm.  See kvm_pfn_map in the patch.
> >
> > The big requirement for kernel TDX support is that the pages are useless in the
> > host.  Regarding the guest, for TDX, the TDX Module guarantees that at most a
> > single KVM guest can have access to a page at any given time.  I believe the RMP
> > provides the same guarantees for SEV-SNP.
> >
> > SEV/SEV-ES could still end up with corruption if multiple guests map the same
> > private page, but that's obviously not the end of the world since it's the status
> > quo today.  Living with that shortcoming might be a worthy tradeoff if punting
> > mutual exclusion between guests to firmware/hardware allows us to simplify the
> > kernel implementation.
>
> The critical question is whether we ever need to translate hva->pfn after
> the page is added to the guest private memory.  I believe we do, but I
> never checked.  And that's the reason we need to keep hwpoison entries
> around, which encode the pfn.

As proposed in the TDX RFC, KVM would "need" the hva->pfn translation if the
guest private EPT entry was zapped, e.g. by NUMA balancing (which will fail on
the backend).  But in that case KVM still has the original PFN; the "new"
translation becomes a sanity check to make sure that the zapped translation
wasn't moved unexpectedly.

Regardless, I don't see what that has to do with kvm_pfn_map.  At some point,
gup() has to fault in the page or look at the host PTE value.  For the latter,
at least on x86, we can throw info into the PTE itself to tag it as guest-only.
No matter what implementation we settle on, I think we've failed if we end up
in a situation where the primary MMU has pages it doesn't know are guest-only.

> If we don't, it would simplify the solution: kvm_pfn_map is not needed.
> A single bit per page would be enough.
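For completeness, here's a rough userspace sketch of the munmap+remap flow and
the memslot dance for the private->shared conversion.  The
kvm_userspace_memory_region usage is the existing KVM ABI; the addresses, slot
numbers, and helper names are purely illustrative:

  #define _GNU_SOURCE
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/kvm.h>

  /* Option 1: punch out the private VMA and remap the range as shared. */
  static void *convert_range_mmap(void *addr, size_t size, int fd, off_t offset)
  {
          munmap(addr, size);
          return mmap(addr, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_FIXED_NOREPLACE, fd, offset);
  }

  /* Option 3: delete the private memslot and recreate it at a new shared HVA. */
  static int convert_range_memslot(int vm_fd, __u32 slot, __u64 gpa, __u64 size,
                                   __u64 shared_hva)
  {
          struct kvm_userspace_memory_region region = {
                  .slot = slot,
                  .guest_phys_addr = gpa,
                  .memory_size = 0,       /* size == 0 deletes the memslot */
          };

          if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region))
                  return -1;

          region.memory_size = size;
          region.userspace_addr = shared_hva;
          return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }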