From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Sean Christopherson
Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Borislav Petkov,
 Andy Lutomirski, Andrew Morton, Andi Kleen, David Rientjes,
 Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
 Ingo Molnar, Varad Gautam, Dario Faggioli, x86@kernel.org,
 linux-mm@kvack.org, linux-coco@lists.linux.dev, "Kirill A. Shutemov",
 Kuppuswamy Sathyanarayanan, Dave Hansen, Yu Zhang
Subject: Re: [RFC] KVM: mm: fd-based approach for supporting KVM guest private memory
Date: Wed, 1 Sep 2021 10:09:07 +0200
References: <20210824005248.200037-1-seanjc@google.com>
 <307d385a-a263-276f-28eb-4bc8dd287e32@redhat.com>
 <61ea53ce-2ba7-70cc-950d-ca128bcb29c5@redhat.com>
>> Do we have to protect from that? How would KVM protect from user space
>> replacing private pages by shared pages in any of the models we discuss?
>
> The overarching rule is that KVM needs to guarantee a given pfn is never mapped[*]
> as both private and shared, where "shared" also incorporates any mapping from the
> host. Essentially it boils down to the kernel ensuring that a pfn is unmapped
> before it's converted to/from private, and KVM ensuring that it honors any
> unmap notifications from the kernel, e.g. via mmu_notifier or via a direct callback
> as proposed in this RFC.

Okay, so the fallocate(PUNCH_HOLE) from user space could trigger the
respective unmapping and freeing of backing storage.

>
> As it pertains to PUNCH_HOLE, the responsibilities are no different than when the
> backing-store is destroyed; the backing-store needs to notify downstream MMUs
> (a.k.a. KVM) to unmap the pfn(s) before freeing the associated memory.

Right.

>
> [*] Whether or not the kernel's direct mapping needs to be removed is debatable,
>     but my argument is that that behavior is not visible to userspace and thus
>     out of scope for this discussion, e.g. zapping/restoring the direct map can
>     be added/removed without impacting the userspace ABI.

Right. Removing it also shouldn't be required, IMHO. There are other ways
to teach the kernel not to read/write some online pages (filter
/proc/kcore, disable hibernation, strict access checks for /dev/mem ...).

>
>>>> Define "ordinary" user memory slots as overlay on top of "encrypted" memory
>>>> slots. Inside KVM, bail out if you encounter such a VMA inside a normal
>>>> user memory slot. When creating an "encrypted" user memory slot, require that
>>>> the whole VMA is covered at creation time. You know the VMA can't change
>>>> later.
>>>
>>> This can work for the basic use cases, but even then I'd strongly prefer not to
>>> tie memslot correctness to the VMAs. KVM doesn't truly care what lies behind
>>> the virtual address of a memslot, and when it does care, it tends to do poorly,
>>> e.g. see the whole PFNMAP snafu. KVM cares about the pfn<->gfn mappings, and
>>> that's reflected in the infrastructure. E.g. KVM relies on the mmu_notifiers
>>> to handle mprotect()/munmap()/etc...
>>
>> Right, and for the existing use cases this worked. But encrypted memory
>> breaks many assumptions we once made ...
>>
>> I have somewhat mixed feelings about pages that are mapped into $WHATEVER
>> page tables but not actually mapped into user space page tables. There is no
>> way to reach these via the rmap.
>>
>> We have something like that already via vfio. And that is fundamentally
>> broken when it comes to mmu notifiers, page pinning, page migration, ...
>
> I'm not super familiar with VFIO internals, but the idea with the fd-based
> approach is that the backing-store would be in direct communication with KVM and
> would handle those operations through that direct channel.

Right. The problem I am seeing is that, e.g., try_to_unmap() might not be
able to actually fully unmap a page, because some non-synchronized KVM
MMU still maps the page. It would be great to evaluate how the fd
callbacks would fit into the whole picture, including the current rmap.

I guess I'm missing the bigger picture of how it all fits together on the
!KVM side.

>
>>> As is, I don't think KVM would get any kind of notification if userspace unmaps
>>> the VMA for a private memslot that does not have any entries in the host page
>>> tables. I'm sure it's a solvable problem, e.g. by ensuring at least one page
>>> is touched by the backing store, but I don't think the end result would be any
>>> prettier than a dedicated API for KVM to consume.
>>>
>>> Relying on VMAs, and thus the mmu_notifiers, also doesn't provide line of sight
>>> to page migration or swap. For those types of operations, KVM currently just
>>> reacts to invalidation notifications by zapping guest PTEs, and then gets the
>>> new pfn when the guest re-faults on the page. That sequence doesn't work for
>>> TDX or SEV-SNP because the trusted agent needs to do the memcpy() of the page
>>> contents, i.e. the host needs to call into KVM for the actual migration.
>>
>> Right, but I still think this is a kernel internal. You can do such a
>> handshake later in the kernel, IMHO.
>
> It is kernel internal, but AFAICT it will be ugly because KVM "needs" to do the
> migration, and that would invert the mmu_notifier API, e.g. instead of "telling"
> secondary MMUs to invalidate/change a mapping, the mm would be "asking"
> secondary MMUs "can you move this?". More below.

In my thinking, the rmap via mmu notifiers would do the unmapping
just as we know it (from primary MMU -> secondary MMU). Once
try_to_unmap() succeeded, the fd provider could kick off the migration
via whatever callback.

>
>> But I also already thought: is it really KVM that is to perform the
>> migration, or is it the fd provider that performs the migration? Who says
>> memfd_encrypted() doesn't default to a TDX "backend" on Intel CPUs that just
>> knows how to migrate such a page?
>>
>> I'd love to have some details on how that's supposed to work, and which
>> information we'd need to migrate/swap/... in addition to the EPFN and a new
>> SPFN.
>
> KVM "needs" to do the migration. On TDX, the migration will be a SEAMCALL,
> a post-VMXON instruction that transfers control to the TDX-Module, that at
> minimum needs a per-VM identifier, the gfn, and the page table level. The call

The per-VM identifier and the GFN would be easy to grab. Page table
level, not so sure -- do you mean the general page table depth?
Or if it's mapped as 4k vs. 2M ...? The latter could be answered by the fd
provider already, I assume.

Does the page still have to be mapped into the secondary MMU when
performing the migration via TDX? I assume not, which would simplify
things a lot.

> into the TDX-Module would also need to take a KVM lock (probably KVM's mmu_lock)
> to satisfy TDX's concurrency requirement, e.g. to avoid "spurious" errors due to
> the backing-store attempting to migrate memory that KVM is unmapping due to a
> memslot change.

Something like that might be handled by fixing private memory slots in
place, similar to my draft, right?

>
> The per-VM identifier may not apply to SEV-SNP, but I believe everything else
> holds true.

Thanks!

-- 
Thanks,

David / dhildenb