Date: Tue, 17 Jan 2023 19:35:58 +0000
From: Sean Christopherson <seanjc@google.com>
To: Chao Peng <chao.p.peng@linux.intel.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
	Paolo Bonzini, Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Arnd Bergmann,
	Naoya Horiguchi, Miaohe Lin, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
	Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
	Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve,
	Yu Zhang, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com,
	dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com,
	aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
	Quentin Perret, tabba@google.com, Michael Roth, mhocko@suse.com,
	wei.w.wang@intel.com
Subject: Re: [PATCH v10 9/9] KVM: Enable and expose KVM_MEM_PRIVATE
Message-ID:
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
 <20221202061347.1070246-10-chao.p.peng@linux.intel.com>
 <20230117131251.GC273037@chaop.bj.intel.com>
In-Reply-To: <20230117131251.GC273037@chaop.bj.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Tue, Jan 17, 2023, Chao Peng wrote:
> On Sat, Jan 14, 2023 at 12:01:01AM +0000, Sean Christopherson wrote:
> > On Fri, Dec 02, 2022, Chao Peng wrote:
> > > @@ -10357,6 +10364,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> > >
> > >  		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
> > >  			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
> > > +
> > > +		if (kvm_check_request(KVM_REQ_MEMORY_MCE, vcpu)) {
> > > +			vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
> >
> > Synthesizing triple fault shutdown is not the right approach.  Even with TDX's
> > MCE "architecture" (heavy sarcasm), it's possible that host userspace and the
> > guest have a paravirt interface for handling memory errors without killing the
> > host.
>
> Agreed, shutdown is not the correct choice.  I see you made the change below:
>
>   send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, PAGE_SHIFT, current)
>
> The MCE may happen in a thread other than the KVM thread, so sending a signal
> to 'current' may not be the expected behavior.

This is already true today, e.g. a #MC in memory that is mapped into the guest
can be triggered by a host access.  Hrm, but in this case we actually have a
KVM instance, and we know that the #MC is relevant to the KVM instance, so I
agree that signaling 'current' is kludgy.

> Also, how can userspace tell whether the MCE is on a shared page or a private
> page?  Do we care?

We care.  I was originally thinking we could require userspace to keep track of
things, but that's quite prescriptive and flawed, e.g. could race with
conversions.

One option would be KVM_EXIT_MEMORY_FAULT, and then wire up a generic (not x86
specific) KVM request to exit to userspace, e.g.
		/* KVM_EXIT_MEMORY_FAULT */
		struct {
#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 3)
#define KVM_MEMORY_EXIT_FLAG_HW_ERROR	(1ULL << 4)
			__u64 flags;
			__u64 gpa;
			__u64 size;
		} memory;

But I'm not sure that's the correct approach.  It kinda feels like we're
reinventing the wheel.

It seems like restrictedmem_get_page() _must_ be able to reject attempts to get
a poisoned page, i.e. restrictedmem_get_page() should yield
KVM_PFN_ERR_HWPOISON.  Assuming that's the case, then I believe KVM simply
needs to zap SPTEs in response to an error notification in order to force
vCPUs to fault on the poisoned page.

> > > +			return -EINVAL;
> > >  	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
> > >  		return -EINVAL;
> > >  	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
> > > @@ -2020,6 +2154,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
> > >  		if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages)
> > >  			return -EINVAL;
> > >  	} else { /* Modify an existing slot. */
> > > +		/* Private memslots are immutable, they can only be deleted. */
> >
> > I'm 99% certain I suggested this, but if we're going to make these memslots
> > immutable, then we should straight up disallow dirty logging, otherwise we'll
> > end up with a bizarre uAPI.
>
> But in my mind dirty logging will be needed in the very short time, when
> live migration gets supported?

Ya, but if/when live migration support is added, private memslots will no
longer be immutable as userspace will want to enable dirty logging only when a
VM is being migrated, i.e. something will need to change.

Given that it looks like we have clear line of sight to SEV+UPM guests, my
preference would be to allow toggling dirty logging from the get-go.  It
doesn't necessarily have to be in the first patch, e.g. KVM could initially
reject KVM_MEM_LOG_DIRTY_PAGES + KVM_MEM_PRIVATE and then add support
separately to make the series easier to review, test, and bisect.
	static int check_memory_region_flags(struct kvm *kvm,
					     const struct kvm_userspace_memory_region2 *mem)
	{
		u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;

		if (kvm_arch_has_private_mem(kvm) &&
		    !(mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
			valid_flags |= KVM_MEM_PRIVATE;

		...
	}

> > > +	if (mem->flags & KVM_MEM_PRIVATE)
> > > +		return -EINVAL;
> > >  	if ((mem->userspace_addr != old->userspace_addr) ||
> > >  	    (npages != old->npages) ||
> > >  	    ((mem->flags ^ old->flags) & KVM_MEM_READONLY))
> > > @@ -2048,10 +2185,28 @@ int __kvm_set_memory_region(struct kvm *kvm,
> > >  	new->npages = npages;
> > >  	new->flags = mem->flags;
> > >  	new->userspace_addr = mem->userspace_addr;
> > > +	if (mem->flags & KVM_MEM_PRIVATE) {
> > > +		new->restricted_file = fget(mem->restricted_fd);
> > > +		if (!new->restricted_file ||
> > > +		    !file_is_restrictedmem(new->restricted_file)) {
> > > +			r = -EINVAL;
> > > +			goto out;
> > > +		}
> > > +		new->restricted_offset = mem->restricted_offset;
>
> I see you changed slot->restricted_offset's type from loff_t to gfn_t and
> used pgoff_t when doing the restrictedmem_bind/unbind().  Using a page index
> is reasonable for KVM internally and sounds simpler than loff_t.  But we also
> need to initialize it to a page index here, as well as in the two other cases
> changed below.  This is needed when restricted_offset != 0.

Oof.  I'm pretty sure I completely missed that loff_t is used for byte
offsets, whereas pgoff_t is a frame index.

Given that the restrictedmem APIs take pgoff_t, I definitely think it makes
sense to use the index, but I'm very tempted to store pgoff_t instead of
gfn_t, and name the field "index" to help connect the dots to the rest of the
kernel, where "pgoff_t index" is quite common.

And looking at those bits again, we should wrap all of the restrictedmem
fields with CONFIG_KVM_PRIVATE_MEM.  It'll require minor tweaks to
__kvm_set_memory_region(), but I think it will yield cleaner code (and
internal APIs) overall.

And wrap the three fields in an anonymous struct?  E.g.
this is a little more verbose ("restrictedmem" instead of "restricted"), but
at first glance it doesn't seem to cause widespread line length issues.

	#ifdef CONFIG_KVM_PRIVATE_MEM
		struct {
			struct file *file;
			pgoff_t index;
			struct restrictedmem_notifier notifier;
		} restrictedmem;
	#endif

> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 547b92215002..49e375e78f30 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2364,8 +2364,7 @@ static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
>  					     gfn_t gfn, kvm_pfn_t *pfn,
>  					     int *order)
>  {
> -	pgoff_t index = gfn - slot->base_gfn +
> -			(slot->restricted_offset >> PAGE_SHIFT);
> +	pgoff_t index = gfn - slot->base_gfn + slot->restricted_offset;
>  	struct page *page;
>  	int ret;
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 01db35ddd5b3..7439bdcb0d04 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -935,7 +935,7 @@ static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot,
>  					 pgoff_t start, pgoff_t end,
>  					 gfn_t *gfn_start, gfn_t *gfn_end)
>  {
> -	unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT;
> +	unsigned long base_pgoff = slot->restricted_offset;
>
>  	if (start > base_pgoff)
>  		*gfn_start = slot->base_gfn + start - base_pgoff;
> @@ -2275,7 +2275,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  			r = -EINVAL;
>  			goto out;
>  		}
> -		new->restricted_offset = mem->restricted_offset;
> +		new->restricted_offset = mem->restricted_offset >> PAGE_SHIFT;
>  	}
>
>  	r = kvm_set_memslot(kvm, old, new, change);
>
> Chao

> > > +	}
> > > +
> > > +	new->kvm = kvm;
> >
> > Set this above, just so that the code flows better.