Date: Fri, 17 Mar 2023 21:51:37 -0700
From: Isaku Yamahata <isaku.yamahata@gmail.com>
To: Michael Roth
Cc: kvm@vger.kernel.org, linux-coco@lists.linux.dev, linux-mm@kvack.org,
	linux-crypto@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, mingo@redhat.com, jroedel@suse.de, thomas.lendacky@amd.com,
	hpa@zytor.com, ardb@kernel.org, pbonzini@redhat.com, seanjc@google.com,
	vkuznets@redhat.com, jmattson@google.com, luto@kernel.org,
	dave.hansen@linux.intel.com, slp@redhat.com, pgonda@google.com,
	peterz@infradead.org, srinivas.pandruvada@linux.intel.com, rientjes@google.com,
	dovmurik@linux.ibm.com, tobin@ibm.com, bp@alien8.de, vbabka@suse.cz,
	kirill@shutemov.name, ak@linux.intel.com, tony.luck@intel.com,
	marcorr@google.com, sathyanarayanan.kuppuswamy@linux.intel.com,
	alpergun@google.com, dgilbert@redhat.com, jarkko@kernel.org,
	ashish.kalra@amd.com, nikunj.dadhania@amd.com, isaku.yamahata@gmail.com
Subject: Re: [PATCH RFC v8 01/56] KVM: x86: Add 'fault_is_private' x86 op
Message-ID: <20230318045137.GC408922@ls.amr.corp.intel.com>
References: <20230220183847.59159-1-michael.roth@amd.com>
	<20230220183847.59159-2-michael.roth@amd.com>
In-Reply-To: <20230220183847.59159-2-michael.roth@amd.com>
On Mon, Feb 20, 2023 at 12:37:52PM -0600, Michael Roth wrote:
> This callback is used by the KVM MMU to check whether a #NPF was for a
> private GPA or not.
>
> In some cases the full 64-bit error code for the #NPF will be needed to
> make this determination, so also update kvm_mmu_do_page_fault() to
> accept the full 64-bit value so it can be plumbed through to the
> callback.

We can split the 64-bit part into an independent patch.
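For illustration only (not part of either patch): the full 64-bit error code
matters because hardware such as SEV-SNP reports whether the access was to
encrypted (private) memory in error-code bits above bit 31, which a u32 would
silently drop.  A minimal sketch of such a check, where the mask name and bit
position are assumptions of mine rather than anything defined in this series:

#include <linux/bits.h>
#include <linux/types.h>

/* Assumed name/position of a hardware "encrypted access" bit. */
#define PFERR_GUEST_ENC_MASK	BIT_ULL(34)

static bool npf_err_has_private_bit(u64 error_code)
{
	/* lower_32_bits(error_code) would lose this bit entirely. */
	return !!(error_code & PFERR_GUEST_ENC_MASK);
}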
> Signed-off-by: Michael Roth
> ---
>  arch/x86/include/asm/kvm-x86-ops.h |  1 +
>  arch/x86/include/asm/kvm_host.h    |  1 +
>  arch/x86/kvm/mmu/mmu.c             |  3 +--
>  arch/x86/kvm/mmu/mmu_internal.h    | 37 +++++++++++++++++++++++++++---
>  4 files changed, 37 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 8dc345cc6318..72183da010b8 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -131,6 +131,7 @@ KVM_X86_OP(msr_filter_changed)
>  KVM_X86_OP(complete_emulated_msr)
>  KVM_X86_OP(vcpu_deliver_sipi_vector)
>  KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
> +KVM_X86_OP_OPTIONAL_RET0(fault_is_private);
>
>  #undef KVM_X86_OP
>  #undef KVM_X86_OP_OPTIONAL
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e552374f2357..f856d689dda0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1643,6 +1643,7 @@ struct kvm_x86_ops {
>
>  	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
>  			     int root_level);
> +	bool (*fault_is_private)(struct kvm *kvm, gpa_t gpa, u64 error_code, bool *private_fault);
>
>  	bool (*has_wbinvd_exit)(void);
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index eda615f3951c..fb3f34b7391c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5724,8 +5724,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
>  	}
>
>  	if (r == RET_PF_INVALID) {
> -		r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa,
> -					  lower_32_bits(error_code), false);
> +		r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false);
>  		if (KVM_BUG_ON(r == RET_PF_INVALID, vcpu->kvm))
>  			return -EIO;
>  	}
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index e642d431df4b..557a001210df 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -231,6 +231,37 @@ struct kvm_page_fault {
>
>  int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
>
> +static bool kvm_mmu_fault_is_private(struct kvm *kvm, gpa_t gpa, u64 err)
> +{
> +	struct kvm_memory_slot *slot;
> +	bool private_fault = false;
> +	gfn_t gfn = gpa_to_gfn(gpa);
> +
> +	slot = gfn_to_memslot(kvm, gfn);
> +	if (!slot) {
> +		pr_debug("%s: no slot, GFN: 0x%llx\n", __func__, gfn);
> +		goto out;
> +	}
> +
> +	if (!kvm_slot_can_be_private(slot)) {
> +		pr_debug("%s: slot is not private, GFN: 0x%llx\n", __func__, gfn);
> +		goto out;
> +	}
> +
> +	if (static_call(kvm_x86_fault_is_private)(kvm, gpa, err, &private_fault))
> +		goto out;
> +
> +	/*
> +	 * Handling below is for UPM self-tests and guests that treat userspace
> +	 * as the authority on whether a fault should be private or not.
> +	 */
> +	private_fault = kvm_mem_is_private(kvm, gpa >> PAGE_SHIFT);
> +
> +out:
> +	pr_debug("%s: GFN: 0x%llx, private: %d\n", __func__, gfn, private_fault);
> +	return private_fault;
> +}
> +
>  /*
>   * Return values of handle_mmio_page_fault(), mmu.page_fault(), fast_page_fault(),
>   * and of course kvm_mmu_do_page_fault().
> @@ -262,11 +293,11 @@ enum {
>  };
>
>  static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> -					u32 err, bool prefetch)
> +					u64 err, bool prefetch)
>  {
>  	struct kvm_page_fault fault = {
>  		.addr = cr2_or_gpa,
> -		.error_code = err,
> +		.error_code = lower_32_bits(err),
>  		.exec = err & PFERR_FETCH_MASK,
>  		.write = err & PFERR_WRITE_MASK,
>  		.present = err & PFERR_PRESENT_MASK,
> @@ -280,7 +311,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
>  		.req_level = PG_LEVEL_4K,
>  		.goal_level = PG_LEVEL_4K,
> -		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
> +		.is_private = kvm_mmu_fault_is_private(vcpu->kvm, cr2_or_gpa, err),

I don't think we need kvm_mmu_fault_is_private() here; it's too heavy for the
fault path.  We can make the check its own x86 op instead, i.e. the following:

From b0f914a1a4d154f076c0294831ce9ef0df7eb3d3 Mon Sep 17 00:00:00 2001
Message-Id:
In-Reply-To: <428a676face7a06a90e59dca1c32941c9b6ee001.1679114841.git.isaku.yamahata@intel.com>
References: <428a676face7a06a90e59dca1c32941c9b6ee001.1679114841.git.isaku.yamahata@intel.com>
From: Isaku Yamahata
Date: Fri, 17 Mar 2023 11:18:13 -0700
Subject: [PATCH 2/4] KVM: x86: Add 'fault_is_private' x86 op

This callback is used by the KVM MMU to check whether a KVM page fault
was for a private GPA or not.

Originally-by: Michael Roth
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  1 +
 arch/x86/kvm/mmu.h                 | 19 +++++++++++++++++++
 arch/x86/kvm/mmu/mmu_internal.h    |  2 +-
 arch/x86/kvm/x86.c                 |  8 ++++++++
 5 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index e1f57905c8fe..dc5f18ac0bd5 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -99,6 +99,7 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
 KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
 KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
+KVM_X86_OP(fault_is_private)
 KVM_X86_OP_OPTIONAL(link_private_spt)
 KVM_X86_OP_OPTIONAL(free_private_spt)
 KVM_X86_OP_OPTIONAL(split_private_spt)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 59196a80c3c8..0382d236fbf4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1730,6 +1730,7 @@ struct kvm_x86_ops {
 
 	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			     int root_level);
+	bool (*fault_is_private)(struct kvm *kvm, gpa_t gpa, u64 error_code);
 
 	int (*link_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				void *private_spt);
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 4aaef2132b97..1f21680b9b97 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -289,6 +289,25 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
 	return translate_nested_gpa(vcpu, gpa, access, exception);
 }
 
+static inline bool kvm_mmu_fault_is_private_default(struct kvm *kvm, gpa_t gpa, u64 err)
+{
+	struct kvm_memory_slot *slot;
+	gfn_t gfn = gpa_to_gfn(gpa);
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot)
+		return false;
+
+	if (!kvm_slot_can_be_private(slot))
+		return false;
+
+	/*
+	 * Handling below is for UPM self-tests and guests that treat userspace
+	 * as the authority on whether a fault should be private or not.
+	 */
+	return kvm_mem_is_private(kvm, gfn);
+}
+
 static inline gfn_t kvm_gfn_shared_mask(const struct kvm *kvm)
 {
 #ifdef CONFIG_KVM_MMU_PRIVATE
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index bb5709f1cb57..6b54b069d1ed 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -445,7 +445,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.max_level = vcpu->kvm->arch.tdp_max_page_level,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
-		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
+		.is_private = static_call(kvm_x86_fault_is_private)(vcpu->kvm, cr2_or_gpa, err),
 	};
 	int r;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd14368c6bc8..0311ab450330 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9419,6 +9419,14 @@ static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
 #undef __KVM_X86_OP
 
 	kvm_pmu_ops_update(ops->pmu_ops);
+
+	/*
+	 * TODO: Once all backends fill in this op, remove this and the
+	 * default function.
+	 */
+	if (!ops->runtime_ops->fault_is_private)
+		static_call_update(kvm_x86_fault_is_private,
+				   kvm_mmu_fault_is_private_default);
 }
 
 static int kvm_x86_check_processor_compatibility(void)
-- 
2.25.1

-- 
Isaku Yamahata
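P.S. For completeness, a rough sketch of how a vendor backend might eventually
fill in the op so the default wiring above can go away.  This is illustration
only, not part of either series, and the names are assumed:

/*
 * Hypothetical backend implementation.  Assumes the hardware reports
 * private (encrypted) accesses via a bit in the upper half of the
 * fault error code; PFERR_GUEST_ENC_MASK is an assumed name/position.
 */
static bool svm_fault_is_private(struct kvm *kvm, gpa_t gpa, u64 error_code)
{
	return !!(error_code & PFERR_GUEST_ENC_MASK);
}

/*
 * ...and hooked up in that backend's struct kvm_x86_ops initializer:
 *	.fault_is_private = svm_fault_is_private,
 */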