From: Fuad Tabba <tabba@google.com>
Date: Mon, 6 Nov 2023 10:54:30 +0000
Subject: Re: [PATCH 18/34] KVM: x86/mmu: Handle page fault for private memory
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman, Anup Patel,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
	Alexander Viro, Christian Brauner, "Matthew Wilcox (Oracle)",
	Andrew Morton, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun, Chao Peng,
	Jarkko Sakkinen, Anish Moorthy, David Matlack, Yu Zhang,
	Isaku Yamahata, Mickaël Salaün, Vlastimil Babka, Vishal Annapurve,
	Ackerley Tng, Maciej Szmigiero, David Hildenbrand, Quentin Perret,
	Michael Roth, Wang, Liam Merwick, Isaku Yamahata,
	"Kirill A. Shutemov"
In-Reply-To: <20231105163040.14904-19-pbonzini@redhat.com>
References: <20231105163040.14904-1-pbonzini@redhat.com>
	<20231105163040.14904-19-pbonzini@redhat.com>
Shutemov" Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspamd-Queue-Id: 31351C0029 X-Rspam-User: X-Rspamd-Server: rspam05 X-Stat-Signature: nyb4ftgr1cpbxc4utq5oobx5maay5poc X-HE-Tag: 1699268106-536075 X-HE-Meta: U2FsdGVkX18mUTPua1NWJZPVkffqHVsCa5UcT+niqtv9ldsdLDkT7OPOjxtM28J1PuvXGZ0eXpY/K5OOaKwi/1BMJpUdwffn/9nVhCFJJ0VjdgS6nzN7R9Z9ctq8d2kqDMq50hmpJ32NInAL7udajMpO5yMr/o71uvXLfk8kTdG4v52dwE9bE3hmj0TygsO+N5aqY2TcarEERNVqq8EgWOPV+ReMYN7UMsdPkCtRebIRY7WvdjoOraGLk3Zb0AZZizkC1lirmV/tBR0Gmhmisy9IW4PB0L1yZpXXFR56XAM5igcA3P/x2y3h8NFKH3CREO0uWU9pYNPZHkj/Qcp7MOyfodGtDW0n9Mgz3ZRLWWV9or8f23rE5KaGqQLc1V1uk0886cKT7VZi0jl0SrvuZtubfKGnMRev9dYLsbLar0awdiMeFnCGEDCUtFf2YerK591einhAulTGjwlRKuUzStgw8ZVFMPY4DAYjXaCsEydwRK2BtfegGvY37V08yH25TUADf5vKOyS2urZtQkL32Q8NwbM6XP5mBinFzvvKecmOMmMEZLBK0ovSR5gFKhqK4MC8SIQFykZlOeHEYmkJ8p6s1eZXAgQdjhlTZ9Mvbs1f/r1PUgBruBdOOi5UuBTzQztSr00wpYcwQ9cvD4Lmh0nwqoV6X0zP9I1cLDDTAg5AvGdXifdcnFxzRSClOHQP3XasitwS+nLwnNuZwyKKg7qpOfd2vsc/26nQ0YA5YuWXPEOdzi1FIzcygnaoLnNaPBARTuWvfLKc+zewOavwkYe68tP6EbcXlI0psFE8aNOhkyq66fwx6I3tQGca+YY+aq2/W92Zeg50hqwjVugDX9toGqu50lAn4RpGdKFvxrnCXOKrc6z2MwG9FqRYG8XijGpEv3QF4zJY+1DiMmoJhcnh9ccndREaNpO50GUka5IBhtYIPYqVOBHNq22rZZtPBuwW6WMgKYiea7YhejT ch/82GwA J96pXSDSmsLok3QE7SbMXy1P7ATF7kHSq/vuJw1dN+k/YpzBQGpIMdediNGe+CH5KQjCJICgmAMip4EH4VRaczatxdYeJWCX/bOrzLEy0D7DICcKTwQaaafrm4mM9ObvLMZPNvYfpmm3Xp+Pdu/hrl3VUSxXSCMAOjNUa3FikMsIUzJ8NUBeuaS3jiy9shhX4/SkuZa6mw/aw/GxX9xa/QavUflPjtNgjUJxWmSCpygjL7Y2ygPs24UTGl/hH3FNpeQ9lOUJ1d2VAE7mBNjfL792U81YlQ2y2OWZ0MYHZut1SwN+PRpLvUXvLAz6AJ2bdCygMGQu76I7vR7USGRC/Hw6NC+V7rnG+wTNdmqtHgM7tf0OZ/+24da/USK32fRbnEqiRMud6gGESwa4= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Hi, On Sun, Nov 5, 2023 at 4:33=E2=80=AFPM Paolo Bonzini = wrote: > > From: Chao Peng > > Add support for resolving page faults on guest private memory for VMs > that differentiate between "shared" and "private" memory. For such VMs, > KVM_MEM_PRIVATE memslots can include both fd-based private memory and KVM_MEM_PRIVATE -> KVM_MEM_GUEST_MEMFD Cheers, /fuad > hva-based shared memory, and KVM needs to map in the "correct" variant, > i.e. KVM needs to map the gfn shared/private as appropriate based on the > current state of the gfn's KVM_MEMORY_ATTRIBUTE_PRIVATE flag. > > For AMD's SEV-SNP and Intel's TDX, the guest effectively gets to request > shared vs. private via a bit in the guest page tables, i.e. what the gues= t > wants may conflict with the current memory attributes. To support such > "implicit" conversion requests, exit to user with KVM_EXIT_MEMORY_FAULT > to forward the request to userspace. Add a new flag for memory faults, > KVM_MEMORY_EXIT_FLAG_PRIVATE, to communicate whether the guest wants to > map memory as shared vs. private. > > Like KVM_MEMORY_ATTRIBUTE_PRIVATE, use bit 3 for flagging private memory > so that KVM can use bits 0-2 for capturing RWX behavior if/when userspace > needs such information, e.g. a likely user of KVM_EXIT_MEMORY_FAULT is to > exit on missing mappings when handling guest page fault VM-Exits. In > that case, userspace will want to know RWX information in order to > correctly/precisely resolve the fault. > > Note, private memory *must* be backed by guest_memfd, i.e. shared mapping= s > always come from the host userspace page tables, and private mappings > always come from a guest_memfd instance. 
>
> Co-developed-by: Yu Zhang
> Signed-off-by: Yu Zhang
> Signed-off-by: Chao Peng
> Co-developed-by: Sean Christopherson
> Signed-off-by: Sean Christopherson
> Reviewed-by: Fuad Tabba
> Tested-by: Fuad Tabba
> Message-Id: <20231027182217.3615211-21-seanjc@google.com>
> Signed-off-by: Paolo Bonzini
> ---
>  Documentation/virt/kvm/api.rst  |   8 ++-
>  arch/x86/kvm/mmu/mmu.c          | 101 ++++++++++++++++++++++++++++++--
>  arch/x86/kvm/mmu/mmu_internal.h |   1 +
>  include/linux/kvm_host.h        |   8 ++-
>  include/uapi/linux/kvm.h        |   1 +
>  5 files changed, 110 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 6d681f45969e..4a9a291380ad 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6953,6 +6953,7 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
>
>  		/* KVM_EXIT_MEMORY_FAULT */
>  		struct {
> + #define KVM_MEMORY_EXIT_FLAG_PRIVATE (1ULL << 3)
>  			__u64 flags;
>  			__u64 gpa;
>  			__u64 size;
> @@ -6961,8 +6962,11 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
>  KVM_EXIT_MEMORY_FAULT indicates the vCPU has encountered a memory fault that
>  could not be resolved by KVM.  The 'gpa' and 'size' (in bytes) describe the
>  guest physical address range [gpa, gpa + size) of the fault.  The 'flags' field
> -describes properties of the faulting access that are likely pertinent.
> -Currently, no flags are defined.
> +describes properties of the faulting access that are likely pertinent:
> +
> + - KVM_MEMORY_EXIT_FLAG_PRIVATE - When set, indicates the memory fault occurred
> +   on a private memory access.  When clear, indicates the fault occurred on a
> +   shared access.
>
>  Note!  KVM_EXIT_MEMORY_FAULT is unique among all KVM exit reasons in that it
>  accompanies a return code of '-1', not '0'!  errno will always be set to EFAULT
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index f5c6b0643645..754a5aaebee5 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3147,9 +3147,9 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
>  	return level;
>  }
>
> -int kvm_mmu_max_mapping_level(struct kvm *kvm,
> -			      const struct kvm_memory_slot *slot, gfn_t gfn,
> -			      int max_level)
> +static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
> +				       const struct kvm_memory_slot *slot,
> +				       gfn_t gfn, int max_level, bool is_private)
>  {
>  	struct kvm_lpage_info *linfo;
>  	int host_level;
> @@ -3161,6 +3161,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
>  		break;
>  	}
>
> +	if (is_private)
> +		return max_level;
> +
>  	if (max_level == PG_LEVEL_4K)
>  		return PG_LEVEL_4K;
>
> @@ -3168,6 +3171,16 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
>  	return min(host_level, max_level);
>  }
>
> +int kvm_mmu_max_mapping_level(struct kvm *kvm,
> +			      const struct kvm_memory_slot *slot, gfn_t gfn,
> +			      int max_level)
> +{
> +	bool is_private = kvm_slot_can_be_private(slot) &&
> +			  kvm_mem_is_private(kvm, gfn);
> +
> +	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
> +}
> +
>  void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_memory_slot *slot = fault->slot;
> @@ -3188,8 +3201,9 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  	 * Enforce the iTLB multihit workaround after capturing the requested
>  	 * level, which will be used to do precise, accurate accounting.
>  	 */
> -	fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
> -						     fault->gfn, fault->max_level);
> +	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
> +						       fault->gfn, fault->max_level,
> +						       fault->is_private);
>  	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
>  		return;
>
> @@ -4269,6 +4283,55 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
>  	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);
>  }
>
> +static inline u8 kvm_max_level_for_order(int order)
> +{
> +	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
> +
> +	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
> +			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
> +			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
> +
> +	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
> +		return PG_LEVEL_1G;
> +
> +	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
> +		return PG_LEVEL_2M;
> +
> +	return PG_LEVEL_4K;
> +}
> +
> +static void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
> +					      struct kvm_page_fault *fault)
> +{
> +	kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
> +				      PAGE_SIZE, fault->write, fault->exec,
> +				      fault->is_private);
> +}
> +
> +static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
> +				   struct kvm_page_fault *fault)
> +{
> +	int max_order, r;
> +
> +	if (!kvm_slot_can_be_private(fault->slot)) {
> +		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> +		return -EFAULT;
> +	}
> +
> +	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
> +			     &max_order);
> +	if (r) {
> +		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> +		return r;
> +	}
> +
> +	fault->max_level = min(kvm_max_level_for_order(max_order),
> +			       fault->max_level);
> +	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
> +
> +	return RET_PF_CONTINUE;
> +}
> +
>  static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_memory_slot *slot = fault->slot;
> @@ -4301,6 +4364,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>  		return RET_PF_EMULATE;
>  	}
>
> +	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
> +		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> +		return -EFAULT;
> +	}
> +
> +	if (fault->is_private)
> +		return kvm_faultin_pfn_private(vcpu, fault);
> +
>  	async = false;
>  	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
>  					  fault->write, &fault->map_writable,
> @@ -7188,6 +7259,26 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
>  }
>
>  #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> +bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
> +					struct kvm_gfn_range *range)
> +{
> +	/*
> +	 * Zap SPTEs even if the slot can't be mapped PRIVATE.  KVM x86 only
> +	 * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
> +	 * can simply ignore such slots.  But if userspace is making memory
> +	 * PRIVATE, then KVM must prevent the guest from accessing the memory
> +	 * as shared.  And if userspace is making memory SHARED and this point
> +	 * is reached, then at least one page within the range was previously
> +	 * PRIVATE, i.e. the slot's possible hugepage ranges are changing.
> +	 * Zapping SPTEs in this case ensures KVM will reassess whether or not
> +	 * a hugepage can be used for affected ranges.
> +	 */
> +	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
> +		return false;
> +
> +	return kvm_unmap_gfn_range(kvm, range);
> +}
> +
>  static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
>  				int level)
>  {
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index decc1f153669..86c7cb692786 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -201,6 +201,7 @@ struct kvm_page_fault {
>
>  	/* Derived from mmu and global state.  */
>  	const bool is_tdp;
> +	const bool is_private;
>  	const bool nx_huge_page_workaround_enabled;
>
>  	/*
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index a6de526c0426..67dfd4d79529 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2357,14 +2357,18 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
>  #define KVM_DIRTY_RING_MAX_ENTRIES  65536
>
>  static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
> -						 gpa_t gpa, gpa_t size)
> +						 gpa_t gpa, gpa_t size,
> +						 bool is_write, bool is_exec,
> +						 bool is_private)
>  {
>  	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
>  	vcpu->run->memory_fault.gpa = gpa;
>  	vcpu->run->memory_fault.size = size;
>
> -	/* Flags are not (yet) defined or communicated to userspace. */
> +	/* RWX flags are not (yet) defined or communicated to userspace. */
>  	vcpu->run->memory_fault.flags = 0;
> +	if (is_private)
> +		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
>  }
>
>  #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 2802d10aa88c..8eb10f560c69 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -535,6 +535,7 @@ struct kvm_run {
>  		} notify;
>  		/* KVM_EXIT_MEMORY_FAULT */
>  		struct {
> +#define KVM_MEMORY_EXIT_FLAG_PRIVATE (1ULL << 3)
>  			__u64 flags;
>  			__u64 gpa;
>  			__u64 size;
> --
> 2.39.1
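A footnote on the order -> level arithmetic in kvm_max_level_for_order()
above: guest_memfd reports the backing folio's allocation order, and the
helper translates that into the largest x86 page-table level the folio
can cover. The standalone sketch below replays that mapping outside the
kernel; the shift values 9 and 18 are the x86 KVM_HPAGE_GFN_SHIFT values
for 2M and 1G written out as constants, so treat it as an illustration
rather than kernel code.

#include <stdio.h>

/* x86 page-level numbering: PG_LEVEL_4K = 1, PG_LEVEL_2M = 2, PG_LEVEL_1G = 3. */
enum { PG_LEVEL_4K = 1, PG_LEVEL_2M, PG_LEVEL_1G };

static int max_level_for_order(int order)
{
	if (order >= 18)	/* a 1G page spans 2^18 4K pages */
		return PG_LEVEL_1G;
	if (order >= 9)		/* a 2M page spans 2^9 4K pages */
		return PG_LEVEL_2M;
	return PG_LEVEL_4K;
}

int main(void)
{
	/* An order-0 folio forces a 4K private mapping, order 9 permits
	 * 2M, and order 18 permits 1G; kvm_faultin_pfn_private() then
	 * takes the min of this and the fault's existing max_level.
	 */
	printf("order  0 -> level %d\n", max_level_for_order(0));	/* 1 */
	printf("order  9 -> level %d\n", max_level_for_order(9));	/* 2 */
	printf("order 18 -> level %d\n", max_level_for_order(18));	/* 3 */
	return 0;
}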