From mboxrd@z Thu Jan  1 00:00:00 1970
From: Fuad Tabba <tabba@google.com>
Date: Tue, 11 Feb 2025 16:34:02 +0000
Subject: Re: [PATCH v3 08/11] KVM: arm64: Handle guest_memfd()-backed guest page faults
To: Quentin Perret
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
    vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
    hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com
References: <20250211121128.703390-1-tabba@google.com> <20250211121128.703390-9-tabba@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Hi Quentin,

On Tue, 11 Feb 2025 at 16:26, Quentin Perret wrote:
>
> On Tuesday 11 Feb 2025 at 16:13:27 (+0000), Fuad Tabba wrote:
> > Hi Quentin,
> >
> > On Tue, 11 Feb 2025 at 15:57, Quentin Perret wrote:
> > >
> > > Hey Fuad,
> > >
> > > On Tuesday 11 Feb 2025 at 12:11:24 (+0000), Fuad Tabba wrote:
> > > > Add arm64 support for handling guest page faults on guest_memfd
> > > > backed memslots.
> > > >
> > > > For now, the fault granule is restricted to PAGE_SIZE.
> > > >
> > > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > > ---
> > > >  arch/arm64/kvm/mmu.c     | 84 ++++++++++++++++++++++++++--------------
> > > >  include/linux/kvm_host.h |  5 +++
> > > >  virt/kvm/kvm_main.c      |  5 ---
> > > >  3 files changed, 61 insertions(+), 33 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > > index b6c0acb2311c..305060518766 100644
> > > > --- a/arch/arm64/kvm/mmu.c
> > > > +++ b/arch/arm64/kvm/mmu.c
> > > > @@ -1454,6 +1454,33 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> > > >  	return vma->vm_flags & VM_MTE_ALLOWED;
> > > >  }
> > > >
> > > > +static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> > > > +			     gfn_t gfn, bool write_fault, bool *writable,
> > > > +			     struct page **page, bool is_private)
> > > > +{
> > > > +	kvm_pfn_t pfn;
> > > > +	int ret;
> > > > +
> > > > +	if (!is_private)
> > > > +		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0, writable, page);
> > > > +
> > > > +	*writable = false;
> > > > +
> > > > +	if (WARN_ON_ONCE(write_fault && memslot_is_readonly(slot)))
> > > > +		return KVM_PFN_ERR_NOSLOT_MASK;
> > >
> > > I believe this check is superfluous; we should decide to report an MMIO
> > > exit to userspace for write faults to RO memslots and not get anywhere
> > > near user_mem_abort(). And a nit, but the error code should probably be
> > > KVM_PFN_ERR_RO_FAULT or something instead?
> >
> > I tried to replicate the behavior of __kvm_faultin_pfn() here (but got
> > the wrong error!). I think you're right, though, that in the arm64 case
> > this check isn't needed. Should I fix the return error and keep the
> > warning though?
>
> __kvm_faultin_pfn() will just set *writable to false if it finds an RO
> memslot apparently, not return an error. So I'd vote for dropping that
> check so we align with that behaviour.

Ack.

> > > > +
> > > > +	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
> > > > +	if (!ret) {
> > > > +		*writable = write_fault;
> > >
> > > In normal KVM, if we're not dirty logging we'll actively map the page as
> > > writable if both the memslot and the userspace mappings are writable.
> > > With gmem, the latter doesn't make much sense, but essentially the
> > > underlying page should really be writable (e.g. no CoW getting in the
> > > way and such?). If so, then perhaps make this
> > >
> > >	*writable = !memslot_is_readonly(slot);
> > >
> > > Wdyt?
> >
> > Ack.
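
For the record, here is a rough, untested sketch of what faultin_pfn()
would look like with both changes folded in (the RO warning dropped, and
*writable derived from the memslot flags), just to make sure we're
talking about the same thing:

  static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
  			     gfn_t gfn, bool write_fault, bool *writable,
  			     struct page **page, bool is_private)
  {
  	kvm_pfn_t pfn;
  	int ret;

  	if (!is_private)
  		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0,
  					 writable, page);

  	/* Default to a non-writable mapping until the fault-in succeeds. */
  	*writable = false;

  	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
  	if (!ret) {
  		/*
  		 * Mirror __kvm_faultin_pfn(): an RO memslot yields a
  		 * non-writable mapping rather than an error.
  		 */
  		*writable = !memslot_is_readonly(slot);
  		return pfn;
  	}

  	if (ret == -EHWPOISON)
  		return KVM_PFN_ERR_HWPOISON;

  	return KVM_PFN_ERR_NOSLOT_MASK;
  }

(Names and error codes as in the current patch; only the two lines we
discussed above change.)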
> > > > +		return pfn;
> > > > +	}
> > > > +
> > > > +	if (ret == -EHWPOISON)
> > > > +		return KVM_PFN_ERR_HWPOISON;
> > > > +
> > > > +	return KVM_PFN_ERR_NOSLOT_MASK;
> > > > +}
> > > > +
> > > >  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > >  			  struct kvm_s2_trans *nested,
> > > >  			  struct kvm_memory_slot *memslot, unsigned long hva,
> > > > @@ -1461,25 +1488,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > >  {
> > > >  	int ret = 0;
> > > >  	bool write_fault, writable;
> > > > -	bool exec_fault, mte_allowed;
> > > > +	bool exec_fault, mte_allowed = false;
> > > >  	bool device = false, vfio_allow_any_uc = false;
> > > >  	unsigned long mmu_seq;
> > > >  	phys_addr_t ipa = fault_ipa;
> > > >  	struct kvm *kvm = vcpu->kvm;
> > > > -	struct vm_area_struct *vma;
> > > > +	struct vm_area_struct *vma = NULL;
> > > >  	short vma_shift;
> > > >  	void *memcache;
> > > > -	gfn_t gfn;
> > > > +	gfn_t gfn = ipa >> PAGE_SHIFT;
> > > >  	kvm_pfn_t pfn;
> > > >  	bool logging_active = memslot_is_logging(memslot);
> > > > -	bool force_pte = logging_active || is_protected_kvm_enabled();
> > > > -	long vma_pagesize, fault_granule;
> > > > +	bool is_private = kvm_mem_is_private(kvm, gfn);
> > >
> > > Just trying to understand the locking rule for the xarray behind this.
> > > Is it kvm->srcu that protects it for reads here? Something else?
> >
> > I'm not sure I follow. Which xarray are you referring to?
>
> Sorry, yes, that wasn't clear. I meant that kvm_mem_is_private() calls
> kvm_get_memory_attributes() which indexes kvm->mem_attr_array. The
> comment in struct kvm indicates that this xarray is protected by RCU for
> readers, so I was just checking if we were relying on
> kvm_handle_guest_abort() to take srcu_read_lock(&kvm->srcu) for us, or
> if there was something else more subtle here.

I was kind of afraid that people would be confused by this, and I
commented on it in the commit message of the earlier patch:
https://lore.kernel.org/all/20250211121128.703390-6-tabba@google.com/

> Note that the word "private" in the name of the function
> kvm_mem_is_private() doesn't necessarily indicate that the memory
> isn't shared, but is due to the history and evolution of
> guest_memfd and the various names it has received. In effect,
> this function is used to multiplex between the path of a normal
> page fault and the path of a guest_memfd backed page fault.

What kvm_mem_is_private() checks here is a property of the memslot
itself. No xarrays harmed in the process :)
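
To illustrate the distinction, here are two hypothetical helpers (made-up
names, not code from this series; the first mirrors the generic
CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES flavour of kvm_mem_is_private()):

  /* Attributes path: indexes the RCU-protected kvm->mem_attr_array. */
  static inline bool gfn_has_private_attr(struct kvm *kvm, gfn_t gfn)
  {
  	return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
  }

  /*
   * Memslot property: a plain flag test on a slot that was already looked
   * up under srcu_read_lock(&kvm->srcu), which kvm_handle_guest_abort()
   * takes before walking the memslots.
   */
  static inline bool memslot_is_guest_memfd(const struct kvm_memory_slot *slot)
  {
  	return slot && (slot->flags & KVM_MEM_GUEST_MEMFD);
  }

i.e. the memslot-flag flavour needs no RCU read-side section beyond the
SRCU protection the memslot lookup already has.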
Cheers,
/fuad

> Cheers,
> Quentin
>
> > > > +	bool force_pte = logging_active || is_private || is_protected_kvm_enabled();
> > > > +	long vma_pagesize, fault_granule = PAGE_SIZE;
> > > >  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
> > > >  	struct kvm_pgtable *pgt;
> > > >  	struct page *page;
> > > >  	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
> > > >
> > > > -	if (fault_is_perm)
> > > > +	if (fault_is_perm && !is_private)
> > >
> > > Nit: not strictly necessary I think.
> >
> > You're right.
> >
> > Thanks,
> > /fuad
> >
> > > >  		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
> > > >  	write_fault = kvm_is_write_fault(vcpu);
> > > >  	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> > > > @@ -1510,24 +1538,30 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > >  		return ret;
> > > >  	}
> > > >
> > > > +	mmap_read_lock(current->mm);
> > > > +
> > > >  	/*
> > > >  	 * Let's check if we will get back a huge page backed by hugetlbfs, or
> > > >  	 * get block mapping for device MMIO region.
> > > >  	 */
> > > > -	mmap_read_lock(current->mm);
> > > > -	vma = vma_lookup(current->mm, hva);
> > > > -	if (unlikely(!vma)) {
> > > > -		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
> > > > -		mmap_read_unlock(current->mm);
> > > > -		return -EFAULT;
> > > > -	}
> > > > +	if (!is_private) {
> > > > +		vma = vma_lookup(current->mm, hva);
> > > > +		if (unlikely(!vma)) {
> > > > +			kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
> > > > +			mmap_read_unlock(current->mm);
> > > > +			return -EFAULT;
> > > > +		}
> > > >
> > > > -	/*
> > > > -	 * logging_active is guaranteed to never be true for VM_PFNMAP
> > > > -	 * memslots.
> > > > -	 */
> > > > -	if (WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP)))
> > > > -		return -EFAULT;
> > > > +		/*
> > > > +		 * logging_active is guaranteed to never be true for VM_PFNMAP
> > > > +		 * memslots.
> > > > +		 */
> > > > +		if (WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP)))
> > > > +			return -EFAULT;
> > > > +
> > > > +		vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> > > > +		mte_allowed = kvm_vma_mte_allowed(vma);
> > > > +	}
> > > >
> > > >  	if (force_pte)
> > > >  		vma_shift = PAGE_SHIFT;
> > > > @@ -1597,18 +1631,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > > >  		ipa &= ~(vma_pagesize - 1);
> > > >  	}
> > > >
> > > > -	gfn = ipa >> PAGE_SHIFT;
> > > > -	mte_allowed = kvm_vma_mte_allowed(vma);
> > > > -
> > > > -	vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> > > > -
> > > >  	/* Don't use the VMA after the unlock -- it may have vanished */
> > > >  	vma = NULL;
> > > >
> > > >  	/*
> > > >  	 * Read mmu_invalidate_seq so that KVM can detect if the results of
> > > > -	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to
> > > > -	 * acquiring kvm->mmu_lock.
> > > > +	 * vma_lookup() or faultin_pfn() become stale prior to acquiring
> > > > +	 * kvm->mmu_lock.
> > > >  	 *
> > > >  	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
> > > >  	 * with the smp_wmb() in kvm_mmu_invalidate_end().
> > > >  	 */
> > > >  	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
> > > >  	mmap_read_unlock(current->mm);
> > > >
> > > > -	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
> > > > -				&writable, &page);
> > > > +	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_private);
> > > >  	if (pfn == KVM_PFN_ERR_HWPOISON) {
> > > >  		kvm_send_hwpoison_signal(hva, vma_shift);
> > > >  		return 0;
> > > >  	}
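
(Aside, for anyone following along: the mmu_seq read above pairs with a
retry check taken later under the MMU lock. Roughly, simplified from the
existing user_mem_abort() flow rather than anything this diff changes:

  	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
  	mmap_read_unlock(current->mm);	/* implicit smp_rmb() */

  	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_private);
  	...
  	read_lock(&kvm->mmu_lock);
  	if (mmu_invalidate_retry(kvm, mmu_seq)) {
  		/* An invalidation raced with the fault-in; take the fault again. */
  		ret = -EAGAIN;
  		goto out_unlock;
  	}

so a racing invalidation simply forces the fault to be retried.)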
> > > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > > index 39fd6e35c723..415c6274aede 100644
> > > > --- a/include/linux/kvm_host.h
> > > > +++ b/include/linux/kvm_host.h
> > > > @@ -1882,6 +1882,11 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
> > > >  	return gfn_to_memslot(kvm, gfn)->id;
> > > >  }
> > > >
> > > > +static inline bool memslot_is_readonly(const struct kvm_memory_slot *slot)
> > > > +{
> > > > +	return slot->flags & KVM_MEM_READONLY;
> > > > +}
> > > > +
> > > >  static inline gfn_t
> > > >  hva_to_gfn_memslot(unsigned long hva, struct kvm_memory_slot *slot)
> > > >  {
> > > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > > index 38f0f402ea46..3e40acb9f5c0 100644
> > > > --- a/virt/kvm/kvm_main.c
> > > > +++ b/virt/kvm/kvm_main.c
> > > > @@ -2624,11 +2624,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
> > > >  	return size;
> > > >  }
> > > >
> > > > -static bool memslot_is_readonly(const struct kvm_memory_slot *slot)
> > > > -{
> > > > -	return slot->flags & KVM_MEM_READONLY;
> > > > -}
> > > > -
> > > >  static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot, gfn_t gfn,
> > > >  				       gfn_t *nr_pages, bool write)
> > > >  {
> > > > --
> > > > 2.48.1.502.g6dc24dfdaf-goog
> > > >