Date: Tue, 11 Feb 2025 16:25:57 +0000
From: Quentin Perret <qperret@google.com>
To: Fuad Tabba <tabba@google.com>
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
	vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
	mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
	wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
	yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
	will@kernel.org, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
	hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
	jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
	jthoughton@google.com
Subject: Re: [PATCH v3 08/11] KVM: arm64: Handle guest_memfd()-backed guest page faults
References: <20250211121128.703390-1-tabba@google.com>
	<20250211121128.703390-9-tabba@google.com>
On Tuesday 11 Feb 2025 at 16:13:27 (+0000), Fuad Tabba wrote:
> Hi Quentin,
>
> On Tue, 11 Feb 2025 at 15:57, Quentin Perret wrote:
> >
> > Hey Fuad,
> >
> > On Tuesday 11 Feb 2025 at 12:11:24 (+0000), Fuad Tabba wrote:
> > > Add arm64 support for handling guest page faults on guest_memfd
> > > backed memslots.
> > >
> > > For now, the fault granule is restricted to PAGE_SIZE.
> > >
> > > Signed-off-by: Fuad Tabba
> > > ---
> > >  arch/arm64/kvm/mmu.c     | 84 ++++++++++++++++++++++++++--------------
> > >  include/linux/kvm_host.h |  5 +++
> > >  virt/kvm/kvm_main.c      |  5 ---
> > >  3 files changed, 61 insertions(+), 33 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > index b6c0acb2311c..305060518766 100644
> > > --- a/arch/arm64/kvm/mmu.c
> > > +++ b/arch/arm64/kvm/mmu.c
> > > @@ -1454,6 +1454,33 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> > >  	return vma->vm_flags & VM_MTE_ALLOWED;
> > >  }
> > >
> > > +static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> > > +			     gfn_t gfn, bool write_fault, bool *writable,
> > > +			     struct page **page, bool is_private)
> > > +{
> > > +	kvm_pfn_t pfn;
> > > +	int ret;
> > > +
> > > +	if (!is_private)
> > > +		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0, writable, page);
> > > +
> > > +	*writable = false;
> > > +
> > > +	if (WARN_ON_ONCE(write_fault && memslot_is_readonly(slot)))
> > > +		return KVM_PFN_ERR_NOSLOT_MASK;
> >
> > I believe this check is superfluous; we should decide to report an MMIO
> > exit to userspace for write faults to RO memslots and not get anywhere
> > near user_mem_abort(). And nit, but the error code should probably be
> > KVM_PFN_ERR_RO_FAULT or something instead?
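
Concretely, if the check were kept at all, the suggestion above would
amount to something like this (untested sketch, using the
KVM_PFN_ERR_RO_FAULT constant mentioned in the comment):

	if (WARN_ON_ONCE(write_fault && memslot_is_readonly(slot)))
		return KVM_PFN_ERR_RO_FAULT;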
>
> I tried to replicate the behavior of __kvm_faultin_pfn() here (but got
> the wrong error!). I think you're right though that in the arm64 case,
> this check isn't needed. Should I fix the return error and keep the
> warning though?

__kvm_faultin_pfn() will just set *writable to false if it finds an RO
memslot apparently, not return an error. So I'd vote for dropping that
check so we align with that behaviour.

> > > +
> > > +	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
> > > +	if (!ret) {
> > > +		*writable = write_fault;
> >
> > In normal KVM, if we're not dirty logging we'll actively map the page as
> > writable if both the memslot and the userspace mappings are writable.
> > With gmem, the latter doesn't make much sense, but essentially the
> > underlying page should really be writable (e.g. no CoW getting in the
> > way and such?). If so, then perhaps make this
> >
> >	*writable = !memslot_is_readonly(slot);
> >
> > Wdyt?
>
> Ack.
>
> > > +		return pfn;
> > > +	}
> > > +
> > > +	if (ret == -EHWPOISON)
> > > +		return KVM_PFN_ERR_HWPOISON;
> > > +
> > > +	return KVM_PFN_ERR_NOSLOT_MASK;
> > > +}
> > > +
> > >  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >  			  struct kvm_s2_trans *nested,
> > >  			  struct kvm_memory_slot *memslot, unsigned long hva,
> > > @@ -1461,25 +1488,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >  {
> > >  	int ret = 0;
> > >  	bool write_fault, writable;
> > > -	bool exec_fault, mte_allowed;
> > > +	bool exec_fault, mte_allowed = false;
> > >  	bool device = false, vfio_allow_any_uc = false;
> > >  	unsigned long mmu_seq;
> > >  	phys_addr_t ipa = fault_ipa;
> > >  	struct kvm *kvm = vcpu->kvm;
> > > -	struct vm_area_struct *vma;
> > > +	struct vm_area_struct *vma = NULL;
> > >  	short vma_shift;
> > >  	void *memcache;
> > > -	gfn_t gfn;
> > > +	gfn_t gfn = ipa >> PAGE_SHIFT;
> > >  	kvm_pfn_t pfn;
> > >  	bool logging_active = memslot_is_logging(memslot);
> > > -	bool force_pte = logging_active || is_protected_kvm_enabled();
> > > -	long vma_pagesize, fault_granule;
> > > +	bool is_private = kvm_mem_is_private(kvm, gfn);
> >
> > Just trying to understand the locking rule for the xarray behind this.
> > Is it kvm->srcu that protects it for reads here? Something else?
>
> I'm not sure I follow. Which xarray are you referring to?

Sorry, yes, that wasn't clear. I meant that kvm_mem_is_private() calls
kvm_get_memory_attributes() which indexes kvm->mem_attr_array. The
comment in struct kvm indicates that this xarray is protected by RCU
for readers, so I was just checking if we were relying on
kvm_handle_guest_abort() to take srcu_read_lock(&kvm->srcu) for us, or
if there was something else more subtle here.

Cheers,
Quentin

> > > +	bool force_pte = logging_active || is_private || is_protected_kvm_enabled();
> > > +	long vma_pagesize, fault_granule = PAGE_SIZE;
> > >  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
> > >  	struct kvm_pgtable *pgt;
> > >  	struct page *page;
> > >  	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
> > >
> > > -	if (fault_is_perm)
> > > +	if (fault_is_perm && !is_private)
> >
> > Nit: not strictly necessary I think.
>
> You're right.
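
On the locking question above, the pattern being relied upon would be
roughly the following (heavily simplified sketch of the arm64 abort
path; the SRCU read-side section is what would make the RCU-protected
kvm->mem_attr_array lookup in kvm_mem_is_private() safe):

	int idx;

	idx = srcu_read_lock(&vcpu->kvm->srcu);
	/*
	 * Stage-2 fault handling, including user_mem_abort() and hence
	 * the kvm_mem_is_private() lookup, runs inside this read-side
	 * critical section.
	 */
	srcu_read_unlock(&vcpu->kvm->srcu, idx);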
>
> Thanks,
> /fuad
>
> > >  		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
> > >  	write_fault = kvm_is_write_fault(vcpu);
> > >  	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> > > @@ -1510,24 +1538,30 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >  		return ret;
> > >  	}
> > >
> > > +	mmap_read_lock(current->mm);
> > > +
> > >  	/*
> > >  	 * Let's check if we will get back a huge page backed by hugetlbfs, or
> > >  	 * get block mapping for device MMIO region.
> > >  	 */
> > > -	mmap_read_lock(current->mm);
> > > -	vma = vma_lookup(current->mm, hva);
> > > -	if (unlikely(!vma)) {
> > > -		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
> > > -		mmap_read_unlock(current->mm);
> > > -		return -EFAULT;
> > > -	}
> > > +	if (!is_private) {
> > > +		vma = vma_lookup(current->mm, hva);
> > > +		if (unlikely(!vma)) {
> > > +			kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
> > > +			mmap_read_unlock(current->mm);
> > > +			return -EFAULT;
> > > +		}
> > >
> > > -	/*
> > > -	 * logging_active is guaranteed to never be true for VM_PFNMAP
> > > -	 * memslots.
> > > -	 */
> > > -	if (WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP)))
> > > -		return -EFAULT;
> > > +		/*
> > > +		 * logging_active is guaranteed to never be true for VM_PFNMAP
> > > +		 * memslots.
> > > +		 */
> > > +		if (WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP)))
> > > +			return -EFAULT;
> > > +
> > > +		vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> > > +		mte_allowed = kvm_vma_mte_allowed(vma);
> > > +	}
> > >
> > >  	if (force_pte)
> > >  		vma_shift = PAGE_SHIFT;
> > > @@ -1597,18 +1631,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >  		ipa &= ~(vma_pagesize - 1);
> > >  	}
> > >
> > > -	gfn = ipa >> PAGE_SHIFT;
> > > -	mte_allowed = kvm_vma_mte_allowed(vma);
> > > -
> > > -	vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> > > -
> > >  	/* Don't use the VMA after the unlock -- it may have vanished */
> > >  	vma = NULL;
> > >
> > >  	/*
> > >  	 * Read mmu_invalidate_seq so that KVM can detect if the results of
> > > -	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to
> > > -	 * acquiring kvm->mmu_lock.
> > > +	 * vma_lookup() or faultin_pfn() become stale prior to acquiring
> > > +	 * kvm->mmu_lock.
> > >  	 *
> > >  	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
> > >  	 * with the smp_wmb() in kvm_mmu_invalidate_end().
> > >  	 */
> > > @@ -1616,8 +1645,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >  	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
> > >  	mmap_read_unlock(current->mm);
> > >
> > > -	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
> > > -				&writable, &page);
> > > +	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_private);
> > >  	if (pfn == KVM_PFN_ERR_HWPOISON) {
> > >  		kvm_send_hwpoison_signal(hva, vma_shift);
> > >  		return 0;
> > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > index 39fd6e35c723..415c6274aede 100644
> > > --- a/include/linux/kvm_host.h
> > > +++ b/include/linux/kvm_host.h
> > > @@ -1882,6 +1882,11 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
> > >  	return gfn_to_memslot(kvm, gfn)->id;
> > >  }
> > >
> > > +static inline bool memslot_is_readonly(const struct kvm_memory_slot *slot)
> > > +{
> > > +	return slot->flags & KVM_MEM_READONLY;
> > > +}
> > > +
> > >  static inline gfn_t
> > >  hva_to_gfn_memslot(unsigned long hva, struct kvm_memory_slot *slot)
> > >  {
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index 38f0f402ea46..3e40acb9f5c0 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -2624,11 +2624,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
> > >  	return size;
> > >  }
> > >
> > > -static bool memslot_is_readonly(const struct kvm_memory_slot *slot)
> > > -{
> > > -	return slot->flags & KVM_MEM_READONLY;
> > > -}
> > > -
> > >  static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot, gfn_t gfn,
> > >  					gfn_t *nr_pages, bool write)
> > >  {
> > > --
> > > 2.48.1.502.g6dc24dfdaf-goog
> > >
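
For reference, an untested sketch of faultin_pfn() with the feedback
from this thread folded in (RO-memslot WARN dropped, and *writable
derived from the memslot flags rather than from write_fault) could look
like this; it is not the posted patch, just one way the suggestions
could land:

	static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
				     gfn_t gfn, bool write_fault, bool *writable,
				     struct page **page, bool is_private)
	{
		kvm_pfn_t pfn;
		int ret;

		/* Shared faults keep going through the regular GUP-based path. */
		if (!is_private)
			return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0,
						 writable, page);

		*writable = false;

		ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
		if (!ret) {
			/* No userspace mapping to consult; follow the memslot flags. */
			*writable = !memslot_is_readonly(slot);
			return pfn;
		}

		if (ret == -EHWPOISON)
			return KVM_PFN_ERR_HWPOISON;

		return KVM_PFN_ERR_NOSLOT_MASK;
	}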