From: Yu Zhao <yuzhao@google.com>
Date: Tue, 20 Jun 2023 18:38:12 -0600
Subject: Re: [PATCH mm-unstable v2 07/10] kvm/powerpc: add kvm_arch_test_clear_young()
To: Nicholas Piggin
Cc: Andrew Morton, Paolo Bonzini, Alistair Popple, Anup Patel, Ben Gardon,
    Borislav Petkov, Catalin Marinas, Chao Peng, Christophe Leroy,
    Dave Hansen, Fabiano Rosas, Gaosheng Cui, Gavin Shan, "H. Peter Anvin",
    Ingo Molnar, James Morse, "Jason A. Donenfeld", Jason Gunthorpe,
    Jonathan Corbet, Marc Zyngier, Masami Hiramatsu, Michael Ellerman,
    Michael Larabel, Mike Rapoport, Oliver Upton, Paul Mackerras, Peter Xu,
    Sean Christopherson, Steven Rostedt, Suzuki K Poulose, Thomas Gleixner,
    Thomas Huth, Will Deacon, Zenghui Yu, kvmarm@lists.linux.dev,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-trace-kernel@vger.kernel.org, x86@kernel.org, linux-mm@google.com
References: <20230526234435.662652-1-yuzhao@google.com>
            <20230526234435.662652-8-yuzhao@google.com>
On Tue, Jun 20, 2023 at 1:48 AM Nicholas Piggin wrote:
>
> On Sat May 27, 2023 at 9:44 AM AEST, Yu Zhao wrote:
> > Implement kvm_arch_test_clear_young() to support the fast path in
> > mmu_notifier_ops->test_clear_young().
> >
> > It focuses on a simple case, i.e., radix MMU sets the accessed bit in
> > KVM PTEs and VMs are not nested, where it can rely on RCU and
> > pte_xchg() to safely clear the accessed bit without taking
> > kvm->mmu_lock. Complex cases fall back to the existing slow path
> > where kvm->mmu_lock is then taken.
> >
> > Signed-off-by: Yu Zhao
> > ---
> >  arch/powerpc/include/asm/kvm_host.h    |  8 ++++
> >  arch/powerpc/include/asm/kvm_ppc.h     |  1 +
> >  arch/powerpc/kvm/book3s.c              |  6 +++
> >  arch/powerpc/kvm/book3s.h              |  1 +
> >  arch/powerpc/kvm/book3s_64_mmu_radix.c | 59 ++++++++++++++++++++++++++
> >  arch/powerpc/kvm/book3s_hv.c           |  5 +++
> >  6 files changed, 80 insertions(+)
> >
> > diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> > index 14ee0dece853..75c260ea8a9e 100644
> > --- a/arch/powerpc/include/asm/kvm_host.h
> > +++ b/arch/powerpc/include/asm/kvm_host.h
> > @@ -883,4 +883,12 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> >  static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
> >  static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
> >
> > +#define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
> > +static inline bool kvm_arch_has_test_clear_young(void)
> > +{
> > +	return IS_ENABLED(CONFIG_KVM_BOOK3S_HV_POSSIBLE) &&
> > +	       cpu_has_feature(CPU_FTR_HVMODE) && cpu_has_feature(CPU_FTR_ARCH_300) &&
> > +	       radix_enabled();
>
> This could probably be radix_enabled() && !kvmhv_on_pseries().

Will do. (I used !kvmhv_on_pseries() in v1 but had second thoughts on
moving kvmhv_on_pseries() into this file.)

> Although unclear why not nested hypervisor... I'd have to think about it a bit
> more. Do you have any idea what might go wrong, or just didn't have the
> time to consider it? (Not just powerpc nested but any others).

Yes, this series excludes nested KVM support on all architectures. The
common reason for such a decision on powerpc and x86 (aarch64 doesn't
support nested yet) is that it's quite challenging to make the rmap, a
complex data structure that maps one PFN to multiple GFNs, lockless.
(See kvmhv_insert_nest_rmap().)

It might be possible; however, the potential ROI would be in question.
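To illustrate the point, here is a minimal, hypothetical sketch (the real
structure is the nested rmap built by kvmhv_insert_nest_rmap(); the names
below are made up). Clearing the accessed bit for one PFN means walking all
of its GFN entries, which is easy under kvm->mmu_lock but would need an
RCU-protected list and deferred freeing to be safe locklessly:

#include <stdbool.h>

/*
 * Hypothetical sketch of a reverse map where one host PFN is mapped by
 * multiple guest GFNs (cf. kvmhv_insert_nest_rmap()); names and layout
 * are illustrative, not the powerpc data structure.
 */
struct rmap_entry {
	struct rmap_entry *next;	/* per-PFN singly linked list */
	unsigned long gfn;		/* one GFN mapping this PFN */
};

/* Stand-in for the architecture-specific per-GFN accessed-bit clear. */
static bool clear_accessed_bit(unsigned long gfn)
{
	(void)gfn;
	return false;
}

static bool test_clear_young_pfn(struct rmap_entry *head)
{
	struct rmap_entry *e;
	bool young = false;

	/*
	 * Trivial under kvm->mmu_lock.  Without the lock, concurrent
	 * insertions/removals must keep this list walkable and must not
	 * free entries until readers are done (e.g., via RCU), which is
	 * the extra complexity referred to above.
	 */
	for (e = head; e; e = e->next)
		young |= clear_accessed_bit(e->gfn);

	return young;
}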
> > +}
> > +
> >  #endif /* __POWERPC_KVM_HOST_H__ */
> > diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> > index 79a9c0bb8bba..ff1af6a7b44f 100644
> > --- a/arch/powerpc/include/asm/kvm_ppc.h
> > +++ b/arch/powerpc/include/asm/kvm_ppc.h
> > @@ -287,6 +287,7 @@ struct kvmppc_ops {
> >  	bool (*unmap_gfn_range)(struct kvm *kvm, struct kvm_gfn_range *range);
> >  	bool (*age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
> >  	bool (*test_age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
> > +	bool (*test_clear_young)(struct kvm *kvm, struct kvm_gfn_range *range);
> >  	bool (*set_spte_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
> >  	void (*free_memslot)(struct kvm_memory_slot *slot);
> >  	int (*init_vm)(struct kvm *kvm);
> > diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> > index 686d8d9eda3e..37bf40b0c4ff 100644
> > --- a/arch/powerpc/kvm/book3s.c
> > +++ b/arch/powerpc/kvm/book3s.c
> > @@ -899,6 +899,12 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> >  	return kvm->arch.kvm_ops->test_age_gfn(kvm, range);
> >  }
> >
> > +bool kvm_arch_test_clear_young(struct kvm *kvm, struct kvm_gfn_range *range)
> > +{
> > +	return !kvm->arch.kvm_ops->test_clear_young ||
> > +	       kvm->arch.kvm_ops->test_clear_young(kvm, range);
> > +}
> > +
> >  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> >  {
> >  	return kvm->arch.kvm_ops->set_spte_gfn(kvm, range);
> > diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h
> > index 58391b4b32ed..fa2659e21ccc 100644
> > --- a/arch/powerpc/kvm/book3s.h
> > +++ b/arch/powerpc/kvm/book3s.h
> > @@ -12,6 +12,7 @@ extern void kvmppc_core_flush_memslot_hv(struct kvm *kvm,
> >  extern bool kvm_unmap_gfn_range_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> >  extern bool kvm_age_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> >  extern bool kvm_test_age_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> > +extern bool kvm_test_clear_young_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> >  extern bool kvm_set_spte_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> >
> >  extern int kvmppc_mmu_init_pr(struct kvm_vcpu *vcpu);
> > diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > index 3b65b3b11041..0a392e9a100a 100644
> > --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > @@ -1088,6 +1088,65 @@ bool kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
> >  	return ref;
> >  }
> >
> > +bool kvm_test_clear_young_hv(struct kvm *kvm, struct kvm_gfn_range *range)
> > +{
> > +	bool err;
> > +	gfn_t gfn = range->start;
> > +
> > +	rcu_read_lock();
> > +
> > +	err = !kvm_is_radix(kvm);
> > +	if (err)
> > +		goto unlock;
> > +
> > +	/*
> > +	 * Case 1:  This function                  kvmppc_switch_mmu_to_hpt()
> > +	 *
> > +	 *          rcu_read_lock()
> > +	 *          Test kvm_is_radix()            kvm->arch.radix = 0
> > +	 *          Use kvm->arch.pgtable          synchronize_rcu()
> > +	 *          rcu_read_unlock()
> > +	 *                                         kvmppc_free_radix()
> > +	 *
> > +	 *
> > +	 * Case 2:  This function                  kvmppc_switch_mmu_to_radix()
> > +	 *
> > +	 *                                         kvmppc_init_vm_radix()
> > +	 *                                         smp_wmb()
> > +	 *          Test kvm_is_radix()            kvm->arch.radix = 1
> > +	 *          smp_rmb()
> > +	 *          Use kvm->arch.pgtable
> > +	 */
> > +	smp_rmb();
>
> Comment could stand to expand slightly on what you are solving, in
> words.

Will do.

> If you use synchronize_rcu() on both sides, you wouldn't need the
> barrier, right?
Case 2 is about memory ordering, which is orthogonal to case 1 (RCU
freeing). So we need the r/w barriers regardless.

> > +	while (gfn < range->end) {
> > +		pte_t *ptep;
> > +		pte_t old, new;
> > +		unsigned int shift;
> > +
> > +		ptep = find_kvm_secondary_pte_unlocked(kvm, gfn * PAGE_SIZE, &shift);
> > +		if (!ptep)
> > +			goto next;
> > +
> > +		VM_WARN_ON_ONCE(!page_count(virt_to_page(ptep)));
>
> Not really appropriate at the KVM level. mm enforces this kind of
> thing (with notifiers).

Will remove this.

> > +
> > +		old = READ_ONCE(*ptep);
> > +		if (!pte_present(old) || !pte_young(old))
> > +			goto next;
> > +
> > +		new = pte_mkold(old);
> > +
> > +		if (kvm_should_clear_young(range, gfn))
> > +			pte_xchg(ptep, old, new);
>
> *Probably* will work. I can't think of a reason why not at the
> moment anyway :)

My reasoning:
* It should work if we only change the dedicated A bit, i.e., a bit not
  shared for other purposes, because races are safe (the case here).
* It may not work for x86 EPT without the A bit (excluded in this
  series), where accesses are trapped by the R/X bits, because races in
  changing the R/X bits can be unsafe.

> > +next:
> > +		gfn += shift ? BIT(shift - PAGE_SHIFT) : 1;
> > +	}
> > +unlock:
> > +	rcu_read_unlock();
> > +
> > +	return err;
> > +}
> > +
> >  /* Returns the number of PAGE_SIZE pages that are dirty */
> >  static int kvm_radix_test_clear_dirty(struct kvm *kvm,
> >  				struct kvm_memory_slot *memslot, int pagenum)
> > diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> > index 130bafdb1430..20a81ec9fde8 100644
> > --- a/arch/powerpc/kvm/book3s_hv.c
> > +++ b/arch/powerpc/kvm/book3s_hv.c
> > @@ -5262,6 +5262,8 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
> >  	spin_lock(&kvm->mmu_lock);
> >  	kvm->arch.radix = 0;
> >  	spin_unlock(&kvm->mmu_lock);
> > +	/* see the comments in kvm_test_clear_young_hv() */
>
> I'm guilty of such comments at times, but it wouldn't hurt to say
>	/* Finish concurrent kvm_test_clear_young_hv access to page tables */
>
> Then you know where to look for more info and you have a vague
> idea what it's for.

Will do.

> > +	synchronize_rcu();
> >
> >  	kvmppc_free_radix(kvm);
> >
> >  	lpcr = LPCR_VPM1;
> > @@ -5286,6 +5288,8 @@ int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
> >  	if (err)
> >  		return err;
> >  	kvmppc_rmap_reset(kvm);
> > +	/* see the comments in kvm_test_clear_young_hv() */
> > +	smp_wmb();
> >  	/* Mutual exclusion with kvm_unmap_gfn_range etc. */
> >  	spin_lock(&kvm->mmu_lock);
> >  	kvm->arch.radix = 1;
> > @@ -6185,6 +6189,7 @@ static struct kvmppc_ops kvm_ops_hv = {
> >  	.unmap_gfn_range = kvm_unmap_gfn_range_hv,
> >  	.age_gfn = kvm_age_gfn_hv,
> >  	.test_age_gfn = kvm_test_age_gfn_hv,
> > +	.test_clear_young = kvm_test_clear_young_hv,
> >  	.set_spte_gfn = kvm_set_spte_gfn_hv,
> >  	.free_memslot = kvmppc_core_free_memslot_hv,
> >  	.init_vm = kvmppc_core_init_vm_hv,
>
> Thanks for looking at the powerpc conversion!

Thanks for reviewing!
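P.S. For anyone who wants the ordering argument spelled out, here is a
condensed, illustrative rendering of the two cases (the function names
below are placeholders; the real code is kvmppc_switch_mmu_to_hpt(),
kvmppc_switch_mmu_to_radix() and kvm_test_clear_young_hv() as quoted
above):

/*
 * Condensed sketch keeping only the accesses relevant to the ordering
 * argument; this is not the patch itself.
 */

/* Writer, radix -> HPT (Case 1): defer freeing past all lockless readers. */
static void switch_to_hpt(struct kvm *kvm)
{
	kvm->arch.radix = 0;
	synchronize_rcu();		/* wait for rcu_read_lock() readers */
	kvmppc_free_radix(kvm);
}

/* Writer, HPT -> radix (Case 2): publish the page table before the flag. */
static void switch_to_radix(struct kvm *kvm)
{
	kvmppc_init_vm_radix(kvm);	/* populate kvm->arch.pgtable */
	smp_wmb();			/* order the init before the flag store */
	kvm->arch.radix = 1;
}

/* Lockless reader: if it sees radix == 1, it must see the initialized table. */
static void lockless_reader(struct kvm *kvm)
{
	rcu_read_lock();
	if (kvm->arch.radix) {
		smp_rmb();		/* pairs with smp_wmb() above */
		/* ... walk kvm->arch.pgtable and clear accessed bits ... */
	}
	rcu_read_unlock();
}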