Date: Tue, 20 Jun 2023 17:47:47 +1000
From: "Nicholas Piggin" <npiggin@gmail.com>
To: "Yu Zhao", "Andrew Morton", "Paolo Bonzini"
Cc: "Alistair Popple", "Anup Patel", "Ben Gardon", "Borislav Petkov",
 "Catalin Marinas", "Chao Peng", "Christophe Leroy", "Dave Hansen",
 "Fabiano Rosas", "Gaosheng Cui", "Gavin Shan", "H. Peter Anvin",
 "Ingo Molnar", "James Morse", "Jason A. Donenfeld", "Jason Gunthorpe",
 "Jonathan Corbet", "Marc Zyngier", "Masami Hiramatsu", "Michael Ellerman",
 "Michael Larabel", "Mike Rapoport", "Oliver Upton", "Paul Mackerras",
 "Peter Xu", "Sean Christopherson", "Steven Rostedt", "Suzuki K Poulose",
 "Thomas Gleixner", "Thomas Huth", "Will Deacon", "Zenghui Yu",
 linux-mm@kvack.org
Subject: Re: [PATCH mm-unstable v2 07/10] kvm/powerpc: add kvm_arch_test_clear_young()
X-Mailer: aerc 0.14.0
References: <20230526234435.662652-1-yuzhao@google.com>
 <20230526234435.662652-8-yuzhao@google.com>
In-Reply-To: <20230526234435.662652-8-yuzhao@google.com>

On Sat May 27, 2023 at 9:44 AM AEST, Yu Zhao wrote:
> Implement kvm_arch_test_clear_young() to support the fast path in
> mmu_notifier_ops->test_clear_young().
>
> It focuses on a simple case, i.e., radix MMU sets the accessed bit in
> KVM PTEs and VMs are not nested, where it can rely on RCU and
> pte_xchg() to safely clear the accessed bit without taking
> kvm->mmu_lock. Complex cases fall back to the existing slow path
> where kvm->mmu_lock is then taken.
>
> Signed-off-by: Yu Zhao
> ---
>  arch/powerpc/include/asm/kvm_host.h    |  8 ++++
>  arch/powerpc/include/asm/kvm_ppc.h     |  1 +
>  arch/powerpc/kvm/book3s.c              |  6 +++
>  arch/powerpc/kvm/book3s.h              |  1 +
>  arch/powerpc/kvm/book3s_64_mmu_radix.c | 59 ++++++++++++++++++++++++++
>  arch/powerpc/kvm/book3s_hv.c           |  5 +++
>  6 files changed, 80 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 14ee0dece853..75c260ea8a9e 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -883,4 +883,12 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>  static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
>
> +#define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
> +static inline bool kvm_arch_has_test_clear_young(void)
> +{
> +	return IS_ENABLED(CONFIG_KVM_BOOK3S_HV_POSSIBLE) &&
> +	       cpu_has_feature(CPU_FTR_HVMODE) && cpu_has_feature(CPU_FTR_ARCH_300) &&
> +	       radix_enabled();

This could probably be radix_enabled() && !kvmhv_on_pseries(), although
it's unclear why a nested hypervisor should be excluded... I'd have to
think about it a bit more. Do you have any idea what might go wrong, or
did you just not have the time to consider it? (Not just powerpc nested
but any others.)
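
To be concrete, the check I have in mind is something like this --
an untested sketch only, using the existing kvmhv_on_pseries() helper:

	#define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
	static inline bool kvm_arch_has_test_clear_young(void)
	{
		/* Lockless fast path only on bare-metal radix hosts, for now */
		return IS_ENABLED(CONFIG_KVM_BOOK3S_HV_POSSIBLE) &&
		       cpu_has_feature(CPU_FTR_HVMODE) &&
		       cpu_has_feature(CPU_FTR_ARCH_300) &&
		       radix_enabled() && !kvmhv_on_pseries();
	}

That would keep the fast path for the case we know is safe and fall
back to the slow path when running as a nested HV, until we've
convinced ourselves the lockless walk is safe there too.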
> +}
> +
>  #endif /* __POWERPC_KVM_HOST_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> index 79a9c0bb8bba..ff1af6a7b44f 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -287,6 +287,7 @@ struct kvmppc_ops {
>  	bool (*unmap_gfn_range)(struct kvm *kvm, struct kvm_gfn_range *range);
>  	bool (*age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
>  	bool (*test_age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
> +	bool (*test_clear_young)(struct kvm *kvm, struct kvm_gfn_range *range);
>  	bool (*set_spte_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
>  	void (*free_memslot)(struct kvm_memory_slot *slot);
>  	int (*init_vm)(struct kvm *kvm);
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index 686d8d9eda3e..37bf40b0c4ff 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
> @@ -899,6 +899,12 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  	return kvm->arch.kvm_ops->test_age_gfn(kvm, range);
>  }
>
> +bool kvm_arch_test_clear_young(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	return !kvm->arch.kvm_ops->test_clear_young ||
> +	       kvm->arch.kvm_ops->test_clear_young(kvm, range);
> +}
> +
>  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
>  	return kvm->arch.kvm_ops->set_spte_gfn(kvm, range);
> diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h
> index 58391b4b32ed..fa2659e21ccc 100644
> --- a/arch/powerpc/kvm/book3s.h
> +++ b/arch/powerpc/kvm/book3s.h
> @@ -12,6 +12,7 @@ extern void kvmppc_core_flush_memslot_hv(struct kvm *kvm,
>  extern bool kvm_unmap_gfn_range_hv(struct kvm *kvm, struct kvm_gfn_range *range);
>  extern bool kvm_age_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
>  extern bool kvm_test_age_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> +extern bool kvm_test_clear_young_hv(struct kvm *kvm, struct kvm_gfn_range *range);
>  extern bool kvm_set_spte_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
>
>  extern int kvmppc_mmu_init_pr(struct kvm_vcpu *vcpu);
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index 3b65b3b11041..0a392e9a100a 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -1088,6 +1088,65 @@ bool kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  	return ref;
>  }
>
> +bool kvm_test_clear_young_hv(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	bool err;
> +	gfn_t gfn = range->start;
> +
> +	rcu_read_lock();
> +
> +	err = !kvm_is_radix(kvm);
> +	if (err)
> +		goto unlock;
> +
> +	/*
> +	 * Case 1:  This function                kvmppc_switch_mmu_to_hpt()
> +	 *
> +	 *          rcu_read_lock()
> +	 *          Test kvm_is_radix()          kvm->arch.radix = 0
> +	 *          Use kvm->arch.pgtable        synchronize_rcu()
> +	 *          rcu_read_unlock()
> +	 *                                       kvmppc_free_radix()
> +	 *
> +	 *
> +	 * Case 2:  This function                kvmppc_switch_mmu_to_radix()
> +	 *
> +	 *                                       kvmppc_init_vm_radix()
> +	 *                                       smp_wmb()
> +	 *          Test kvm_is_radix()          kvm->arch.radix = 1
> +	 *          smp_rmb()
> +	 *          Use kvm->arch.pgtable
> +	 */
> +	smp_rmb();

The comment could stand to expand slightly, in words, on what it is
solving.

If you use synchronize_rcu() on both sides, you wouldn't need the
barrier, right?
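
Something like this, as an untested sketch of what I mean for the
radix direction (the hpt direction already has the synchronize_rcu()):

	int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
	{
		...
		kvmppc_rmap_reset(kvm);
		/*
		 * Matches rcu_read_lock() in kvm_test_clear_young_hv():
		 * readers must not be able to observe kvm->arch.radix == 1
		 * before the radix table is fully initialised.
		 */
		synchronize_rcu();
		spin_lock(&kvm->mmu_lock);
		kvm->arch.radix = 1;
		spin_unlock(&kvm->mmu_lock);
		...
	}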
> +	while (gfn < range->end) {
> +		pte_t *ptep;
> +		pte_t old, new;
> +		unsigned int shift;
> +
> +		ptep = find_kvm_secondary_pte_unlocked(kvm, gfn * PAGE_SIZE, &shift);
> +		if (!ptep)
> +			goto next;
> +
> +		VM_WARN_ON_ONCE(!page_count(virt_to_page(ptep)));

Not really appropriate at the KVM level. mm enforces this kind of thing
(with notifiers).

> +
> +		old = READ_ONCE(*ptep);
> +		if (!pte_present(old) || !pte_young(old))
> +			goto next;
> +
> +		new = pte_mkold(old);
> +
> +		if (kvm_should_clear_young(range, gfn))
> +			pte_xchg(ptep, old, new);

*Probably* will work. I can't think of a reason why not at the moment,
anyway :)

Thanks,
Nick

> +next:
> +		gfn += shift ? BIT(shift - PAGE_SHIFT) : 1;
> +	}
> +unlock:
> +	rcu_read_unlock();
> +
> +	return err;
> +}
> +
>  /* Returns the number of PAGE_SIZE pages that are dirty */
>  static int kvm_radix_test_clear_dirty(struct kvm *kvm,
>  			struct kvm_memory_slot *memslot, int pagenum)
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 130bafdb1430..20a81ec9fde8 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -5262,6 +5262,8 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
>  	spin_lock(&kvm->mmu_lock);
>  	kvm->arch.radix = 0;
>  	spin_unlock(&kvm->mmu_lock);
> +	/* see the comments in kvm_test_clear_young_hv() */

I'm guilty of such comments at times, but it wouldn't hurt to say

	/* Finish concurrent kvm_test_clear_young_hv access to page tables */

Then you know where to look for more info, and you have a vague idea of
what it's for.

> +	synchronize_rcu();
>  	kvmppc_free_radix(kvm);
>
>  	lpcr = LPCR_VPM1;
> @@ -5286,6 +5288,8 @@ int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
>  	if (err)
>  		return err;
>  	kvmppc_rmap_reset(kvm);
> +	/* see the comments in kvm_test_clear_young_hv() */
> +	smp_wmb();
>  	/* Mutual exclusion with kvm_unmap_gfn_range etc. */
>  	spin_lock(&kvm->mmu_lock);
>  	kvm->arch.radix = 1;
> @@ -6185,6 +6189,7 @@ static struct kvmppc_ops kvm_ops_hv = {
>  	.unmap_gfn_range = kvm_unmap_gfn_range_hv,
>  	.age_gfn = kvm_age_gfn_hv,
>  	.test_age_gfn = kvm_test_age_gfn_hv,
> +	.test_clear_young = kvm_test_clear_young_hv,
>  	.set_spte_gfn = kvm_set_spte_gfn_hv,
>  	.free_memslot = kvmppc_core_free_memslot_hv,
>  	.init_vm = kvmppc_core_init_vm_hv,

Thanks for looking at the powerpc conversion!

Thanks,
Nick