From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andy Lutomirski
To: Nicholas Piggin
Cc: Anton Blanchard, Arnd Bergmann, linux-arch, LKML, Linux-MM,
 linuxppc-dev, Mathieu Desnoyers, X86 ML, Will Deacon, Catalin Marinas,
 Rik van Riel, Dave Hansen, Andy Lutomirski
Subject: [MOCKUP] x86/mm: Lightweight lazy mm refcounting
Date: Wed, 2 Dec 2020 21:25:51 -0800
Message-Id: <7c4bcc0a464ca60be1e0aeba805a192be0ee81e5.1606972194.git.luto@kernel.org>
X-Mailer: git-send-email 2.28.0

For context, this is part of a series here:

https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/mm&id=7c4bcc0a464ca60be1e0aeba805a192be0ee81e5

This code compiles, but I haven't even tried to boot it.  The earlier
part of the series isn't terribly interesting -- it's a handful of
cleanups that remove all reads of ->active_mm from arch/x86.  I've been
meaning to do that for a while, and now I did it.

But, with that done, I think we can move to a totally different lazy mm
refcounting model.  So this patch is a mockup of what this could look
like.

The algorithm involves no atomics at all in the context switch path
except for a single atomic_long_xchg() of a percpu variable when going
from lazy mode to nonlazy mode.  Even that could be optimized -- I
suspect it could be replaced with non-atomic code if mm_users > 0.
Instead, on mm exit, there's a single loop over all CPUs on which that
mm could be lazily loaded that atomic_long_cmpxchg_relaxed()'s a remote
percpu variable to tell the CPU to kindly mmdrop() the mm when it
reschedules.  All cpus that don't have the mm lazily loaded are ignored
because they can't have lazy references, and no cpus can gain lazy
references after mm_users hits zero.

(I need to verify that all the relevant barriers are in place.  I
suspect that they are on x86 but that I'm short an smp_mb() on arches
for which _relaxed atomics are genuinely relaxed.)

Here's how I think it fits with various arches:

x86: On bare metal (i.e. paravirt flush unavailable), the loop won't do
much.  The existing TLB shootdown when user tables are freed will empty
mm_cpumask of everything but the calling CPU.  So x86 ends up pretty
close to as good as we can get short of reworking mm_cpumask() itself.

arm64: with s/for_each_cpu/for_each_online_cpu/, I think it will give
good performance.  The mmgrab()/mmdrop() overhead goes away, and, on
smallish systems, the cost of the loop should be low.

power: same as ARM, except that the loop may be rather larger since the
systems are bigger.  But I imagine it's still faster than Nick's
approach -- a cmpxchg to a remote cacheline should still be faster than
an IPI shootdown.  (Nick, don't benchmark this patch -- at least the
mm_users optimization mentioned above should be done, but also the
mmgrab() and mmdrop() aren't actually removed.)

Other arches: I don't know.  Further research is required.

What do you all think?

As mentioned, there are several things blatantly wrong with this patch:

The coding style is not up to kernel standards.  I have prototypes in
the wrong places and other hacks.

mms are likely to be freed with IRQs off.  I think this is safe, but
it's suboptimal.

This whole thing is in arch/x86.  The core algorithm ought to move
outside arch/, but doing so without making a mess might take some
thought.
It doesn't help that different architectures have very different ideas
of what mm_cpumask() does.

Finally, this patch has no benefit by itself.  I didn't remove the
active_mm refcounting, so the big benefit of removing mmgrab() and
mmdrop() calls on transitions to and from lazy mode isn't there.  There
is no point at all in benchmarking this patch as is.

That being said, getting rid of the active_mm refcounting shouldn't be
so hard, since x86 (in this tree) no longer uses active_mm at all.

I should contemplate whether the calling CPU is special in
arch_fixup_lazy_mm_refs().  On a very very quick think, it's not, but
it needs more thought.

Signed-off-by: Andy Lutomirski
---
 arch/x86/include/asm/tlbflush.h | 20 ++++++++
 arch/x86/mm/tlb.c               | 81 +++++++++++++++++++++++++++++++--
 kernel/fork.c                   |  5 ++
 3 files changed, 101 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 8c87a2e0b660..efcd4f125f2c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -124,6 +124,26 @@ struct tlb_state {
 	 */
 	unsigned short user_pcid_flush_mask;
 
+	/*
+	 * Lazy mm tracking.
+	 *
+	 * - If this is NULL, it means that any mm_struct referenced by
+	 *   this CPU is kept alive by a real reference count.
+	 *
+	 * - If this is nonzero but the low bit is clear, it points to
+	 *   an mm_struct that must not be freed out from under this
+	 *   CPU.
+	 *
+	 * - If the low bit is set, it still points to an mm_struct,
+	 *   but some other CPU has mmgrab()'d it on our behalf, and we
+	 *   must mmdrop() it when we're done with it.
+	 *
+	 * See lazy_mm_grab() and related functions for the precise
+	 * access rules.
+	 */
+	atomic_long_t lazy_mm;
+
 	/*
 	 * Access to this CR4 shadow and to H/W CR4 is protected by
 	 * disabling interrupts when modifying either one.
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e27300fc865b..00f5bace534b 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -420,6 +421,64 @@ void cr4_update_pce(void *ignored)
 static inline void cr4_update_pce_mm(struct mm_struct *mm) { }
 #endif
 
+static void lazy_mm_grab(struct mm_struct *mm)
+{
+	atomic_long_t *lazy_mm = this_cpu_ptr(&cpu_tlbstate.lazy_mm);
+
+	WARN_ON_ONCE(atomic_long_read(lazy_mm) != 0);
+	atomic_long_set(lazy_mm, (unsigned long)mm);
+}
+
+static void lazy_mm_drop(void)
+{
+	atomic_long_t *lazy_mm = this_cpu_ptr(&cpu_tlbstate.lazy_mm);
+
+	unsigned long prev = atomic_long_xchg(lazy_mm, 0);
+	if (prev & 1)
+		mmdrop((struct mm_struct *)(prev & ~1UL));
+}
+
+void arch_fixup_lazy_mm_refs(struct mm_struct *mm)
+{
+	int cpu;
+
+	/*
+	 * mm_users is zero, so no new lazy refs will be taken.
+	 */
+	WARN_ON_ONCE(atomic_read(&mm->mm_users) != 0);
+
+	for_each_cpu(cpu, mm_cpumask(mm)) {
+		atomic_long_t *lazy_mm = per_cpu_ptr(&cpu_tlbstate.lazy_mm, cpu);
+		unsigned long old;
+
+		// Hmm, is this check actually useful?
+		if (atomic_long_read(lazy_mm) != (unsigned long)mm)
+			continue;
+
+		// XXX: we could optimize this by adding a bunch to
+		// mm_count at the beginning and subtracting the unused refs
+		// back off at the end.
+		mmgrab(mm);
+
+		// XXX: is relaxed okay here?  We need to be sure that the
+		// remote CPU has observed mm_users == 0.  This is just
+		// x86 for now, but we might want to move it into a library
+		// and use it elsewhere.
+		old = atomic_long_cmpxchg_relaxed(lazy_mm, (unsigned long)mm,
+						  (unsigned long)mm | 1);
+		if (old == (unsigned long)mm) {
+			/* The remote CPU now owns the reference we grabbed. */
+		} else {
+			/*
+			 * We raced!  The remote CPU switched mms and no longer
+			 * needs its reference.  We didn't transfer ownership
+			 * of the reference, so drop it.
+			 */
+			mmdrop(mm);
+		}
+	}
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			struct task_struct *tsk)
 {
@@ -587,16 +646,24 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		cr4_update_pce_mm(next);
 		switch_ldt(real_prev, next);
 	}
+
+	// XXX: this can end up in __mmdrop().  Is this okay with IRQs off?
+	// It might be nicer to defer this to a later stage in the scheduler
+	// with IRQs on.
+	if (was_lazy)
+		lazy_mm_drop();
 }
 
 /*
- * Please ignore the name of this function.  It should be called
- * switch_to_kernel_thread().
+ * Please don't think too hard about the name of this function.  It
+ * should be called something like switch_to_kernel_mm().
  *
  * enter_lazy_tlb() is a hint from the scheduler that we are entering a
- * kernel thread or other context without an mm.  Acceptable implementations
- * include doing nothing whatsoever, switching to init_mm, or various clever
- * lazy tricks to try to minimize TLB flushes.
+ * kernel thread or other context without an mm.  Acceptable
+ * implementations include switching to init_mm, or various clever lazy
+ * tricks to try to minimize TLB flushes.  We are, however, required to
+ * either stop referencing the previous mm or to take some action to
+ * keep it from being freed out from under us.
  *
  * The scheduler reserves the right to call enter_lazy_tlb() several times
  * in a row.
 * It will notify us that we're going back to a real mm by
@@ -607,7 +674,11 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
+	if (this_cpu_read(cpu_tlbstate.is_lazy))
+		return;	/* nothing to do */
+
 	this_cpu_write(cpu_tlbstate.is_lazy, true);
+	lazy_mm_grab(mm);
 }
 
 /*
diff --git a/kernel/fork.c b/kernel/fork.c
index da8d360fb032..4d68162c1d02 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1066,6 +1066,9 @@ struct mm_struct *mm_alloc(void)
 	return mm_init(mm, current, current_user_ns());
 }
 
+// XXX: move to a header
+extern void arch_fixup_lazy_mm_refs(struct mm_struct *mm);
+
 static inline void __mmput(struct mm_struct *mm)
 {
 	VM_BUG_ON(atomic_read(&mm->mm_users));
@@ -1084,6 +1087,8 @@ static inline void __mmput(struct mm_struct *mm)
 	}
 	if (mm->binfmt)
 		module_put(mm->binfmt->module);
+
+	arch_fixup_lazy_mm_refs(mm);
 	mmdrop(mm);
 }
 
-- 
2.28.0