Message-ID: <95b614577f5475a919c878d6906d721004c83584.camel@surriel.com>
Subject: Re: [PATCH 09/12] x86/mm: enable broadcast TLB invalidation for
 multi-threaded processes
From: Rik van Riel
To: Nadav Amit
Cc: the arch/x86 maintainers, Linux Kernel Mailing List, kernel-team@meta.com,
 Dave Hansen, luto@kernel.org, peterz@infradead.org, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, "H. Peter Anvin", Andrew Morton,
 zhengqi.arch@bytedance.com, "open list:MEMORY MANAGEMENT"
Date: Tue, 31 Dec 2024 23:42:17 -0500
References: <20241230175550.4046587-1-riel@surriel.com>
 <20241230175550.4046587-10-riel@surriel.com>

Peter Anvin" , Andrew Morton , zhengqi.arch@bytedance.com, "open list:MEMORY MANAGEMENT" Date: Tue, 31 Dec 2024 23:42:17 -0500 In-Reply-To: References: <20241230175550.4046587-1-riel@surriel.com> <20241230175550.4046587-10-riel@surriel.com> Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable User-Agent: Evolution 3.54.1 (3.54.1-1.fc41) MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Stat-Signature: nntew3bhtms3c7aw75h814goebz54cny X-Rspamd-Queue-Id: 1AFA740006 X-Rspam-User: X-HE-Tag: 1735706629-870361 X-HE-Meta: U2FsdGVkX1/8f/eXIxtf1Una2kChtyMQYdvMkZONIROpgNvMK6UumO0ajLuaNlXxFNPqBgs8LnbBGKcbSCJBQ0L3iY9hMScyfd5pRLp8MG8Qv0WjymOFWgJpR5/xyX18rBSWFSDl61+w2JBhQDvbXEO9zBXHBQ/34aiPQkEhpGoXSRvskgFRsIWsG+M42rAA2veRnpWwMRZBKJcxfIVEUId8kU/isnKGQwVOlhYaRUzBg9lMRZ+owUc4hVkEdj8Pgd+bFaquiKD17iFcaSCwiNXst8m7bqA6vlfyYRVu4ChI/qu8paalV1OLHmpuAVg9ljrUi7KVRsym1nhC7oNcKoMhJ0Zdsv8q5qCSvZLkdhxYGq9c+B/vJUiM1O+ULvY+VCSFOcVAotttCYHmeOzxwaAQ3xwVpoRMiXBgDdx74K1jPn8D2rl/1XgrTAYOtEdgWGHAb+sw9/te44sZz8KdEVU9Lvz2hmhNIRTm9Vu98HkDHmaov5qdWvQYvbXp/ApskCOE0VgATvTXqYM+sA6gp+8ncuJpPfwBcoA5h4SL6tVIKR8rlpBohpN4WomzJ5LXhzI76B2axEc1SSTNpzEYP3DIWkspaFWXn6GSxRgQmJrLZxoqjwTAYmFp0m3HIg8ZmTg15TV7NEHTuODlEammuuC3rLGv2JCdDvOsOqFjaClifdvFbgro/coNKsibX+zKB/JnSMxXltorygaePuQcMn2i/MQnluLi9BOwbGGmt8Fefga7TkoZV3o1OZ44J/z8TdBqH5vfSxdMFtphRBtMBEEJomDZfhZrG0bNbqbCghtY9R7cY3Bs5ih0yc4LsekId1bDsZJprcRQKpIZTO8ZKpw3jycBU+ObNEa/jVSlhaiqnw2cnyB1CA7mnvZsFfHBymyg8pwMO+8jNIg/Xz2N3VeKLnirxJr5mf2iqX3L653UQZbBMlkpCO5oht8FLSo3Zdi2FfL+ph3+pTttIBN ZO0OonaT 897m4YY32ffgLbLgmcVGJYchkLoe+Vu4QcxT7INqNTds9zzhjN8oEePkrZPIT2LhchNA/sp8YDx9WO+h0b2skX+ZlhSRAIccMgoPBkjB5qMlh/QWaqqgsTXJZm3XvxEx2OSuWV79Wm3AvHTpLgwn4/2HXCZbH4JL6GI/xCczQa30I0JUa4W0pqKwYmmg198/KkU8PvrzkS5fuBqxg7YrAT1H3HiyxTlpgaF2F2XcxdoUMc6stgcnksP6o80rTVtmqbi16RlM/lVKl2vS4z8EoklAo2hIlihfVsiwgZqncWIqPzaVBgRlZ39vyNrv5CKINFnLAFTDvvtwy8MRX0PJpN+W3Kg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Mon, 2024-12-30 at 21:24 +0200, Nadav Amit wrote: >=20 > > --- a/arch/x86/include/asm/tlbflush.h > > +++ b/arch/x86/include/asm/tlbflush.h > > @@ -65,6 +65,23 @@ static inline void cr4_clear_bits(unsigned long > > mask) > > =C2=A0*/ > > #define TLB_NR_DYN_ASIDS 6 > >=20 > > +#ifdef CONFIG_CPU_SUP_AMD > > +#define is_dyn_asid(asid) (asid) < TLB_NR_DYN_ASIDS > >=20 >=20 > I don=E2=80=99t see a reason why those should be #define instead of inlin= e > functions. > Arguably, those are better due to type-checking, etc. For instance > is_dyn_asid() > is missing brackets to be safe. >=20 I've turned these all into inline functions for the next version. > >=20 > > @@ -225,6 +227,18 @@ static void choose_new_asid(struct mm_struct > > *next, u64 next_tlb_gen, > > return; > > } > >=20 > > + /* > > + * TLB consistency for this ASID is maintained with > > INVLPGB; > > + * TLB flushes happen even while the process isn't > > running. > > + */ > > +#ifdef CONFIG_CPU_SUP_AMD > I=E2=80=99m pretty sure IS_ENABLED() can be used here. >=20 > > + if (static_cpu_has(X86_FEATURE_INVLPGB) && > > mm_broadcast_asid(next)) { > > + *new_asid =3D mm_broadcast_asid(next); >=20 > Isn=E2=80=99t there a risk of a race changing broadcast_asid between the = two > reads? 
> 
> > @@ -225,6 +227,18 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
> > 		return;
> > 	}
> > 
> > +	/*
> > +	 * TLB consistency for this ASID is maintained with INVLPGB;
> > +	 * TLB flushes happen even while the process isn't running.
> > +	 */
> > +#ifdef CONFIG_CPU_SUP_AMD
> 
> I'm pretty sure IS_ENABLED() can be used here.
> 
> > +	if (static_cpu_has(X86_FEATURE_INVLPGB) && mm_broadcast_asid(next)) {
> > +		*new_asid = mm_broadcast_asid(next);
> 
> Isn't there a risk of a race changing broadcast_asid between the two
> reads?
> 
> Maybe use READ_ONCE() also since the value is modified
> asynchronously?
> 

In the current code, the broadcast_asid only ever changes from zero to
a non-zero value, which continues to be used for the lifetime of the
process. It will not change from one non-zero to another non-zero value.

I have cleaned up the code a bit by having just one single read, though.

> 
> > @@ -251,6 +265,245 @@ static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
> > 	*need_flush = true;
> > }
> > 
> > +#ifdef CONFIG_CPU_SUP_AMD
> > +/*
> > + * Logic for AMD INVLPGB support.
> > + */
> > +static DEFINE_RAW_SPINLOCK(broadcast_asid_lock);
> > +static u16 last_broadcast_asid = TLB_NR_DYN_ASIDS;
> > +static DECLARE_BITMAP(broadcast_asid_used, MAX_ASID_AVAILABLE) = { 0 };
> > +static LIST_HEAD(broadcast_asid_list);
> > +static int broadcast_asid_available = MAX_ASID_AVAILABLE - TLB_NR_DYN_ASIDS - 1;
> 
> Presumably some of these data structures are shared, and some are
> accessed frequently together. Wouldn't it make more sense to put them
> inside a struct(s) and make it cacheline aligned?
> 

Maybe? I'm not sure any of these are particularly frequently used,
at least not compared to things like TLB invalidations.

> 
> > +		/* Try claiming this broadcast ASID. */
> > +		if (!test_and_set_bit(asid, broadcast_asid_used)) {
> 
> IIUC, broadcast_asid_used is always protected with
> broadcast_asid_lock. So why test_and_set_bit?

Thanks for spotting that one. I cleaned that up for the next version.

> 
> > +void destroy_context_free_broadcast_asid(struct mm_struct *mm)
> > +{
> > +	if (!mm->context.broadcast_asid)
> 
> mm_broadcast_asid()?

I've intentionally kept this one in the same "shape" as the assignment
a few lines lower. That might be cleaner than reading it one way, and
then writing it another way.

> 
> > +static void use_broadcast_asid(struct mm_struct *mm)
> > +{
> > +	guard(raw_spinlock_irqsave)(&broadcast_asid_lock);
> > +
> > +	/* This process is already using broadcast TLB invalidation. */
> > +	if (mm->context.broadcast_asid)
> > +		return;
> > +
> > +	mm->context.broadcast_asid = get_broadcast_asid();
> 
> This is read without the lock, so do you want WRITE_ONCE() here?
> 
> > +	mm->context.asid_transition = true;
> 
> And what about asid_transition? Presumably also need WRITE_ONCE().
> But more importantly than this theoretical compiler optimization, is
> there some assumed ordering with setting broadcast_asid?

I changed both to WRITE_ONCE.

Thinking about ordering, maybe we need to set asid_transition before
setting broadcast_asid? That way when we see a non-zero broadcast ASID,
we are guaranteed to try finish_asid_transition() from
broadcast_tlb_flush().

Fixed for the next version.
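Roughly like this (only a sketch, not tested; whether plain WRITE_ONCE()
is good enough here, or whether the broadcast_asid store wants to be an
smp_store_release() paired with an smp_load_acquire() on the read side
in mm_broadcast_asid(), still needs a closer look):

	/*
	 * Readers key off a non-zero broadcast_asid, so publish the
	 * transition flag before the ASID itself.
	 */
	WRITE_ONCE(mm->context.asid_transition, true);
	/* Orders the two stores; readers need an acquire (or barrier) to match. */
	smp_store_release(&mm->context.broadcast_asid, get_broadcast_asid());

With that pairing, any flusher that observes the broadcast ASID should
also observe asid_transition and go through finish_asid_transition().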
> 
> > +	if (meets_broadcast_asid_threshold(mm))
> > +		use_broadcast_asid(mm);
> > +}
> 
> I don't think count_tlb_flush() is a name that reflects what this
> function does.
> 

Agreed. I've renamed it to the equally lame consider_broadcast_asid :)

> > +
> > +static void finish_asid_transition(struct flush_tlb_info *info)
> > +{
> > +	struct mm_struct *mm = info->mm;
> > +	int bc_asid = mm_broadcast_asid(mm);
> > +	int cpu;
> > +
> > +	if (!mm->context.asid_transition)
> 
> is_asid_transition()?

I'm not convinced that for a thing we access in so few places, having
a different "shape" for the read than we have for the write would make
the code easier to follow.

I'm open to arguments, though ;)

> 
> > +	if (info->end == TLB_FLUSH_ALL) {
> > +		invlpgb_flush_single_pcid(kern_pcid(asid));
> > +		/* Do any CPUs supporting INVLPGB need PTI? */
> > +		if (static_cpu_has(X86_FEATURE_PTI))
> > +			invlpgb_flush_single_pcid(user_pcid(asid));
> > +	} else do {
> 
> I couldn't find any use of "else do" in the kernel. Might it be
> confusing?

I could replace it with a goto finish_asid_transition. That's what I
had there before. I'm not sure it was better, though :/

> 
> > +		/*
> > +		 * Calculate how many pages can be flushed at once; if the
> > +		 * remainder of the range is less than one page, flush one.
> > +		 */
> > +		nr = min(maxnr, (info->end - addr) >> info->stride_shift);
> > +		nr = max(nr, 1);
> > +
> > +		invlpgb_flush_user_nr(kern_pcid(asid), addr, nr, pmd);
> > +		/* Do any CPUs supporting INVLPGB need PTI? */
> > +		if (static_cpu_has(X86_FEATURE_PTI))
> > +			invlpgb_flush_user_nr(user_pcid(asid), addr, nr, pmd);
> > +		addr += nr << info->stride_shift;
> > +	} while (addr < info->end);
> > +
> > +	finish_asid_transition(info);
> > +
> > +	/* Wait for the INVLPGBs kicked off above to finish. */
> > +	tlbsync();
> > +}
> > 
> > @@ -556,8 +809,9 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
> > 	 */
> > 	if (prev == next) {
> > 		/* Not actually switching mm's */
> > -		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
> > -			   next->context.ctx_id);
> > +		if (is_dyn_asid(prev_asid))
> > +			VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
> > +				   next->context.ctx_id);
> 
> Why not to add the condition into the VM_WARN_ON and avoid the
> nesting?

Done. Thank you.

> 
> > @@ -1026,14 +1311,18 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
> > 				bool freed_tables)
> > {
> > 	struct flush_tlb_info *info;
> > +	unsigned long threshold = tlb_single_page_flush_ceiling;
> > 	u64 new_tlb_gen;
> > 	int cpu;
> > 
> > +	if (static_cpu_has(X86_FEATURE_INVLPGB))
> > +		threshold *= invlpgb_count_max;
> 
> I know it's not really impacting performance, but it is hard for me
> to see such calculations happening unnecessarily every time...
> 

Thanks for pointing out this code. We should only do the multiplication
if this mm is using broadcast TLB invalidation. Fixed for the next
version.

I left the multiplication for now, since that is smaller than adding
code to the debugfs tlb_single_page_flush_ceiling handler to store a
multiplied value in another variable.

-- 
All Rights Reversed.