Date: Fri, 3 Feb 2023 17:02:08 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: torvalds@linux-foundation.org, corbet@lwn.net, will@kernel.org,
	boqun.feng@gmail.com, catalin.marinas@arm.com, dennis@kernel.org,
	tj@kernel.org, cl@linux.com, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com, svens@linux.ibm.com,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	joro@8bytes.org, suravee.suthikulpanit@amd.com, robin.murphy@arm.com,
	dwmw2@infradead.org, baolu.lu@linux.intel.com, Arnd Bergmann,
	Herbert Xu,
	davem@davemloft.net, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, Andrew Morton, vbabka@suse.cz,
	roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-s390@vger.kernel.org, iommu@lists.linux.dev,
	linux-arch@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: Re: [PATCH v2 05/10] percpu: Wire up cmpxchg128
References: <20230202145030.223740842@infradead.org>
 <20230202152655.494373332@infradead.org>
In-Reply-To: <20230202152655.494373332@infradead.org>

On Thu, Feb 02, 2023 at 03:50:35PM +0100, Peter Zijlstra wrote:
> In order to replace cmpxchg_double() with the newly minted
> cmpxchg128() family of functions, wire it up in this_cpu_cmpxchg().
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  arch/arm64/include/asm/percpu.h |   21 +++++++++++++++
>  arch/s390/include/asm/percpu.h  |   17 ++++++++++++
>  arch/x86/include/asm/percpu.h   |   56 ++++++++++++++++++++++++++++++++++++++++
>  include/asm-generic/percpu.h    |    8 +++++
>  include/linux/percpu-defs.h     |   20 ++++++++++++--
>  5 files changed, 120 insertions(+), 2 deletions(-)

For arm64:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

>
> --- a/arch/arm64/include/asm/percpu.h
> +++ b/arch/arm64/include/asm/percpu.h
> @@ -140,6 +140,10 @@ PERCPU_RET_OP(add, add, ldadd)
>   * re-enabling preemption for preemptible kernels, but doing that in a way
>   * which builds inside a module would mean messing directly with the preempt
>   * count. If you do this, peterz and tglx will hunt you down.
> + *
> + * Not to mention it'll break the actual preemption model for missing a
> + * preemption point when TIF_NEED_RESCHED gets set while preemption is
> + * disabled.
>   */
>  #define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \
>  ({ \
> @@ -240,6 +244,23 @@ PERCPU_RET_OP(add, add, ldadd)
>  #define this_cpu_cmpxchg_8(pcp, o, n) \
>  	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
>
> +#define this_cpu_cmpxchg_16(pcp, o, n) \
> +({ \
> +	typedef typeof(pcp) pcp_op_T__; \
> +	union { \
> +		pcp_op_T__ pot; \
> +		u128 val; \
> +	} old__, new__, ret__; \
> +	pcp_op_T__ *ptr__; \
> +	old__.pot = o; \
> +	new__.pot = n; \
> +	preempt_disable_notrace(); \
> +	ptr__ = raw_cpu_ptr(&(pcp)); \
> +	ret__.val = cmpxchg128_local((void *)ptr__, old__.val, new__.val); \
> +	preempt_enable_notrace(); \
> +	ret__.pot; \
> +})
> +
>  #ifdef __KVM_NVHE_HYPERVISOR__
>  extern unsigned long __hyp_per_cpu_offset(unsigned int cpu);
>  #define __per_cpu_offset
> --- a/arch/s390/include/asm/percpu.h
> +++ b/arch/s390/include/asm/percpu.h
> @@ -148,6 +148,23 @@
>  #define this_cpu_cmpxchg_4(pcp, oval, nval) arch_this_cpu_cmpxchg(pcp, oval, nval)
>  #define this_cpu_cmpxchg_8(pcp, oval, nval) arch_this_cpu_cmpxchg(pcp, oval, nval)
>
> +#define this_cpu_cmpxchg_16(pcp, oval, nval) \
> +({ \
> +	typedef typeof(pcp) pcp_op_T__; \
> +	union { \
> +		pcp_op_T__ pot; \
> +		u128 val; \
> +	} old__, new__, ret__; \
> +	pcp_op_T__ *ptr__; \
> +	old__.pot = oval; \
> +	new__.pot = nval; \
> +	preempt_disable_notrace(); \
> +	ptr__ = raw_cpu_ptr(&(pcp)); \
> +	ret__.val = cmpxchg128((void *)ptr__, old__.val, new__.val); \
> +	preempt_enable_notrace(); \
> +	ret__.pot; \
> +})
> +
>  #define arch_this_cpu_xchg(pcp, nval) \
>  ({ \
>  	typeof(pcp) *ptr__; \
> --- a/arch/x86/include/asm/percpu.h
> +++ b/arch/x86/include/asm/percpu.h
> @@ -210,6 +210,62 @@ do { \
>  	(typeof(_var))(unsigned long) pco_old__; \
>  })
>
> +#if defined(CONFIG_X86_32) && defined(CONFIG_X86_CMPXCHG64)
> +#define percpu_cmpxchg64_op(size, qual, _var, _oval, _nval) \
> +({ \
> +	union { \
> +		typeof(_var) var; \
> +		struct { \
> +			u32 low, high; \
> +		}; \
> +	} old__, new__; \
> + \
> +	old__.var = _oval; \
> +	new__.var = _nval; \
> + \
> +	asm qual ("cmpxchg8b " __percpu_arg([var]) \
> +		  : [var] "+m" (_var), \
> +		    "+a" (old__.low), \
> +		    "+d" (old__.high) \
> +		  : "b" (new__.low), \
> +		    "c" (new__.high) \
> +		  : "memory"); \
> + \
> +	old__.var; \
> +})
> +
> +#define raw_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg64_op(8, , pcp, oval, nval)
> +#define this_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg64_op(8, volatile, pcp, oval, nval)
> +#endif
> +
> +#ifdef CONFIG_X86_64
> +#define percpu_cmpxchg128_op(size, qual, _var, _oval, _nval) \
> +({ \
> +	union { \
> +		typeof(_var) var; \
> +		struct { \
> +			u64 low, high; \
> +		}; \
> +	} old__, new__; \
> + \
> +	old__.var = _oval; \
> +	new__.var = _nval; \
> + \
> +	asm qual ("cmpxchg16b " __percpu_arg([var]) \
> +		  : [var] "+m" (_var), \
> +		    "+a" (old__.low), \
> +		    "+d" (old__.high) \
> +		  : "b" (new__.low), \
> +		    "c" (new__.high) \
> +		  : "memory"); \
> + \
> +	old__.var; \
> +})
> +
> +#define raw_cpu_cmpxchg_16(pcp, oval, nval) percpu_cmpxchg128_op(16, , pcp, oval, nval)
> +#define this_cpu_cmpxchg_16(pcp, oval, nval) percpu_cmpxchg128_op(16, volatile, pcp, oval, nval)
> +#endif
> +
>  /*
>   * this_cpu_read() makes gcc load the percpu variable every time it is
>   * accessed while this_cpu_read_stable() allows the value to be cached.
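
(Aside for readers unfamiliar with the pattern: the union punning in the
this_cpu_cmpxchg_16() implementations above is how an arbitrary 16-byte
aggregate gets handed to a single 128-bit compare-and-swap. A rough
userspace analogue, hypothetical and not part of the patch; all names here
are invented. Build with "gcc -mcx16" on x86-64, where the toolchain will
typically inline cmpxchg16b, or fall back to a libatomic call:)

    #include <stdio.h>
    #include <inttypes.h>

    typedef unsigned __int128 u128;

    struct pair {			/* any 16-byte aggregate works */
    	uint64_t lo, hi;
    };

    /* cmpxchg16b requires the target to be 16-byte aligned */
    static struct pair slot __attribute__((aligned(16))) = { 1, 2 };

    int main(void)
    {
    	/* pun the aggregate to a single 128-bit scalar, as above */
    	union { struct pair pot; u128 val; } old__, new__, ret__;

    	old__.pot = (struct pair){ 1, 2 };
    	new__.pot = (struct pair){ 3, 4 };

    	/* both halves are compared and swapped as one atomic unit */
    	ret__.val = __sync_val_compare_and_swap((u128 *)&slot,
    						old__.val, new__.val);

    	printf("was {%" PRIu64 ",%" PRIu64 "}, slot now {%" PRIu64 ",%" PRIu64 "}\n",
    	       ret__.pot.lo, ret__.pot.hi, slot.lo, slot.hi);
    	return 0;
    }
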
> --- a/include/asm-generic/percpu.h
> +++ b/include/asm-generic/percpu.h
> @@ -298,6 +298,10 @@ do { \
>  #define raw_cpu_cmpxchg_8(pcp, oval, nval) \
>  	raw_cpu_generic_cmpxchg(pcp, oval, nval)
>  #endif
> +#ifndef raw_cpu_cmpxchg_16
> +#define raw_cpu_cmpxchg_16(pcp, oval, nval) \
> +	raw_cpu_generic_cmpxchg(pcp, oval, nval)
> +#endif
>
>  #ifndef raw_cpu_cmpxchg_double_1
>  #define raw_cpu_cmpxchg_double_1(pcp1, pcp2, oval1, oval2, nval1, nval2) \
> @@ -423,6 +427,10 @@ do { \
>  #define this_cpu_cmpxchg_8(pcp, oval, nval) \
>  	this_cpu_generic_cmpxchg(pcp, oval, nval)
>  #endif
> +#ifndef this_cpu_cmpxchg_16
> +#define this_cpu_cmpxchg_16(pcp, oval, nval) \
> +	this_cpu_generic_cmpxchg(pcp, oval, nval)
> +#endif
>
>  #ifndef this_cpu_cmpxchg_double_1
>  #define this_cpu_cmpxchg_double_1(pcp1, pcp2, oval1, oval2, nval1, nval2) \
> --- a/include/linux/percpu-defs.h
> +++ b/include/linux/percpu-defs.h
> @@ -343,6 +343,22 @@ static inline void __this_cpu_preempt_ch
>  	pscr2_ret__; \
>  })
>
> +#define __pcpu_size16_call_return2(stem, variable, ...) \
> +({ \
> +	typeof(variable) pscr2_ret__; \
> +	__verify_pcpu_ptr(&(variable)); \
> +	switch(sizeof(variable)) { \
> +	case 1: pscr2_ret__ = stem##1(variable, __VA_ARGS__); break; \
> +	case 2: pscr2_ret__ = stem##2(variable, __VA_ARGS__); break; \
> +	case 4: pscr2_ret__ = stem##4(variable, __VA_ARGS__); break; \
> +	case 8: pscr2_ret__ = stem##8(variable, __VA_ARGS__); break; \
> +	case 16: pscr2_ret__ = stem##16(variable, __VA_ARGS__); break; \
> +	default: \
> +		__bad_size_call_parameter(); break; \
> +	} \
> +	pscr2_ret__; \
> +})
> +
>  /*
>   * Special handling for cmpxchg_double. cmpxchg_double is passed two
>   * percpu variables. The first has to be aligned to a double word
> @@ -425,7 +441,7 @@ do { \
>  #define raw_cpu_add_return(pcp, val)	__pcpu_size_call_return2(raw_cpu_add_return_, pcp, val)
>  #define raw_cpu_xchg(pcp, nval)		__pcpu_size_call_return2(raw_cpu_xchg_, pcp, nval)
>  #define raw_cpu_cmpxchg(pcp, oval, nval) \
> -	__pcpu_size_call_return2(raw_cpu_cmpxchg_, pcp, oval, nval)
> +	__pcpu_size16_call_return2(raw_cpu_cmpxchg_, pcp, oval, nval)
>  #define raw_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \
>  	__pcpu_double_call_return_bool(raw_cpu_cmpxchg_double_, pcp1, pcp2, oval1, oval2, nval1, nval2)
>
> @@ -512,7 +528,7 @@ do { \
>  #define this_cpu_add_return(pcp, val)	__pcpu_size_call_return2(this_cpu_add_return_, pcp, val)
>  #define this_cpu_xchg(pcp, nval)	__pcpu_size_call_return2(this_cpu_xchg_, pcp, nval)
>  #define this_cpu_cmpxchg(pcp, oval, nval) \
> -	__pcpu_size_call_return2(this_cpu_cmpxchg_, pcp, oval, nval)
> +	__pcpu_size16_call_return2(this_cpu_cmpxchg_, pcp, oval, nval)
>  #define this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \
>  	__pcpu_double_call_return_bool(this_cpu_cmpxchg_double_, pcp1, pcp2, oval1, oval2, nval1, nval2)
>
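
(For anyone following along, the net effect of the percpu-defs.h hunk is
that a 16-byte per-cpu cmpxchg flows through the same generic entry point
as the 1/2/4/8-byte cases. A rough kernel-side fragment showing the idea;
the variable and function names below are invented for illustration and
are not from the patch:)

    #include <linux/percpu.h>	/* DEFINE_PER_CPU, this_cpu_cmpxchg */

    /* hypothetical 16-byte per-cpu variable; u128 is naturally 16-byte aligned */
    static DEFINE_PER_CPU(u128, demo_pcp);

    static u128 demo_update(void)
    {
    	/*
    	 * sizeof(demo_pcp) == 16, so __pcpu_size16_call_return2() now
    	 * dispatches to this_cpu_cmpxchg_16(): cmpxchg16b on x86-64,
    	 * cmpxchg128() on s390, cmpxchg128_local() inside a
    	 * preempt-disabled section on arm64, or the generic
    	 * preempt-protected fallback elsewhere.
    	 */
    	return this_cpu_cmpxchg(demo_pcp, (u128)0, (u128)42);
    }
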