Date: Fri, 3 Feb 2023 16:52:59 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra
Cc: torvalds@linux-foundation.org, corbet@lwn.net, will@kernel.org,
	boqun.feng@gmail.com, catalin.marinas@arm.com, dennis@kernel.org,
	tj@kernel.org, cl@linux.com, hca@linux.ibm.com, gor@linux.ibm.com,
	agordeev@linux.ibm.com, borntraeger@linux.ibm.com, svens@linux.ibm.com,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	joro@8bytes.org, suravee.suthikulpanit@amd.com, robin.murphy@arm.com,
	dwmw2@infradead.org, baolu.lu@linux.intel.com, Arnd Bergmann,
	Herbert Xu, davem@davemloft.net, penberg@kernel.org,
	rientjes@google.com, iamjoonsoo.kim@lge.com, Andrew Morton,
	vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-s390@vger.kernel.org, iommu@lists.linux.dev,
	linux-arch@vger.kernel.org, linux-crypto@vger.kernel.org
Subject: Re: [PATCH v2 03/10] arch: Introduce arch_{,try_}_cmpxchg128{,_local}()
References: <20230202145030.223740842@infradead.org>
	<20230202152655.373335780@infradead.org>
In-Reply-To: <20230202152655.373335780@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Thu, Feb 02, 2023 at 03:50:33PM
+0100, Peter Zijlstra wrote:
> For all architectures that currently support cmpxchg_double()
> implement the cmpxchg128() family of functions that is basically the
> same but with a saner interface.
>
> Signed-off-by: Peter Zijlstra (Intel)

For arm64:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/atomic_ll_sc.h |   41 +++++++++++++++++++++++++
>  arch/arm64/include/asm/atomic_lse.h   |   31 +++++++++++++++++++
>  arch/arm64/include/asm/cmpxchg.h      |   26 ++++++++++++++++
>  arch/s390/include/asm/cmpxchg.h       |   14 ++++++++
>  arch/x86/include/asm/cmpxchg_32.h     |    3 +
>  arch/x86/include/asm/cmpxchg_64.h     |   55 +++++++++++++++++++++++++++++++++-
>  6 files changed, 168 insertions(+), 2 deletions(-)
>
> --- a/arch/arm64/include/asm/atomic_ll_sc.h
> +++ b/arch/arm64/include/asm/atomic_ll_sc.h
> @@ -326,6 +326,47 @@ __CMPXCHG_DBL(   ,        ,  ,         )
>  __CMPXCHG_DBL(_mb, dmb ish, l, "memory")
>
>  #undef __CMPXCHG_DBL
> +
> +union __u128_halves {
> +	u128 full;
> +	struct {
> +		u64 low, high;
> +	};
> +};
> +
> +#define __CMPXCHG128(name, mb, rel, cl...)				\
> +static __always_inline u128						\
> +__ll_sc__cmpxchg128##name(volatile u128 *ptr, u128 old, u128 new)	\
> +{									\
> +	union __u128_halves r, o = { .full = (old) },			\
> +			       n = { .full = (new) };			\
> +	unsigned int tmp;						\
> +									\
> +	asm volatile("// __cmpxchg128" #name "\n"			\
> +	"	prfm	pstl1strm, %[v]\n"				\
> +	"1:	ldxp	%[rl], %[rh], %[v]\n"				\
> +	"	cmp	%[rl], %[ol]\n"					\
> +	"	ccmp	%[rh], %[oh], 0, eq\n"				\
> +	"	b.ne	2f\n"						\
> +	"	st" #rel "xp	%w[tmp], %[nl], %[nh], %[v]\n"		\
> +	"	cbnz	%w[tmp], 1b\n"					\
> +	"	" #mb "\n"						\
> +	"2:"								\
> +	: [v] "+Q" (*(u128 *)ptr),					\
> +	  [rl] "=&r" (r.low), [rh] "=&r" (r.high),			\
> +	  [tmp] "=&r" (tmp)						\
> +	: [ol] "r" (o.low), [oh] "r" (o.high),				\
> +	  [nl] "r" (n.low), [nh] "r" (n.high)				\
> +	: "cc", ##cl);							\
> +									\
> +	return r.full;							\
> +}
> +
> +__CMPXCHG128(   ,        ,  )
> +__CMPXCHG128(_mb, dmb ish, l, "memory")
> +
> +#undef __CMPXCHG128
> +
>  #undef K
>
>  #endif	/* __ASM_ATOMIC_LL_SC_H */
> --- a/arch/arm64/include/asm/atomic_lse.h
> +++ b/arch/arm64/include/asm/atomic_lse.h
> @@ -324,4 +324,35 @@ __CMPXCHG_DBL(_mb, al, "memory")
>
>  #undef __CMPXCHG_DBL
>
> +#define __CMPXCHG128(name, mb, cl...)					\
> +static __always_inline u128						\
> +__lse__cmpxchg128##name(volatile u128 *ptr, u128 old, u128 new)		\
> +{									\
> +	union __u128_halves r, o = { .full = (old) },			\
> +			       n = { .full = (new) };			\
> +	register unsigned long x0 asm ("x0") = o.low;			\
> +	register unsigned long x1 asm ("x1") = o.high;			\
> +	register unsigned long x2 asm ("x2") = n.low;			\
> +	register unsigned long x3 asm ("x3") = n.high;			\
> +	register unsigned long x4 asm ("x4") = (unsigned long)ptr;	\
> +									\
> +	asm volatile(							\
> +	__LSE_PREAMBLE							\
> +	"	casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"\
> +	: [old1] "+&r" (x0), [old2] "+&r" (x1),				\
> +	  [v] "+Q" (*(u128 *)ptr)					\
> +	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),		\
> +	  [oldval1] "r" (o.low), [oldval2] "r" (o.high)			\
> +	: cl);								\
> +									\
> +	r.low = x0; r.high = x1;					\
> +									\
> +	return r.full;							\
> +}
> +
> +__CMPXCHG128(   ,   )
> +__CMPXCHG128(_mb, al, "memory")
> +
> +#undef __CMPXCHG128
> +
>  #endif	/* __ASM_ATOMIC_LSE_H */
> --- a/arch/arm64/include/asm/cmpxchg.h
> +++ b/arch/arm64/include/asm/cmpxchg.h
> @@ -147,6 +147,19 @@ __CMPXCHG_DBL(_mb)
>
>  #undef __CMPXCHG_DBL
>
> +#define __CMPXCHG128(name)						\
> +static inline u128 __cmpxchg128##name(volatile u128 *ptr,		\
> +				      u128 old, u128 new)		\
> +{									\
> +	return __lse_ll_sc_body(_cmpxchg128##name,			\
> +				ptr, old, new);				\
> +}
> +
> +__CMPXCHG128(   )
> +__CMPXCHG128(_mb)
> +
> +#undef __CMPXCHG128
> +
>  #define __CMPXCHG_GEN(sfx)						\
>  static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
>  						    unsigned long old,	\
> @@ -229,6 +242,19 @@ __CMPXCHG_GEN(_mb)
>  	__ret;								\
>  })
>
> +/* cmpxchg128 */
> +#define system_has_cmpxchg128()		1
> +
> +#define arch_cmpxchg128(ptr, o, n)					\
> +({									\
> +	__cmpxchg128_mb((ptr), (o), (n));				\
> +})
> +
> +#define arch_cmpxchg128_local(ptr, o, n)				\
> +({									\
> +	__cmpxchg128((ptr), (o), (n));					\
> +})
> +
>  #define __CMPWAIT_CASE(w, sfx, sz)					\
>  static inline void __cmpwait_case_##sz(volatile void *ptr,		\
>  				       unsigned long val)		\
> --- a/arch/s390/include/asm/cmpxchg.h
> +++ b/arch/s390/include/asm/cmpxchg.h
> @@ -201,4 +201,18 @@ static __always_inline int __cmpxchg_dou
>  			    (unsigned long)(n1), (unsigned long)(n2));	\
>  })
>
> +#define system_has_cmpxchg128()		1
> +
> +static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 new)
> +{
> +	asm volatile(
> +		"	cdsg	%[old],%[new],%[ptr]\n"
> +		: [old] "+d" (old), [ptr] "+QS" (*ptr)
> +		: [new] "d" (new)
> +		: "memory", "cc");
> +	return old;
> +}
> +
> +#define arch_cmpxchg128		arch_cmpxchg128
> +
>  #endif /* __ASM_CMPXCHG_H */
> --- a/arch/x86/include/asm/cmpxchg_32.h
> +++ b/arch/x86/include/asm/cmpxchg_32.h
> @@ -103,6 +103,7 @@ static inline bool __try_cmpxchg64(volat
>
>  #endif
>
> -#define system_has_cmpxchg_double()	boot_cpu_has(X86_FEATURE_CX8)
> +#define system_has_cmpxchg_double()	boot_cpu_has(X86_FEATURE_CX8)
> +#define system_has_cmpxchg64()		boot_cpu_has(X86_FEATURE_CX8)
>
>  #endif /* _ASM_X86_CMPXCHG_32_H */
> --- a/arch/x86/include/asm/cmpxchg_64.h
> +++ b/arch/x86/include/asm/cmpxchg_64.h
> @@ -20,6 +20,59 @@
>  	arch_try_cmpxchg((ptr), (po), (n));				\
>  })
>
> -#define system_has_cmpxchg_double()	boot_cpu_has(X86_FEATURE_CX16)
> +union __u128_halves {
> +	u128 full;
> +	struct {
> +		u64 low, high;
> +	};
> +};
> +
> +static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 new)
> +{
> +	union __u128_halves o = { .full = old, }, n = { .full = new, };
> +
> +	asm volatile(LOCK_PREFIX "cmpxchg16b %[ptr]"
> +		     : [ptr] "+m" (*ptr),
> +		       "+a" (o.low), "+d" (o.high)
> +		     : "b" (n.low), "c" (n.high)
> +		     : "memory");
> +
> +	return o.full;
> +}
> +
> +static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old, u128 new)
> +{
> +	union __u128_halves o = { .full = old, }, n = { .full = new, };
> +
> +	asm volatile("cmpxchg16b %[ptr]"
> +		     : [ptr] "+m" (*ptr),
> +		       "+a" (o.low), "+d" (o.high)
> +		     : "b" (n.low), "c" (n.high)
> +		     : "memory");
> +
> +	return o.full;
> +}
> +
> +static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *old, u128 new)
> +{
> +	union __u128_halves o = { .full = *old, }, n = { .full = new, };
> +	bool ret;
> +
> +	asm volatile(LOCK_PREFIX "cmpxchg16b %[ptr]"
> +		     CC_SET(e)
> +		     : CC_OUT(e) (ret),
> +		       [ptr] "+m" (*ptr),
> +		       "+a" (o.low), "+d" (o.high)
> +		     : "b" (n.low), "c" (n.high)
> +		     : "memory");
> +
> +	if (unlikely(!ret))
> +		*old = o.full;
> +
> +	return likely(ret);
> +}
> +
> +#define system_has_cmpxchg_double()	boot_cpu_has(X86_FEATURE_CX16)
> +#define system_has_cmpxchg128()		boot_cpu_has(X86_FEATURE_CX16)
>
>  #endif /* _ASM_X86_CMPXCHG_64_H */