Date: Tue, 3 Jan 2023 16:19:30 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra
Cc: Boqun Feng, torvalds@linux-foundation.org, corbet@lwn.net,
 will@kernel.org, catalin.marinas@arm.com, dennis@kernel.org, tj@kernel.org,
 cl@linux.com, hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
 borntraeger@linux.ibm.com, svens@linux.ibm.com, Herbert Xu,
 davem@davemloft.net, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, joro@8bytes.org,
 suravee.suthikulpanit@amd.com, robin.murphy@arm.com, dwmw2@infradead.org,
 baolu.lu@linux.intel.com, Arnd Bergmann, penberg@kernel.org,
 rientjes@google.com, iamjoonsoo.kim@lge.com, Andrew Morton, vbabka@suse.cz,
 roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-s390@vger.kernel.org,
 linux-crypto@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org
Subject: Re: [RFC][PATCH 05/12] arch: Introduce arch_{,try_}_cmpxchg128{,_local}()
References: <20221219153525.632521981@infradead.org>
 <20221219154119.154045458@infradead.org>

On Tue, Jan 03, 2023 at 02:03:37PM +0000, Mark Rutland wrote:
> On Tue, Jan 03, 2023 at 01:25:35PM +0000, Mark Rutland wrote:
> > On Tue, Dec 20, 2022 at 12:08:16PM +0100, Peter Zijlstra wrote:
> > > On Mon, Dec 19, 2022 at 12:07:25PM -0800, Boqun Feng wrote:
> > > > On Mon, Dec 19, 2022 at 04:35:30PM +0100, Peter Zijlstra wrote:
> > > > > For all architectures that currently support cmpxchg_double()
> > > > > implement the cmpxchg128() family of functions that is basically the
> > > > > same but with a
> > > > > saner interface.
> > > > >
> > > > > Signed-off-by: Peter Zijlstra (Intel)
> > > > > ---
> > > > >  arch/arm64/include/asm/atomic_ll_sc.h |   38 +++++++++++++++++++++++
> > > > >  arch/arm64/include/asm/atomic_lse.h   |   33 +++++++++++++++++++-
> > > > >  arch/arm64/include/asm/cmpxchg.h      |   26 ++++++++++++++++
> > > > >  arch/s390/include/asm/cmpxchg.h       |   33 ++++++++++++++++++++
> > > > >  arch/x86/include/asm/cmpxchg_32.h     |    3 +
> > > > >  arch/x86/include/asm/cmpxchg_64.h     |   55 +++++++++++++++++++++++++++++++++-
> > > > >  6 files changed, 185 insertions(+), 3 deletions(-)
> > > > >
> > > > > --- a/arch/arm64/include/asm/atomic_ll_sc.h
> > > > > +++ b/arch/arm64/include/asm/atomic_ll_sc.h
> > > > > @@ -326,6 +326,44 @@ __CMPXCHG_DBL(   ,        ,  ,         )
> > > > >  __CMPXCHG_DBL(_mb, dmb ish, l, "memory")
> > > > >
> > > > >  #undef __CMPXCHG_DBL
> > > > > +
> > > > > +union __u128_halves {
> > > > > +	u128 full;
> > > > > +	struct {
> > > > > +		u64 low, high;
> > > > > +	};
> > > > > +};
> > > > > +
> > > > > +#define __CMPXCHG128(name, mb, rel, cl)					\
> > > > > +static __always_inline u128						\
> > > > > +__ll_sc__cmpxchg128##name(volatile u128 *ptr, u128 old, u128 new)	\
> > > > > +{									\
> > > > > +	union __u128_halves r, o = { .full = (old) },			\
> > > > > +			    n = { .full = (new) };			\
> > > > > +									\
> > > > > +	asm volatile("// __cmpxchg128" #name "\n"			\
> > > > > +	"	prfm	pstl1strm, %2\n"				\
> > > > > +	"1:	ldxp	%0, %1, %2\n"					\
> > > > > +	"	eor	%3, %0, %3\n"					\
> > > > > +	"	eor	%4, %1, %4\n"					\
> > > > > +	"	orr	%3, %4, %3\n"					\
> > > > > +	"	cbnz	%3, 2f\n"					\
> > > > > +	"	st" #rel "xp	%w3, %5, %6, %2\n"			\
> > > > > +	"	cbnz	%w3, 1b\n"					\
> > > > > +	"	" #mb "\n"						\
> > > > > +	"2:"								\
> > > > > +	: "=&r" (r.low), "=&r" (r.high), "+Q" (*(unsigned long *)ptr)	\
> > > >
> > > > I wonder whether we should use "(*(u128 *)ptr)" instead of "(*(unsigned
> > > > long *) ptr)"? Because compilers may think only 64bit value pointed by
> > > > "ptr" gets modified, and they are allowed to do "useful" optimization.
> > >
> > > In this I've copied the existing cmpxchg_double() code; I'll have to let
> > > the arch folks speak here, I've no clue.
> >
> > We definitely need to ensure the compiler sees we poke the whole thing, or it
> > can get this horribly wrong, so that is a latent bug.
> >
> > See commit:
> >
> >   fee960bed5e857eb ("arm64: xchg: hazard against entire exchange variable")
> >
> > ... for examples of GCC being clever, where I overlooked the *_double() cases.
>
> Using __uint128_t instead, e.g.
>
> diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
> index 0890e4f568fb7..cbb3d961123b1 100644
> --- a/arch/arm64/include/asm/atomic_ll_sc.h
> +++ b/arch/arm64/include/asm/atomic_ll_sc.h
> @@ -315,7 +315,7 @@ __ll_sc__cmpxchg_double##name(unsigned long old1,	\
>  	"	cbnz	%w0, 1b\n"					\
>  	"	" #mb "\n"						\
>  	"2:"								\
> -	: "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr)	\
> +	: "=&r" (tmp), "=&r" (ret), "+Q" (*(__uint128_t *)ptr)		\
>  	: "r" (old1), "r" (old2), "r" (new1), "r" (new2)		\
>  	: cl);								\
>  									\
> diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
> index 52075e93de6c0..a94d6dacc0292 100644
> --- a/arch/arm64/include/asm/atomic_lse.h
> +++ b/arch/arm64/include/asm/atomic_lse.h
> @@ -311,7 +311,7 @@ __lse__cmpxchg_double##name(unsigned long old1,		\
>  	"	eor	%[old2], %[old2], %[oldval2]\n"			\
>  	"	orr	%[old1], %[old1], %[old2]"			\
>  	: [old1] "+&r" (x0), [old2] "+&r" (x1),				\
> -	  [v] "+Q" (*(unsigned long *)ptr)				\
> +	  [v] "+Q" (*(__uint128_t *)ptr)				\
>  	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),		\
>  	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)		\
>  	: cl);								\
>
> ... makes GCC much happier:
>
> ... I'll go check whether clang is happy with that, and how far back that can
> go, otherwise we'll need to blat the high half with a separate constraint that
> (ideally) doesn't end up allocating a pointless address register.

Hmm...
from the commit history it looks like GCC prior to 5.1 might not be happy with
that, but that *might* just be if we actually do arithmetic on the value, and
we might be ok just using it for memory effects. I can't currently get such an
old GCC to run on my machines so I haven't been able to check.

I'll dig into this a bit more tomorrow, but it looks like the options (for a
backport-suitable fix) will be:

(a) use a __uint128_t input+output, as above, if we're lucky

(b) introduce a second 64-bit input+output for the high half (likely a "+o")

(c) use a full memory clobber for ancient compilers.

Mark.