Date: Wed, 26 Oct 2022 19:54:29 +0100
From: Conor Dooley
To: Wen Yao
Cc: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    dennis@kernel.org, tj@kernel.org, cl@linux.com,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: Re: [PATCH 1/2] riscv: percpu:Add riscv percpu operations
References: <20221026104015.565468-1-haiwenyao@uniontech.com>
    <20221026104015.565468-2-haiwenyao@uniontech.com>
In-Reply-To: <20221026104015.565468-2-haiwenyao@uniontech.com>
Hey Wen Yao,

Couple comments for you.

On Wed, Oct 26, 2022 at 06:40:14PM +0800, Wen Yao wrote:
> riscv: percpu:Add riscv percpu operations

Can you please consistently use ": " between parts of the commit
messages? For both this and patch 2/2.

> This patch use riscv AMO(Atomic Memory Operation) instructions to

nit: s/This patch/Use
(or better: "Optimise some ... using RISC-V AMO (Atomic...")

> optimise some this_cpu_and this_cpu_or this_cpu_add operations.
> It reuse cmpxchg_local() to impletment this_cpu_cmpxchg macros.

s/It reuse/Reuse/, and "impletment" is a typo.

> It reuse xchg_relaxed() to impletment this_cpu_xchg macros.
>
> Signed-off-by: Wen Yao
> ---
>  arch/riscv/include/asm/percpu.h | 101 ++++++++++++++++++++++++++++++++
>  1 file changed, 101 insertions(+)
>  create mode 100644 arch/riscv/include/asm/percpu.h
>
> diff --git a/arch/riscv/include/asm/percpu.h b/arch/riscv/include/asm/percpu.h
> new file mode 100644
> index 000000000000..ae796e328442
> --- /dev/null
> +++ b/arch/riscv/include/asm/percpu.h
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2022 Union Tech Software Technology Corporation Limited
> + */
> +#ifndef __ASM_PERCPU_H
> +#define __ASM_PERCPU_H
> +
> +#include
> +
> +#define PERCPU_OP(op, asm_op, c_op)                                   \
> +    static inline unsigned long __percpu_##op(void *ptr,              \

Can you please make sure that these \s are actually aligned & swap the
spaces you've used for tabs? The other files that I checked in this
directory all use tabs for \ alignment in macros.

Thanks,
Conor.

> +                                              unsigned long val, int size) \
> +    {                                                                 \
> +        unsigned long ret;                                            \
> +        switch (size) {                                               \
> +        case 4:                                                       \
> +            __asm__ __volatile__(                                     \
> +                "amo" #asm_op ".w"                                    \
> +                " %[ret], %[val], %[ptr]\n"                           \
> +                : [ret] "=&r"(ret), [ptr] "+A"(*(u32 *)ptr)           \
> +                : [val] "r"(val));                                    \
> +            break;                                                    \
> +        case 8:                                                       \
> +            __asm__ __volatile__(                                     \
> +                "amo" #asm_op ".d"                                    \
> +                " %[ret], %[val], %[ptr]\n"                           \
> +                : [ret] "=&r"(ret), [ptr] "+A"(*(u64 *)ptr)           \
> +                : [val] "r"(val));                                    \
> +            break;                                                    \
> +        default:                                                      \
> +            ret = 0;                                                  \
> +            BUILD_BUG();                                              \
> +        }                                                             \
> +                                                                      \
> +        return ret c_op val;                                          \
> +    }
> +
> +PERCPU_OP(add, add, +)
> +PERCPU_OP(and, and, &)
> +PERCPU_OP(or, or, |)
> +#undef PERCPU_OP
> +
> +/* this_cpu_xchg */
> +#define _protect_xchg_local(pcp, val)                                 \
> +    ({                                                                \
> +        typeof(*raw_cpu_ptr(&(pcp))) __ret;                           \
> +        preempt_disable_notrace();                                    \
> +        __ret = xchg_relaxed(raw_cpu_ptr(&(pcp)), val);               \
> +        preempt_enable_notrace();                                     \
> +        __ret;                                                        \
> +    })
> +
> +/* this_cpu_cmpxchg */
> +#define _protect_cmpxchg_local(pcp, o, n)                             \
> +    ({                                                                \
> +        typeof(*raw_cpu_ptr(&(pcp))) __ret;                           \
> +        preempt_disable_notrace();                                    \
> +        __ret = cmpxchg_local(raw_cpu_ptr(&(pcp)), o, n);             \
> +        preempt_enable_notrace();                                     \
> +        __ret;                                                        \
> +    })
> +
> +#define _pcp_protect(operation, pcp, val)                             \
> +    ({                                                                \
> +        typeof(pcp) __retval;                                         \
> +        preempt_disable_notrace();                                    \
> +        __retval = (typeof(pcp))operation(raw_cpu_ptr(&(pcp)), (val), \
> +                                          sizeof(pcp));               \
> +        preempt_enable_notrace();                                     \
> +        __retval;                                                     \
> +    })
> +
> +#define _percpu_add(pcp, val) _pcp_protect(__percpu_add, pcp, val)
> +
> +#define _percpu_add_return(pcp, val) _percpu_add(pcp, val)
> +
> +#define _percpu_and(pcp, val) _pcp_protect(__percpu_and, pcp, val)
> +
> +#define _percpu_or(pcp, val) _pcp_protect(__percpu_or, pcp, val)
> +
> +#define this_cpu_add_4(pcp, val) _percpu_add(pcp, val)
> +#define this_cpu_add_8(pcp, val) _percpu_add(pcp, val)
> +
> +#define this_cpu_add_return_4(pcp, val) _percpu_add_return(pcp, val)
> +#define this_cpu_add_return_8(pcp, val) _percpu_add_return(pcp, val)
> +
> +#define this_cpu_and_4(pcp, val) _percpu_and(pcp, val)
> +#define this_cpu_and_8(pcp, val) _percpu_and(pcp, val)
> +
> +#define this_cpu_or_4(pcp, val) _percpu_or(pcp, val)
> +#define this_cpu_or_8(pcp, val) _percpu_or(pcp, val)
> +
> +#define this_cpu_xchg_4(pcp, val) _protect_xchg_local(pcp, val)
> +#define this_cpu_xchg_8(pcp, val) _protect_xchg_local(pcp, val)
> +
> +#define this_cpu_cmpxchg_4(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
> +#define this_cpu_cmpxchg_8(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
> +
> +#include
> +
> +#endif /* __ASM_PERCPU_H */
> --
> 2.25.1
>