From: Wen Yao <haiwenyao@uniontech.com>
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, dennis@kernel.org, tj@kernel.org, cl@linux.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wen Yao <haiwenyao@uniontech.com>
Subject: [PATCH 1/2] riscv: percpu: Add riscv percpu operations
Date: Wed, 26 Oct 2022 18:40:14 +0800
Message-Id: <20221026104015.565468-2-haiwenyao@uniontech.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221026104015.565468-1-haiwenyao@uniontech.com>
References: <20221026104015.565468-1-haiwenyao@uniontech.com>
This patch uses RISC-V AMO (Atomic Memory Operation) instructions to
optimise the this_cpu_and, this_cpu_or and this_cpu_add operations.
It reuses cmpxchg_local() to implement the this_cpu_cmpxchg macros,
and xchg_relaxed() to implement the this_cpu_xchg macros.

Signed-off-by: Wen Yao <haiwenyao@uniontech.com>
---
 arch/riscv/include/asm/percpu.h | 101 ++++++++++++++++++++++++++++++++
 1 file changed, 101 insertions(+)
 create mode 100644 arch/riscv/include/asm/percpu.h

diff --git a/arch/riscv/include/asm/percpu.h b/arch/riscv/include/asm/percpu.h
new file mode 100644
index 000000000000..ae796e328442
--- /dev/null
+++ b/arch/riscv/include/asm/percpu.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2022 Union Tech Software Technology Corporation Limited
+ */
+#ifndef __ASM_PERCPU_H
+#define __ASM_PERCPU_H
+
+#include <asm/cmpxchg.h>
+
+#define PERCPU_OP(op, asm_op, c_op)					\
+static inline unsigned long __percpu_##op(void *ptr,			\
+					  unsigned long val, int size)	\
+{									\
+	unsigned long ret;						\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__(					\
+			"amo" #asm_op ".w"				\
+			" %[ret], %[val], %[ptr]\n"			\
+			: [ret] "=&r"(ret), [ptr] "+A"(*(u32 *)ptr)	\
+			: [val] "r"(val));				\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__(					\
+			"amo" #asm_op ".d"				\
+			" %[ret], %[val], %[ptr]\n"			\
+			: [ret] "=&r"(ret), [ptr] "+A"(*(u64 *)ptr)	\
+			: [val] "r"(val));				\
+		break;							\
+	default:							\
+		ret = 0;						\
+		BUILD_BUG();						\
+	}								\
+									\
+	return ret c_op val;						\
+}
+
+PERCPU_OP(add, add, +)
+PERCPU_OP(and, and, &)
+PERCPU_OP(or, or, |)
+#undef PERCPU_OP
+
+/* this_cpu_xchg */
+#define _protect_xchg_local(pcp, val)					\
+({									\
+	typeof(*raw_cpu_ptr(&(pcp))) __ret;				\
+	preempt_disable_notrace();					\
+	__ret = xchg_relaxed(raw_cpu_ptr(&(pcp)), val);			\
+	preempt_enable_notrace();					\
+	__ret;								\
+})
+
+/* this_cpu_cmpxchg */
+#define _protect_cmpxchg_local(pcp, o, n)				\
+({									\
+	typeof(*raw_cpu_ptr(&(pcp))) __ret;				\
+	preempt_disable_notrace();					\
+	__ret = cmpxchg_local(raw_cpu_ptr(&(pcp)), o, n);		\
+	preempt_enable_notrace();					\
+	__ret;								\
+})
+
+#define _pcp_protect(operation, pcp, val)				\
+({									\
+	typeof(pcp) __retval;						\
+	preempt_disable_notrace();					\
+	__retval = (typeof(pcp))operation(raw_cpu_ptr(&(pcp)), (val),	\
+					  sizeof(pcp));			\
+	preempt_enable_notrace();					\
+	__retval;							\
+})
+
+#define _percpu_add(pcp, val)		_pcp_protect(__percpu_add, pcp, val)
+
+#define _percpu_add_return(pcp, val)	_percpu_add(pcp, val)
+
+#define _percpu_and(pcp, val)		_pcp_protect(__percpu_and, pcp, val)
+
+#define _percpu_or(pcp, val)		_pcp_protect(__percpu_or, pcp, val)
+
+#define this_cpu_add_4(pcp, val)	_percpu_add(pcp, val)
+#define this_cpu_add_8(pcp, val)	_percpu_add(pcp, val)
+
+#define this_cpu_add_return_4(pcp, val) _percpu_add_return(pcp, val)
+#define this_cpu_add_return_8(pcp, val) _percpu_add_return(pcp, val)
+
+#define this_cpu_and_4(pcp, val)	_percpu_and(pcp, val)
+#define this_cpu_and_8(pcp, val)	_percpu_and(pcp, val)
+
+#define this_cpu_or_4(pcp, val)		_percpu_or(pcp, val)
+#define this_cpu_or_8(pcp, val)		_percpu_or(pcp, val)
+
+#define this_cpu_xchg_4(pcp, val)	_protect_xchg_local(pcp, val)
+#define this_cpu_xchg_8(pcp, val)	_protect_xchg_local(pcp, val)
+
+#define this_cpu_cmpxchg_4(ptr, o, n)	_protect_cmpxchg_local(ptr, o, n)
+#define this_cpu_cmpxchg_8(ptr, o, n)	_protect_cmpxchg_local(ptr, o, n)
+
+#include <asm-generic/percpu.h>
+
+#endif /* __ASM_PERCPU_H */
-- 
2.25.1