From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 19 Aug 2024 10:57:50 +0100
From: Jonathan Cameron
To: Tong Tiangen
CC: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
 James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
 Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
 Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, "H. Peter Anvin", Guohanjun
Subject: Re: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
Message-ID: <20240819105750.00001269@Huawei.com>
In-Reply-To: <20240528085915.1955987-2-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>
 <20240528085915.1955987-2-tongtiangen@huawei.com>
Organization: Huawei Technologies Research and Development (UK) Ltd.

On Tue, 28 May 2024 16:59:10 +0800
Tong Tiangen wrote:

> x86/powerpc has its own implementation of copy_mc_to_user(); we add a
> generic fallback in include/linux/uaccess.h to prepare for other
> architectures to enable CONFIG_ARCH_HAS_COPY_MC.
>
> Signed-off-by: Tong Tiangen
> Acked-by: Michael Ellerman

Seems like a sensible approach to me, given the existing fallbacks in
x86 when the relevant features are disabled.

It may be worth exploring at some point whether some of the special
casing in the callers of this function can also be removed now that
there is a default version. There are some small differences, but I've
not analyzed whether they matter.
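FWIW, a purely illustrative sketch of the calling convention the
fallback preserves (example_read() is a hypothetical caller, not
anything in this series): the return value is the number of bytes not
copied, exactly as for copy_to_user(), so a caller's existing error
handling carries over unchanged.

/* Hypothetical caller, for illustration only. */
static ssize_t example_read(void __user *ubuf, const void *kbuf, size_t len)
{
	/*
	 * On x86/powerpc this may take a machine-check-safe path;
	 * with the generic fallback it is plain copy_to_user().
	 */
	unsigned long left = copy_mc_to_user(ubuf, kbuf, len);

	if (left)
		return -EFAULT;	/* bytes remained uncopied */

	return len;		/* everything copied */
}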
Reviewed-by: Jonathan Cameron

> ---
>  arch/powerpc/include/asm/uaccess.h | 1 +
>  arch/x86/include/asm/uaccess.h     | 1 +
>  include/linux/uaccess.h            | 8 ++++++++
>  3 files changed, 10 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index de10437fd206..df42e6ad647f 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
>
>  	return n;
>  }
> +#define copy_mc_to_user copy_mc_to_user
>  #endif
>
>  extern long __copy_from_user_flushcache(void *dst, const void __user *src,
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index 0f9bab92a43d..309f2439327e 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
>
>  unsigned long __must_check
>  copy_mc_to_user(void __user *to, const void *from, unsigned len);
> +#define copy_mc_to_user copy_mc_to_user
>  #endif
>
>  /*
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index 3064314f4832..0dfa9241b6ee 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
>  }
>  #endif
>
> +#ifndef copy_mc_to_user
> +static inline unsigned long __must_check
> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
> +{
> +	return copy_to_user(dst, src, cnt);
> +}
> +#endif
> +
>  static __always_inline void pagefault_disabled_inc(void)
>  {
>  	current->pagefault_disabled++;
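For readers unfamiliar with the "#define copy_mc_to_user copy_mc_to_user"
idiom the patch adds: the architecture header defines the symbol as a
macro expanding to itself, so the generic header's #ifndef sees it as
already provided and skips the fallback. A minimal standalone
illustration (plain userspace C, all names made up for the example):

#include <stdio.h>

/* Flip this to 0 to drop the "arch" override and use the fallback. */
#define HAVE_ARCH_FOO 1

#if HAVE_ARCH_FOO
/* "arch" header: provides foo() and marks it as present. */
static int foo(void) { return 1; }
#define foo foo
#endif

/* "generic" header: defines the fallback only if no override exists. */
#ifndef foo
static int foo(void) { return 0; }
#endif

int main(void)
{
	printf("%d\n", foo());	/* 1 with the override, 0 without */
	return 0;
}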