Date: Sat, 7 Jan 2023 12:10:27 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Linus Torvalds
Cc: "Kirill A. Shutemov", Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	x86@kernel.org, Kostya Serebryany, Andrey Ryabinin, Andrey Konovalov,
	Alexander Potapenko, Taras Madan, Dmitry Vyukov, "H. J. Lu",
	Andi Kleen, Rick Edgecombe, Bharata B Rao, Jacob Pan, Ashok Raj,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv13 05/16] x86/uaccess: Provide untagged_addr() and remove
	tags before address check
Message-ID: <20230107091027.xbikgiizkeegofdd@box.shutemov.name>
References: <20221227030829.12508-1-kirill.shutemov@linux.intel.com>
	<20221227030829.12508-6-kirill.shutemov@linux.intel.com>
	<20221231001029.5nckrhtmwahb65jo@box>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Fri, Dec 30, 2022 at
04:42:05PM -0800, Linus Torvalds wrote:
> The one thing that that "shift by 63 and bitwise or" trick does
> require is that the _ASM_EXTABLE_UA() thing for getuser/putuser would
> have to have an extra annotation to shut up the
>
>         WARN_ONCE(trapnr == X86_TRAP_GP, "General protection fault in
>                   user access. Non-canonical address?");
>
> in ex_handler_uaccess() for the GP trap that users can now cause by
> giving a non-canonical address with the high bit clear. So we'd
> probably just want a new EX_TYPE_* for these cases, but that still
> looks fairly straightforward.

Plain _ASM_EXTABLE() seems to do the trick.

> Hmm?

Here's what I've come up with:

diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index b70d98d79a9d..3e69e3727769 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -37,22 +37,22 @@
 #define ASM_BARRIER_NOSPEC ALTERNATIVE "", "lfence", X86_FEATURE_LFENCE_RDTSC
 
-#ifdef CONFIG_X86_5LEVEL
-#define LOAD_TASK_SIZE_MINUS_N(n) \
-	ALTERNATIVE __stringify(mov $((1 << 47) - 4096 - (n)),%rdx), \
-		    __stringify(mov $((1 << 56) - 4096 - (n)),%rdx), X86_FEATURE_LA57
-#else
-#define LOAD_TASK_SIZE_MINUS_N(n) \
-	mov $(TASK_SIZE_MAX - (n)),%_ASM_DX
-#endif
+.macro check_range size:req
+.if IS_ENABLED(CONFIG_X86_64)
+	mov %rax, %rdx
+	sar $63, %rdx
+	or %rdx, %rax
+.else
+	cmp $TASK_SIZE_MAX-\size+1, %eax
+	jae .Lbad_get_user
+	sbb %edx, %edx		/* array_index_mask_nospec() */
+	and %edx, %eax
+.endif
+.endm
 
 .text
 SYM_FUNC_START(__get_user_1)
-	LOAD_TASK_SIZE_MINUS_N(0)
-	cmp %_ASM_DX,%_ASM_AX
-	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
-	and %_ASM_DX, %_ASM_AX
+	check_range size=1
 	ASM_STAC
 1:	movzbl (%_ASM_AX),%edx
 	xor %eax,%eax
@@ -62,11 +62,7 @@ SYM_FUNC_END(__get_user_1)
 EXPORT_SYMBOL(__get_user_1)
 
 SYM_FUNC_START(__get_user_2)
-	LOAD_TASK_SIZE_MINUS_N(1)
-	cmp %_ASM_DX,%_ASM_AX
-	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
-	and %_ASM_DX, %_ASM_AX
+	check_range size=2
 	ASM_STAC
 2:	movzwl (%_ASM_AX),%edx
 	xor %eax,%eax
@@ -76,11 +72,7 @@ SYM_FUNC_END(__get_user_2)
 EXPORT_SYMBOL(__get_user_2)
 
 SYM_FUNC_START(__get_user_4)
-	LOAD_TASK_SIZE_MINUS_N(3)
-	cmp %_ASM_DX,%_ASM_AX
-	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
-	and %_ASM_DX, %_ASM_AX
+	check_range size=4
 	ASM_STAC
 3:	movl (%_ASM_AX),%edx
 	xor %eax,%eax
@@ -90,30 +82,17 @@ SYM_FUNC_END(__get_user_4)
 EXPORT_SYMBOL(__get_user_4)
 
 SYM_FUNC_START(__get_user_8)
-#ifdef CONFIG_X86_64
-	LOAD_TASK_SIZE_MINUS_N(7)
-	cmp %_ASM_DX,%_ASM_AX
-	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
-	and %_ASM_DX, %_ASM_AX
+	check_range size=8
 	ASM_STAC
+#ifdef CONFIG_X86_64
 4:	movq (%_ASM_AX),%rdx
-	xor %eax,%eax
-	ASM_CLAC
-	RET
 #else
-	LOAD_TASK_SIZE_MINUS_N(7)
-	cmp %_ASM_DX,%_ASM_AX
-	jae bad_get_user_8
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
-	and %_ASM_DX, %_ASM_AX
-	ASM_STAC
 4:	movl (%_ASM_AX),%edx
 5:	movl 4(%_ASM_AX),%ecx
+#endif
 	xor %eax,%eax
 	ASM_CLAC
 	RET
-#endif
 SYM_FUNC_END(__get_user_8)
 EXPORT_SYMBOL(__get_user_8)
 
@@ -166,7 +145,7 @@ EXPORT_SYMBOL(__get_user_nocheck_8)
 SYM_CODE_START_LOCAL(.Lbad_get_user_clac)
 	ASM_CLAC
-bad_get_user:
+.Lbad_get_user:
 	xor %edx,%edx
 	mov $(-EFAULT),%_ASM_AX
 	RET
@@ -184,23 +163,23 @@ SYM_CODE_END(.Lbad_get_user_8_clac)
 #endif
 
 /* get_user */
-	_ASM_EXTABLE_UA(1b, .Lbad_get_user_clac)
-	_ASM_EXTABLE_UA(2b, .Lbad_get_user_clac)
-	_ASM_EXTABLE_UA(3b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(1b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(2b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(3b, .Lbad_get_user_clac)
 #ifdef CONFIG_X86_64
-	_ASM_EXTABLE_UA(4b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(4b, .Lbad_get_user_clac)
 #else
-	_ASM_EXTABLE_UA(4b, .Lbad_get_user_8_clac)
-	_ASM_EXTABLE_UA(5b, .Lbad_get_user_8_clac)
+	_ASM_EXTABLE(4b, .Lbad_get_user_8_clac)
+	_ASM_EXTABLE(5b, .Lbad_get_user_8_clac)
 #endif
 
 /* __get_user */
-	_ASM_EXTABLE_UA(6b, .Lbad_get_user_clac)
-	_ASM_EXTABLE_UA(7b, .Lbad_get_user_clac)
-	_ASM_EXTABLE_UA(8b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(6b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(7b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(8b, .Lbad_get_user_clac)
 #ifdef CONFIG_X86_64
-	_ASM_EXTABLE_UA(9b, .Lbad_get_user_clac)
+	_ASM_EXTABLE(9b, .Lbad_get_user_clac)
 #else
-	_ASM_EXTABLE_UA(9b, .Lbad_get_user_8_clac)
-	_ASM_EXTABLE_UA(10b, .Lbad_get_user_8_clac)
+	_ASM_EXTABLE(9b, .Lbad_get_user_8_clac)
+	_ASM_EXTABLE(10b, .Lbad_get_user_8_clac)
 #endif
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 32125224fcca..0ec57997a764 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -33,20 +33,20 @@
  * as they get called from within inline assembly.
  */
 
-#ifdef CONFIG_X86_5LEVEL
-#define LOAD_TASK_SIZE_MINUS_N(n) \
-	ALTERNATIVE __stringify(mov $((1 << 47) - 4096 - (n)),%rbx), \
-		    __stringify(mov $((1 << 56) - 4096 - (n)),%rbx), X86_FEATURE_LA57
-#else
-#define LOAD_TASK_SIZE_MINUS_N(n) \
-	mov $(TASK_SIZE_MAX - (n)),%_ASM_BX
-#endif
+.macro check_range size:req
+.if IS_ENABLED(CONFIG_X86_64)
+	movq %rcx, %rbx
+	sarq $63, %rbx
+	orq %rbx, %rcx
+.else
+	cmp $TASK_SIZE_MAX-\size+1, %ecx
+	jae .Lbad_put_user
+.endif
+.endm
 
 .text
 SYM_FUNC_START(__put_user_1)
-	LOAD_TASK_SIZE_MINUS_N(0)
-	cmp %_ASM_BX,%_ASM_CX
-	jae .Lbad_put_user
+	check_range size=1
 	ASM_STAC
 1:	movb %al,(%_ASM_CX)
 	xor %ecx,%ecx
@@ -66,9 +66,7 @@ SYM_FUNC_END(__put_user_nocheck_1)
 EXPORT_SYMBOL(__put_user_nocheck_1)
 
 SYM_FUNC_START(__put_user_2)
-	LOAD_TASK_SIZE_MINUS_N(1)
-	cmp %_ASM_BX,%_ASM_CX
-	jae .Lbad_put_user
+	check_range size=2
 	ASM_STAC
 3:	movw %ax,(%_ASM_CX)
 	xor %ecx,%ecx
@@ -88,9 +86,7 @@ SYM_FUNC_END(__put_user_nocheck_2)
 EXPORT_SYMBOL(__put_user_nocheck_2)
 
 SYM_FUNC_START(__put_user_4)
-	LOAD_TASK_SIZE_MINUS_N(3)
-	cmp %_ASM_BX,%_ASM_CX
-	jae .Lbad_put_user
+	check_range size=4
 	ASM_STAC
 5:	movl %eax,(%_ASM_CX)
 	xor %ecx,%ecx
@@ -110,9 +106,7 @@ SYM_FUNC_END(__put_user_nocheck_4)
 EXPORT_SYMBOL(__put_user_nocheck_4)
 
 SYM_FUNC_START(__put_user_8)
-	LOAD_TASK_SIZE_MINUS_N(7)
-	cmp %_ASM_BX,%_ASM_CX
-	jae .Lbad_put_user
+	check_range size=8
 	ASM_STAC
 7:	mov %_ASM_AX,(%_ASM_CX)
 #ifdef CONFIG_X86_32
@@ -144,15 +138,15 @@ SYM_CODE_START_LOCAL(.Lbad_put_user_clac)
 	RET
 SYM_CODE_END(.Lbad_put_user_clac)
 
-	_ASM_EXTABLE_UA(1b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(2b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(3b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(4b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(5b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(6b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(7b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(9b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(1b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(2b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(3b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(4b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(5b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(6b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(7b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(9b, .Lbad_put_user_clac)
 #ifdef CONFIG_X86_32
-	_ASM_EXTABLE_UA(8b, .Lbad_put_user_clac)
-	_ASM_EXTABLE_UA(10b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(8b, .Lbad_put_user_clac)
+	_ASM_EXTABLE(10b, .Lbad_put_user_clac)
 #endif

-- 
  Kiryl Shutsemau / Kirill A. Shutemov