From: Kefeng Wang <wangkefeng.wang@huawei.com>
Date: Sat, 25 Dec 2021 10:05:16 +0800
Subject: Re: [PATCH] Revert "mm/usercopy: Drop extra is_vmalloc_or_module() check"
To: Christophe Leroy, Kees Cook, Laura Abbott, Mark Rutland, linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, linuxppc-dev@lists.ozlabs.org, Nick Piggin
References: <20211223102126.161848-1-wangkefeng.wang@huawei.com> <96fe1826-aeaf-4ea0-9f01-03d6b3933b34@huawei.com> <6e2ddc83-bec3-fdd4-4d91-3ade0de0b7c8@csgroup.eu>
In-Reply-To: <6e2ddc83-bec3-fdd4-4d91-3ade0de0b7c8@csgroup.eu>
On 2021/12/24 21:18, Christophe Leroy wrote:
>
> On 24/12/2021 at 08:06, Kefeng Wang wrote:
>> On 2021/12/24 14:01, Christophe Leroy wrote:
>>> On 23/12/2021 at 11:21, Kefeng Wang wrote:
>>>> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
>>>>
>>>>    usercopy: Kernel memory exposure attempt detected from SLUB
>>>>    object not in SLUB page?! (offset 0, size 1048)!
>>>>    kernel BUG at mm/usercopy.c:99
>>>>    ...
>>>>    usercopy_abort+0x64/0xa0 (unreliable)
>>>>    __check_heap_object+0x168/0x190
>>>>    __check_object_size+0x1a0/0x200
>>>>    dev_ethtool+0x2494/0x2b20
>>>>    dev_ioctl+0x5d0/0x770
>>>>    sock_do_ioctl+0xf0/0x1d0
>>>>    sock_ioctl+0x3ec/0x5a0
>>>>    __se_sys_ioctl+0xf0/0x160
>>>>    system_call_exception+0xfc/0x1f0
>>>>    system_call_common+0xf8/0x200
>>>>
>>>> When running "ethtool eth0", the BUG above is triggered; the code
>>>> involved is:
>>>>
>>>>    data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>>>    copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
>>>>
>>>> The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns
>>>> true on PowerPC64, which leads to the panic. Add back the
>>>> is_vmalloc_or_module() check to fix it.
>>>
>>> Is it expected that virt_addr_valid() returns true on PPC64 for
>>> vmalloc'ed memory? If that's the case, it also means that
>>> CONFIG_DEBUG_VIRTUAL won't work as expected either.
>>
>> Our product team reported this bug to me; after asking them to run
>> some tests, I found that virt_addr_valid() indeed returns true for
>> vmalloc'ed memory on their board.
>>
>> I suspect DEBUG_VIRTUAL does not work correctly either, but I am
>> unable to test it.
>>
>>> If it is unexpected, I think you should fix PPC64 instead of adding
>>> this hack back. Maybe the ARM64 fix can be used as a starting point,
>>> see commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
>>> __is_lm_address()")
>>
>> Yes, I checked the history; fixing virt_addr_valid() on PowerPC is
>> what I wanted to do in the first place, but I am not familiar with
>> PPC, and HARDENED_USERCOPY on other architectures could have this
>> issue too, so I added the workaround back.
>>
>> 1) PPC maintainers/experts, any suggestions?
>>
>> 2) Maybe we could add a check to WARN about this scenario:
>>
>> --- a/mm/usercopy.c
>> +++ b/mm/usercopy.c
>> @@ -229,6 +229,8 @@ static inline void check_heap_object(const void
>> *ptr, unsigned long n,
>>         if (!virt_addr_valid(ptr))
>>                 return;
>>
>> +       WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));
>>
>>> In the meantime, can you provide more information on your config,
>>> especially which memory model is used?
>>
>> Some useful configs:
>>
>> CONFIG_PPC64=y
>> CONFIG_PPC_BOOK3E_64=y
>> CONFIG_E5500_CPU=y
>> CONFIG_TARGET_CPU_BOOL=y
>> CONFIG_PPC_BOOK3E=y
>> CONFIG_E500=y
>> CONFIG_PPC_E500MC=y
>> CONFIG_PPC_FPU=y
>> CONFIG_FSL_EMB_PERFMON=y
>> CONFIG_FSL_EMB_PERF_EVENT=y
>> CONFIG_FSL_EMB_PERF_EVENT_E500=y
>> CONFIG_BOOKE=y
>> CONFIG_PPC_FSL_BOOK3E=y
>> CONFIG_PTE_64BIT=y
>> CONFIG_PHYS_64BIT=y
>> CONFIG_PPC_MMU_NOHASH=y
>> CONFIG_PPC_BOOK3E_MMU=y
>> CONFIG_SELECT_MEMORY_MODEL=y
>> CONFIG_FLATMEM_MANUAL=y
>> CONFIG_FLATMEM=y
>> CONFIG_FLAT_NODE_MEM_MAP=y
>> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
>
> OK, so it is PPC64 book3e with flatmem.
>
> The problem is virt_to_pfn(), which uses __pa().
>
> __pa(x) on PPC64 is (x) & 0x0fffffffffffffffUL
>
> And on book3e/64 we have:
>
> VMALLOC_START = KERN_VIRT_START = ASM_CONST(0x8000000000000000)
>
> It means that __pa() will return a valid PFN for vmalloc'ed addresses.
>
> So an additional check is required in virt_addr_valid(), maybe checking
> that (kaddr & PAGE_OFFSET) == PAGE_OFFSET.
>
> Can you try that?
>
> #define virt_addr_valid(kaddr) ((kaddr & PAGE_OFFSET) == PAGE_OFFSET && \
> 		pfn_valid(virt_to_pfn(kaddr)))
>
> Thanks
> Christophe
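
To make the arithmetic concrete, here is a quick userspace model of the
above. It only models the address arithmetic, not the kernel: the
VMALLOC_START and __pa() mask values are the ones quoted in this thread;
PAGE_OFFSET = 0xc000000000000000 is the standard ppc64 linear-map base
(not quoted above); pfn_valid() is idealized as a bound against an
assumed 2 GiB of RAM at PA 0, and 4K pages are assumed. None of those
assumptions changes the conclusion.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Constants: first two as quoted in this thread (ppc64 book3e). */
#define PA_MASK        0x0fffffffffffffffUL  /* __pa() mask on PPC64 */
#define VMALLOC_START  0x8000000000000000UL  /* KERN_VIRT_START */
#define PAGE_OFFSET    0xc000000000000000UL  /* ppc64 linear-map base */
#define PAGE_SHIFT     12                    /* assume 4K pages */

/* Idealized pfn_valid(): pretend the board has 2 GiB of RAM at PA 0. */
static int pfn_valid(uint64_t pfn)
{
	return pfn < ((2UL << 30) >> PAGE_SHIFT);
}

/* __pa() as quoted above: just mask off the top nibble. */
static uint64_t __pa(uint64_t va)
{
	return va & PA_MASK;
}

/* Current book3e behaviour: no check that kaddr is in the linear map. */
static int virt_addr_valid_old(uint64_t kaddr)
{
	return pfn_valid(__pa(kaddr) >> PAGE_SHIFT);
}

/* Proposed behaviour: require a linear-map address first. */
static int virt_addr_valid_new(uint64_t kaddr)
{
	return (kaddr & PAGE_OFFSET) == PAGE_OFFSET &&
	       pfn_valid(__pa(kaddr) >> PAGE_SHIFT);
}

int main(void)
{
	uint64_t lm = PAGE_OFFSET + 0x100000;    /* linear-map address */
	uint64_t vm = VMALLOC_START + 0x100000;  /* vmalloc'ed address */

	printf("linear map 0x%016" PRIx64 ": old=%d new=%d\n",
	       lm, virt_addr_valid_old(lm), virt_addr_valid_new(lm));
	printf("vmalloc    0x%016" PRIx64 ": old=%d new=%d\n",
	       vm, virt_addr_valid_old(vm), virt_addr_valid_new(vm));
	return 0;
}

The vmalloc'ed address prints old=1: both addresses mask down to the
same PFN, so the current check reports the vmalloc'ed one as valid,
which is exactly what sends check_heap_object() to an unrelated struct
page. The (kaddr & PAGE_OFFSET) == PAGE_OFFSET test rejects it (new=0)
while still accepting the linear-map address.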
I also found this commit:

commit 4dd7554a6456d124c85e0a4ad156625b71390b5c
Author: Nicholas Piggin
Date:   Wed Jul 24 18:46:37 2019 +1000

    powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses

    Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
    given a virtual address above PAGE_OFFSET.

It already checks PAGE_OFFSET in __pa(), so I will test your suggestion
and resend the patch (with the above warning changes).

Thanks.
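
For reference, this is roughly where the WARN_ON_ONCE() proposed earlier
would sit. The surrounding lines only paraphrase the shape of
check_heap_object() in mm/usercopy.c; the WARN line is the only
addition:

static inline void check_heap_object(const void *ptr, unsigned long n,
				     bool to_user)
{
	struct page *page;

	if (!virt_addr_valid(ptr))
		return;

	/*
	 * A correct virt_addr_valid() must reject vmalloc/module
	 * addresses; warn once on any architecture where it does not,
	 * instead of silently resolving ptr to an unrelated page below.
	 */
	WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));

	page = virt_to_head_page(ptr);

	if (PageSlab(page)) {
		/* Check slab allocator for flags and size. */
		__check_heap_object(ptr, n, page, to_user);
	} else {
		/* Verify the object does not span multiple pages. */
		check_page_span(ptr, n, page, to_user);
	}
}

Once virt_addr_valid() is fixed on book3e, the warning should never
fire; it only exists to catch other architectures with the same latent
problem.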