From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <0caa0a41-4eb9-1683-8aa5-cc830b12dfe3@free.fr>
Date: Tue, 11 Jul 2023 11:01:17 +0200
Subject: Re: RFC: Faster memtest (possibly bypassing data cache)
From: Marc Gonzalez <marc.w.gonzalez@free.fr>
To: LKML, linux-mm@kvack.org, Linux ARM
Cc: Vladimir Murzin, Will Deacon, Mark Rutland, Robin Murphy,
 Thomas Gleixner, Tomas Mudrunka, H. Peter Anvin, Ingo Molnar,
 Arnd Bergmann, Ard Biesheuvel
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

On 05/07/2023 17:41, Marc Gonzalez wrote:

> Hello,
>
> When dealing with a few million devices (x86 and arm64),
> it is statistically expected to have "a few" devices with
> at least one bad RAM cell. (How many?)
>
> For one particular model, we've determined that ~0.1% have
> at least one bad RAM cell (ergo, a few thousand devices).
>
> I've been wondering if someone more experienced knows:
> Are these RAM cells bad from the start, or do they become bad
> with time? (I assume both failure modes exist.)
>
> Once the first bad cell is detected, is it more likely
> to detect other bad cells as time goes by?
> In other words, what are the failure modes of ageing RAM?
>
>
> Closing the HW tangent, focusing on the SW side of things:
>
> Since these bad RAM cells wreak havoc for the device's user,
> especially with ASLR (different stuff crashes across reboots),
> I've been experimenting with mm/memtest.c as a first line
> of defense against bad RAM cells.
>
> However, I have run into a few issues.
>
> Even though early_memtest is called, well... early, memory has
> already been mapped as regular *cached* memory.
>
> This means that when we test an area smaller than L3 cache, we're
> not even hitting RAM, we're just testing the cache hierarchy.
> I suppose it /might/ make sense to test the cache hierarchy,
> as it could(?) have errors as well?
> However, I suspect defects in cache are much more rare
> (and thus detection might not be worth the added run-time).
>
> On x86, I ran a few tests using SIMD non-temporal stores
> (to bypass the cache on stores), and got a 30% reduction in run-time.
> (Minimal run-time is critical for being able to deploy the code
> to millions of devices for the benefit of a few thousand users.)
> AFAIK, there are no non-temporal loads; the normal loads probably
> thrashed the data cache.
>
> I was hoping to be able to test a different implementation:
>
> When we enter early_memtest(), we remap [start, end]
> as UC (or maybe WC?) so as to entirely bypass the cache.
> We read/write using the largest size available for stores/loads,
> e.g. entire cache lines on recent x86 HW.
> Then when we leave, we remap as was done originally.
>
> Is that possible?
>
> Hopefully, the other cores are not started at this point?
> (Otherwise this whole charade would be pointless.)
>
> To summarize: is it possible to tweak memtest to make it
> run faster while testing RAM in all cases?

Hello again,

I had a short chat with Robin on IRC. He said trying to bypass
the cache altogether was a bad idea(TM) performance-wise.

Do others agree with this assessment? :)
I would like to read people's thoughts about the whole thing.

What is the kernel API to flush a kernel memory range to memory?

	int flush_cache_to_memory(void *va_start, void *va_end);

On aarch64, I would test LDNP/STNP. Possibly also LD4/ST4.

Regards,
Marc
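[Editor's note] For readers following the thread: the non-temporal-store experiment described in the quoted message can be sketched in userspace with SSE2 intrinsics. This is only an illustration of the technique, not the actual mm/memtest.c code (which walks physical memory ranges at boot); the function name `memtest_nt` and the buffer setup are invented for the example.

```c
#include <emmintrin.h>  /* SSE2: _mm_stream_si128, _mm_sfence, _mm_set1_epi64x */
#include <stdint.h>
#include <stddef.h>

/*
 * Fill [buf, buf+len) with a pattern using non-temporal stores
 * (which bypass the cache on the store side), then read it back
 * with ordinary loads and count mismatching 64-bit words.
 * buf must be 16-byte aligned and len a multiple of 16.
 */
static size_t memtest_nt(void *buf, size_t len, uint64_t pattern)
{
    __m128i v = _mm_set1_epi64x((long long)pattern);
    char *p = buf;
    uint64_t *q = buf;
    size_t i, bad = 0;

    for (i = 0; i < len; i += 16)
        _mm_stream_si128((__m128i *)(p + i), v);
    _mm_sfence();  /* make the NT stores globally visible before reading back */

    for (i = 0; i < len / 8; i++)
        if (q[i] != pattern)
            bad++;
    return bad;
}
```

As the quoted message notes, only the store side bypasses the cache here: the verification loads still go through (and thrash) the cache hierarchy.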
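[Editor's note] The `flush_cache_to_memory()` prototype above is the author's own invented name, not an existing kernel API. On x86 the closest in-kernel helper is `clflush_cache_range()`, and arm64 has clean/invalidate-to-PoC routines. A minimal userspace sketch of the idea, assuming a 64-byte cache line:

```c
#include <emmintrin.h>  /* _mm_clflush, _mm_mfence */
#include <stdint.h>

#define CACHELINE 64  /* assumed line size; real code should query it */

/*
 * Write back and invalidate every cache line covering [va_start, va_end),
 * so a later uncached access observes what actually reached DRAM.
 * Userspace sketch of the hypothetical flush_cache_to_memory() above.
 */
static void flush_cache_to_memory(void *va_start, void *va_end)
{
    char *p = (char *)((uintptr_t)va_start & ~(uintptr_t)(CACHELINE - 1));

    for (; p < (char *)va_end; p += CACHELINE)
        _mm_clflush(p);
    _mm_mfence();  /* order the flushes against surrounding accesses */
}
```

A flush-then-reread scheme like this keeps the range cacheable (addressing Robin's performance concern) while still forcing the verification reads to come from RAM.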