linux-mm.kvack.org archive mirror
From: Marc Gonzalez <marc.w.gonzalez@free.fr>
To: LKML <linux-kernel@vger.kernel.org>, linux-mm@kvack.org
Cc: Vladimir Murzin <vladimir.murzin@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>,
	Tomas Mudrunka <tomas.mudrunka@gmail.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@kernel.org>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: RFC: Faster memtest (possibly bypassing data cache)
Date: Wed, 5 Jul 2023 17:41:01 +0200	[thread overview]
Message-ID: <b74d93bc-ed02-4019-e5e4-0841bdceb44c@free.fr> (raw)

Hello,

When dealing with a few million devices (x86 and arm64),
one statistically expects "a few" devices to have
at least one bad RAM cell. (How many?)

For one particular model, we've determined that ~0.1% have
at least one bad RAM cell (ergo, a few thousand devices).

I've been wondering if someone more experienced knows:
Are these RAM cells bad from the start, or do they become bad
with time? (I assume both failure modes exist.)

Once the first bad cell is detected, is it more likely
to detect other bad cells as time goes by?
In other words, what are the failure modes of ageing RAM?


Closing the HW tangent, focusing on the SW side of things:

Since these bad RAM cells wreak havoc for the device's user,
especially with ASLR (different stuff crashes across reboots),
I've been experimenting with mm/memtest.c as a first line
of defense against bad RAM cells.

However, I have run into a few issues.

Even though early_memtest is called, well... early, memory has
already been mapped as regular *cached* memory.

This means that when we test an area smaller than L3 cache, we're
not even hitting RAM, we're just testing the cache hierarchy.
I suppose it /might/ make sense to test the cache hierarchy,
as it could(?) have errors as well?
However, I suspect defects in cache are much more rare
(and thus detection might not be worth the added run-time).

On x86, I ran a few tests using SIMD non-temporal stores
(to bypass the cache on stores), and got 30% reduction in run-time.
(Minimal run-time is critical for being able to deploy the code
to millions of devices for the benefit of a few thousand users.)
AFAIK, there are no non-temporal loads (from WB memory),
so the normal loads probably thrashed the data cache.

I was hoping to be able to test a different implementation:

When we enter early_memtest(), we remap [start, end]
as UC (or maybe WC?) so as to entirely bypass the cache.
We read/write using the largest size available for stores/loads,
e.g. entire cache lines on recent x86 HW.
Then when we leave, we remap as was done originally.
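
At pseudocode level, the proposed flow might look like the sketch
below. It assumes x86's set_memory_uc()/set_memory_wb() (or some
equivalent) are usable this early in boot, which is precisely the
open question; do_one_pass() stands for the existing memtest loop:

```c
/* Hypothetical sketch, not working code. */
static void memtest_uncached(u64 pattern, phys_addr_t start, phys_addr_t end)
{
	unsigned long vaddr = (unsigned long)__va(start);
	int pages = (end - start) >> PAGE_SHIFT;

	set_memory_uc(vaddr, pages);	/* bypass the cache entirely */
	do_one_pass(pattern, start, end);
	set_memory_wb(vaddr, pages);	/* restore the original mapping */
}
```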

Is that possible?

Hopefully, the other cores are not started at this point?
(Otherwise this whole charade would be pointless.)

To summarize: is it possible to tweak memtest to make it
run faster while testing RAM in all cases?

Regards,

Marc Gonzalez


Thread overview: 2+ messages
2023-07-05 15:41 Marc Gonzalez [this message]
2023-07-11  9:01 ` Marc Gonzalez
