linux-mm.kvack.org archive mirror
From: "Luck, Tony" <tony.luck@intel.com>
To: Jiaqi Yan <jiaqiyan@google.com>,
	"naoya.horiguchi@nec.com" <naoya.horiguchi@nec.com>,
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
	"david@redhat.com" <david@redhat.com>
Cc: "Aktas, Erdem" <erdemaktas@google.com>,
	"pgonda@google.com" <pgonda@google.com>,
	"rientjes@google.com" <rientjes@google.com>,
	"Hsiao, Duen-wen" <duenwen@google.com>,
	"Vilas.Sridharan@amd.com" <Vilas.Sridharan@amd.com>,
	"Malvestuto, Mike" <mike.malvestuto@intel.com>,
	"gthelen@google.com" <gthelen@google.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"jthoughton@google.com" <jthoughton@google.com>
Subject: RE: [RFC] Kernel Support of Memory Error Detection.
Date: Thu, 3 Nov 2022 16:27:06 +0000	[thread overview]
Message-ID: <SJ1PR11MB6083060010644620F58AF44AFC389@SJ1PR11MB6083.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20221103155029.2451105-1-jiaqiyan@google.com>

>- HPS usually doesn’t consume CPU cores but does consume memory
>  controller cycles and memory bandwidth. SW consumes both CPU cycles
>  and memory bandwidth, but is only a problem if administrators opt into
>  the scanning after weighing the cost benefit.

Maybe there is a middle ground on platforms that support some s/w programmable
DMA engine that can detect memory errors in a way that doesn't signal a
fatal system error. Your s/w scanner can direct that DMA engine to read from
the regions of memory that you want to scan, at a frequency that is compatible
with your system load requirements and risk assessments.

If your idea gets traction, maybe structure the code so that it can either use
a CPU core to scan a block of memory, or pass requests to a platform driver that
can use a DMA engine to perform the scan.

-Tony



Thread overview: 20+ messages
2022-11-03 15:50 Jiaqi Yan
2022-11-03 16:27 ` Luck, Tony [this message]
2022-11-03 16:40   ` Nadav Amit
2022-11-08  2:24     ` Jiaqi Yan
2022-11-08 16:17       ` Luck, Tony
2022-11-09  5:04         ` HORIGUCHI NAOYA(堀口 直也)
2022-11-10 20:23           ` Jiaqi Yan
2022-11-18  1:19           ` Jiaqi Yan
2022-11-18 14:38             ` Sridharan, Vilas
2022-11-18 17:10               ` Luck, Tony
2022-11-07 16:59 ` Sridharan, Vilas
2022-11-09  5:29 ` HORIGUCHI NAOYA(堀口 直也)
2022-11-09 16:15   ` Luck, Tony
2022-11-10 20:25     ` Jiaqi Yan
2022-11-10 20:23   ` Jiaqi Yan
2022-11-30  5:31 ` David Rientjes
2022-12-13  9:27   ` HORIGUCHI NAOYA(堀口 直也)
2022-12-13 18:09     ` Luck, Tony
2022-12-13 19:03       ` Jiaqi Yan
2022-12-14 14:45         ` Yazen Ghannam
