From: Shiju Jose <shiju.jose@huawei.com>
To: Fan Ni <nifan.cxl@gmail.com>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>,
"linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"bp@alien8.de" <bp@alien8.de>,
"tony.luck@intel.com" <tony.luck@intel.com>,
"rafael@kernel.org" <rafael@kernel.org>,
"lenb@kernel.org" <lenb@kernel.org>,
"mchehab@kernel.org" <mchehab@kernel.org>,
"dan.j.williams@intel.com" <dan.j.williams@intel.com>,
"dave@stgolabs.net" <dave@stgolabs.net>,
"Jonathan Cameron" <jonathan.cameron@huawei.com>,
"dave.jiang@intel.com" <dave.jiang@intel.com>,
"alison.schofield@intel.com" <alison.schofield@intel.com>,
"vishal.l.verma@intel.com" <vishal.l.verma@intel.com>,
"ira.weiny@intel.com" <ira.weiny@intel.com>,
"david@redhat.com" <david@redhat.com>,
"Vilas.Sridharan@amd.com" <Vilas.Sridharan@amd.com>,
"leo.duran@amd.com" <leo.duran@amd.com>,
"Yazen.Ghannam@amd.com" <Yazen.Ghannam@amd.com>,
"rientjes@google.com" <rientjes@google.com>,
"jiaqiyan@google.com" <jiaqiyan@google.com>,
"Jon.Grimm@amd.com" <Jon.Grimm@amd.com>,
"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
"naoya.horiguchi@nec.com" <naoya.horiguchi@nec.com>,
"james.morse@arm.com" <james.morse@arm.com>,
"jthoughton@google.com" <jthoughton@google.com>,
"somasundaram.a@hpe.com" <somasundaram.a@hpe.com>,
"erdemaktas@google.com" <erdemaktas@google.com>,
"pgonda@google.com" <pgonda@google.com>,
"duenwen@google.com" <duenwen@google.com>,
"mike.malvestuto@intel.com" <mike.malvestuto@intel.com>,
"gthelen@google.com" <gthelen@google.com>,
"wschwartz@amperecomputing.com" <wschwartz@amperecomputing.com>,
"dferguson@amperecomputing.com" <dferguson@amperecomputing.com>,
"wbs@os.amperecomputing.com" <wbs@os.amperecomputing.com>,
"jgroves@micron.com" <jgroves@micron.com>,
"vsalve@micron.com" <vsalve@micron.com>,
tanxiaofei <tanxiaofei@huawei.com>,
"Zengtao (B)" <prime.zeng@hisilicon.com>,
Roberto Sassu <roberto.sassu@huawei.com>,
"kangkang.shen@futurewei.com" <kangkang.shen@futurewei.com>,
wanghuiqiang <wanghuiqiang@huawei.com>,
Linuxarm <linuxarm@huawei.com>
Subject: RE: [PATCH v12 02/17] EDAC: Add EDAC scrub control driver
Date: Fri, 27 Sep 2024 11:17:25 +0000 [thread overview]
Message-ID: <0a815781611a4e0480380a54875eaa65@huawei.com> (raw)
In-Reply-To: <ZvXoiiOB330Kv-2Q@fan>
>-----Original Message-----
>From: Fan Ni <nifan.cxl@gmail.com>
>Sent: 27 September 2024 00:05
>To: Shiju Jose <shiju.jose@huawei.com>
>Cc: linux-edac@vger.kernel.org; linux-cxl@vger.kernel.org; linux-
>acpi@vger.kernel.org; linux-mm@kvack.org; linux-kernel@vger.kernel.org;
>bp@alien8.de; tony.luck@intel.com; rafael@kernel.org; lenb@kernel.org;
>mchehab@kernel.org; dan.j.williams@intel.com; dave@stgolabs.net; Jonathan
>Cameron <jonathan.cameron@huawei.com>; dave.jiang@intel.com;
>alison.schofield@intel.com; vishal.l.verma@intel.com; ira.weiny@intel.com;
>david@redhat.com; Vilas.Sridharan@amd.com; leo.duran@amd.com;
>Yazen.Ghannam@amd.com; rientjes@google.com; jiaqiyan@google.com;
>Jon.Grimm@amd.com; dave.hansen@linux.intel.com;
>naoya.horiguchi@nec.com; james.morse@arm.com; jthoughton@google.com;
>somasundaram.a@hpe.com; erdemaktas@google.com; pgonda@google.com;
>duenwen@google.com; mike.malvestuto@intel.com; gthelen@google.com;
>wschwartz@amperecomputing.com; dferguson@amperecomputing.com;
>wbs@os.amperecomputing.com; nifan.cxl@gmail.com; jgroves@micron.com;
>vsalve@micron.com; tanxiaofei <tanxiaofei@huawei.com>; Zengtao (B)
><prime.zeng@hisilicon.com>; Roberto Sassu <roberto.sassu@huawei.com>;
>kangkang.shen@futurewei.com; wanghuiqiang <wanghuiqiang@huawei.com>;
>Linuxarm <linuxarm@huawei.com>
>Subject: Re: [PATCH v12 02/17] EDAC: Add EDAC scrub control driver
>
>On Wed, Sep 11, 2024 at 10:04:31AM +0100, shiju.jose@huawei.com wrote:
>> From: Shiju Jose <shiju.jose@huawei.com>
>>
>> Add a generic EDAC scrub control driver that supports configuring the
>> memory scrubbers in the system. A device with a scrub feature gets the
>> scrub descriptor from the EDAC scrub driver and registers with the EDAC
>> RAS feature driver, which adds the sysfs scrub control interface. The
>> scrub control attributes for a scrub instance are available to userspace
>> in /sys/bus/edac/devices/<dev-name>/scrub*/.
>>
>> The generic EDAC scrub driver and the common sysfs scrub interface
>> promote unambiguous access from userspace irrespective of the
>> underlying scrub devices.
>>
>> The sysfs scrub attribute nodes are present only if the client
>> driver has implemented the corresponding attribute callback functions
>> and passed the ops to the EDAC RAS feature driver during registration.
>>
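For context, this is roughly how a client driver hooks a scrub instance up to
this driver. It is only a minimal sketch: edac_dev_register() and all the my_*
names are assumed from patch 01 of this series / hypothetical, and are not part
of this patch.

    static const struct edac_scrub_ops my_scrub_ops = {
            /* Wire up only what the hardware supports; attributes whose
             * callbacks are missing are simply hidden by scrub_attr_visible().
             */
            .get_enabled_bg       = my_get_enabled_bg,
            .set_enabled_bg       = my_set_enabled_bg,
            .cycle_duration_read  = my_cycle_read,
            .cycle_duration_write = my_cycle_write,
    };

    static int my_probe(struct device *dev)
    {
            struct edac_dev_feature feat = {
                    .ft_type   = RAS_FEAT_SCRUB,
                    .instance  = 0,
                    .scrub_ops = &my_scrub_ops,
                    .ctx       = my_drv_data, /* hypothetical driver data,
                                               * handed back to the ops as
                                               * drv_data */
            };

            /* Creates /sys/bus/edac/devices/<dev-name>/scrub0/ */
            return edac_dev_register(dev, "my_mem_dev", NULL, 1, &feat);
    }
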
>> Co-developed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>> Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
>> ---
>> Documentation/ABI/testing/sysfs-edac-scrub | 69 ++++
>> drivers/edac/Makefile | 1 +
>> drivers/edac/edac_device.c | 6 +-
>> drivers/edac/edac_scrub.c | 377 +++++++++++++++++++++
>> include/linux/edac.h | 30 ++
>> 5 files changed, 482 insertions(+), 1 deletion(-)
>> create mode 100644 Documentation/ABI/testing/sysfs-edac-scrub
>> create mode 100755 drivers/edac/edac_scrub.c
>>
>> diff --git a/Documentation/ABI/testing/sysfs-edac-scrub b/Documentation/ABI/testing/sysfs-edac-scrub
>> new file mode 100644
>> index 000000000000..f465cc91423f
>> --- /dev/null
>> +++ b/Documentation/ABI/testing/sysfs-edac-scrub
>> @@ -0,0 +1,69 @@
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*
>
>Based on the code below, we can only have scrub0, scrub1, etc.
>So should we use scrubX instead of scrub* here?
>
>The same for below.
>
Thanks. Changed here and in the other patches as well.
>Fan
>
Thanks,
Shiju
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + The sysfs EDAC bus devices /<dev-name>/scrub* subdirectory
>> + belongs to an instance of the memory scrub control feature,
>> + where the <dev-name> directory corresponds to a device/memory
>> + region registered with the EDAC scrub driver and thus
>> + registered with the generic EDAC RAS driver.
>> + The sysfs scrub attr nodes are present only if the
>> + client driver has implemented the corresponding attr
>> + callback functions and passed the ops to the EDAC RAS
>> + feature driver during registration.
>> +
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*/addr_range_base
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + (RW) The base of the address range of the memory region
>> + to be scrubbed (on-demand scrubbing).
>> +
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*/addr_range_size
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + (RW) The size of the address range of the memory region
>> + to be scrubbed (on-demand scrubbing).
>> +
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*/enable_background
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + (RW) Start/Stop background (patrol) scrubbing if supported.
>> +
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*/enable_on_demand
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + (RW) Start/Stop on-demand scrubbing of the memory region
>> + if supported.
>> +
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*/min_cycle_duration
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + (RO) Minimum scrub cycle duration in seconds supported
>> + by the memory scrubber.
>> +
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*/max_cycle_duration
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + (RO) Maximum scrub cycle duration in seconds supported
>> + by the memory scrubber.
>> +
>> +What: /sys/bus/edac/devices/<dev-name>/scrub*/current_cycle_duration
>> +Date: Oct 2024
>> +KernelVersion: 6.12
>> +Contact: linux-edac@vger.kernel.org
>> +Description:
>> + (RW) The current scrub cycle duration in seconds. It must be
>> + within the range supported by the memory scrubber.
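
One usage note that may help when reading the ABI above: userspace is expected
to read min_cycle_duration and max_cycle_duration first, write a value within
that range to current_cycle_duration, and then write 1 to enable_background to
start patrol scrubbing, e.g. "echo 36000 > /sys/bus/edac/devices/<dev-name>/scrub0/current_cycle_duration"
followed by "echo 1 > /sys/bus/edac/devices/<dev-name>/scrub0/enable_background"
(paths shown for instance 0; the actual <dev-name> depends on the registered
device).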
>> diff --git a/drivers/edac/Makefile b/drivers/edac/Makefile
>> index 4edfb83ffbee..fbf0e39ec678 100644
>> --- a/drivers/edac/Makefile
>> +++ b/drivers/edac/Makefile
>> @@ -10,6 +10,7 @@ obj-$(CONFIG_EDAC) := edac_core.o
>>
>> edac_core-y := edac_mc.o edac_device.o edac_mc_sysfs.o
>> edac_core-y += edac_module.o edac_device_sysfs.o wq.o
>> +edac_core-y += edac_scrub.o
>>
>> edac_core-$(CONFIG_EDAC_DEBUG) += debugfs.o
>>
>> diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
>> index e4a5d010ea2d..6381896b6424 100644
>> --- a/drivers/edac/edac_device.c
>> +++ b/drivers/edac/edac_device.c
>> @@ -608,12 +608,16 @@ static int edac_dev_feat_init(struct device *parent,
>> const struct edac_dev_feature *ras_feat,
>> const struct attribute_group **attr_groups)
>> {
>> - int num;
>> + int num, ret;
>>
>> switch (ras_feat->ft_type) {
>> case RAS_FEAT_SCRUB:
>> dev_data->scrub_ops = ras_feat->scrub_ops;
>> dev_data->private = ras_feat->ctx;
>> + ret = edac_scrub_get_desc(parent, attr_groups,
>> + ras_feat->instance);
>> + if (ret)
>> + return ret;
>> return 1;
>> case RAS_FEAT_ECS:
>> num = ras_feat->ecs_info.num_media_frus;
>> diff --git a/drivers/edac/edac_scrub.c b/drivers/edac/edac_scrub.c
>> new file mode 100755
>> index 000000000000..3f8f37629acf
>> --- /dev/null
>> +++ b/drivers/edac/edac_scrub.c
>> @@ -0,0 +1,377 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * The generic EDAC scrub driver supports controlling the memory
>> + * scrubbers in the system. The common sysfs scrub interface
>> + * promotes unambiguous access from userspace.
>> + *
>> + * Copyright (c) 2024 HiSilicon Limited.
>> + */
>> +
>> +#define pr_fmt(fmt) "EDAC SCRUB: " fmt
>> +
>> +#include <linux/edac.h>
>> +
>> +enum edac_scrub_attributes {
>> + SCRUB_ADDR_RANGE_BASE,
>> + SCRUB_ADDR_RANGE_SIZE,
>> + SCRUB_ENABLE_BACKGROUND,
>> + SCRUB_ENABLE_ON_DEMAND,
>> + SCRUB_MIN_CYCLE_DURATION,
>> + SCRUB_MAX_CYCLE_DURATION,
>> + SCRUB_CURRENT_CYCLE_DURATION,
>> + SCRUB_MAX_ATTRS
>> +};
>> +
>> +struct edac_scrub_dev_attr {
>> + struct device_attribute dev_attr;
>> + u8 instance;
>> +};
>> +
>> +struct edac_scrub_context {
>> + char name[EDAC_FEAT_NAME_LEN];
>> + struct edac_scrub_dev_attr scrub_dev_attr[SCRUB_MAX_ATTRS];
>> + struct attribute *scrub_attrs[SCRUB_MAX_ATTRS + 1];
>> + struct attribute_group group;
>> +};
>> +
>> +#define to_scrub_dev_attr(_dev_attr) \
>> + container_of(_dev_attr, struct edac_scrub_dev_attr, dev_attr)
>> +
>> +static ssize_t addr_range_base_show(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + char *buf)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + u64 base, size;
>> + int ret;
>> +
>> + ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
>> + if (ret)
>> + return ret;
>> +
>> + return sysfs_emit(buf, "0x%llx\n", base);
>> +}
>> +
>> +static ssize_t addr_range_size_show(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + char *buf)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + u64 base, size;
>> + int ret;
>> +
>> + ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
>> + if (ret)
>> + return ret;
>> +
>> + return sysfs_emit(buf, "0x%llx\n", size);
>> +}
>> +
>> +static ssize_t addr_range_base_store(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + const char *buf, size_t len)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + u64 base, size;
>> + int ret;
>> +
>> + ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
>> + if (ret)
>> + return ret;
>> +
>> + ret = kstrtou64(buf, 0, &base);
>> + if (ret < 0)
>> + return ret;
>> +
>> + ret = ops->write_range(ras_feat_dev->parent, ctx->scrub[inst].private, base, size);
>> + if (ret)
>> + return ret;
>> +
>> + return len;
>> +}
>> +
>> +static ssize_t addr_range_size_store(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + const char *buf,
>> + size_t len)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + u64 base, size;
>> + int ret;
>> +
>> + ret = ops->read_range(ras_feat_dev->parent, ctx->scrub[inst].private, &base, &size);
>> + if (ret)
>> + return ret;
>> +
>> + ret = kstrtou64(buf, 0, &size);
>> + if (ret < 0)
>> + return ret;
>> +
>> + ret = ops->write_range(ras_feat_dev->parent, ctx->scrub[inst].private, base, size);
>> + if (ret)
>> + return ret;
>> +
>> + return len;
>> +}
>> +
>> +static ssize_t enable_background_store(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + const char *buf, size_t len)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + bool enable;
>> + int ret;
>> +
>> + ret = kstrtobool(buf, &enable);
>> + if (ret < 0)
>> + return ret;
>> +
>> + ret = ops->set_enabled_bg(ras_feat_dev->parent, ctx->scrub[inst].private, enable);
>> + if (ret)
>> + return ret;
>> +
>> + return len;
>> +}
>> +
>> +static ssize_t enable_background_show(struct device *ras_feat_dev,
>> + struct device_attribute *attr, char *buf)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + bool enable;
>> + int ret;
>> +
>> + ret = ops->get_enabled_bg(ras_feat_dev->parent, ctx->scrub[inst].private, &enable);
>> + if (ret)
>> + return ret;
>> +
>> + return sysfs_emit(buf, "%d\n", enable);
>> +}
>> +
>> +static ssize_t enable_on_demand_show(struct device *ras_feat_dev,
>> + struct device_attribute *attr, char *buf)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + bool enable;
>> + int ret;
>> +
>> + ret = ops->get_enabled_od(ras_feat_dev->parent, ctx->scrub[inst].private, &enable);
>> + if (ret)
>> + return ret;
>> +
>> + return sysfs_emit(buf, "%d\n", enable);
>> +}
>> +
>> +static ssize_t enable_on_demand_store(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + const char *buf, size_t len)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + bool enable;
>> + int ret;
>> +
>> + ret = kstrtobool(buf, &enable);
>> + if (ret < 0)
>> + return ret;
>> +
>> + ret = ops->set_enabled_od(ras_feat_dev->parent, ctx->scrub[inst].private, enable);
>> + if (ret)
>> + return ret;
>> +
>> + return len;
>> +}
>> +
>> +static ssize_t min_cycle_duration_show(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + char *buf)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + u32 val;
>> + int ret;
>> +
>> + ret = ops->min_cycle_read(ras_feat_dev->parent, ctx->scrub[inst].private, &val);
>> + if (ret)
>> + return ret;
>> +
>> + return sysfs_emit(buf, "%u\n", val);
>> +}
>> +
>> +static ssize_t max_cycle_duration_show(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + char *buf)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + u32 val;
>> + int ret;
>> +
>> + ret = ops->max_cycle_read(ras_feat_dev->parent, ctx->scrub[inst].private, &val);
>> + if (ret)
>> + return ret;
>> +
>> + return sysfs_emit(buf, "%u\n", val);
>> +}
>> +
>> +static ssize_t current_cycle_duration_show(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + char *buf)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + u32 val;
>> + int ret;
>> +
>> + ret = ops->cycle_duration_read(ras_feat_dev->parent, ctx->scrub[inst].private, &val);
>> + if (ret)
>> + return ret;
>> +
>> + return sysfs_emit(buf, "%u\n", val);
>> +}
>> +
>> +static ssize_t current_cycle_duration_store(struct device *ras_feat_dev,
>> + struct device_attribute *attr,
>> + const char *buf, size_t len)
>> +{
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> + long val;
>> + int ret;
>> +
>> + ret = kstrtol(buf, 0, &val);
>> + if (ret < 0)
>> + return ret;
>> +
>> + ret = ops->cycle_duration_write(ras_feat_dev->parent, ctx->scrub[inst].private, val);
>> + if (ret)
>> + return ret;
>> +
>> + return len;
>> +}
>> +
>> +static umode_t scrub_attr_visible(struct kobject *kobj,
>> + struct attribute *a, int attr_id)
>> +{
>> + struct device *ras_feat_dev = kobj_to_dev(kobj);
>> + struct device_attribute *dev_attr =
>> + container_of(a, struct device_attribute, attr);
>> + u8 inst = ((struct edac_scrub_dev_attr *)to_scrub_dev_attr(dev_attr))->instance;
>> + struct edac_dev_feat_ctx *ctx = dev_get_drvdata(ras_feat_dev);
>> + const struct edac_scrub_ops *ops = ctx->scrub[inst].scrub_ops;
>> +
>> + switch (attr_id) {
>> + case SCRUB_ADDR_RANGE_BASE:
>> + case SCRUB_ADDR_RANGE_SIZE:
>> + if (ops->read_range && ops->write_range)
>> + return a->mode;
>> + if (ops->read_range)
>> + return 0444;
>> + return 0;
>> + case SCRUB_ENABLE_BACKGROUND:
>> + if (ops->get_enabled_bg && ops->set_enabled_bg)
>> + return a->mode;
>> + if (ops->get_enabled_bg)
>> + return 0444;
>> + return 0;
>> + case SCRUB_ENABLE_ON_DEMAND:
>> + if (ops->get_enabled_od && ops->set_enabled_od)
>> + return a->mode;
>> + if (ops->get_enabled_od)
>> + return 0444;
>> + return 0;
>> + case SCRUB_MIN_CYCLE_DURATION:
>> + return ops->min_cycle_read ? a->mode : 0;
>> + case SCRUB_MAX_CYCLE_DURATION:
>> + return ops->max_cycle_read ? a->mode : 0;
>> + case SCRUB_CURRENT_CYCLE_DURATION:
>> + if (ops->cycle_duration_read && ops->cycle_duration_write)
>> + return a->mode;
>> + if (ops->cycle_duration_read)
>> + return 0444;
>> + return 0;
>> + default:
>> + return 0;
>> + }
>> +}
>> +
>> +#define EDAC_SCRUB_ATTR_RO(_name, _instance) \
>> + ((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_RO(_name), \
>> + .instance = _instance })
>> +
>> +#define EDAC_SCRUB_ATTR_WO(_name, _instance) \
>> + ((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_WO(_name), \
>> + .instance = _instance })
>> +
>> +#define EDAC_SCRUB_ATTR_RW(_name, _instance) \
>> + ((struct edac_scrub_dev_attr) { .dev_attr = __ATTR_RW(_name), \
>> + .instance = _instance })
>> +
>> +static int scrub_create_desc(struct device *scrub_dev,
>> + const struct attribute_group **attr_groups,
>> + u8 instance)
>> +{
>> + struct edac_scrub_context *scrub_ctx;
>> + struct attribute_group *group;
>> + int i;
>> +
>> + scrub_ctx = devm_kzalloc(scrub_dev, sizeof(*scrub_ctx), GFP_KERNEL);
>> + if (!scrub_ctx)
>> + return -ENOMEM;
>> +
>> + group = &scrub_ctx->group;
>> + scrub_ctx->scrub_dev_attr[0] = EDAC_SCRUB_ATTR_RW(addr_range_base, instance);
>> + scrub_ctx->scrub_dev_attr[1] = EDAC_SCRUB_ATTR_RW(addr_range_size, instance);
>> + scrub_ctx->scrub_dev_attr[2] = EDAC_SCRUB_ATTR_RW(enable_background, instance);
>> + scrub_ctx->scrub_dev_attr[3] = EDAC_SCRUB_ATTR_RW(enable_on_demand, instance);
>> + scrub_ctx->scrub_dev_attr[4] = EDAC_SCRUB_ATTR_RO(min_cycle_duration, instance);
>> + scrub_ctx->scrub_dev_attr[5] = EDAC_SCRUB_ATTR_RO(max_cycle_duration, instance);
>> + scrub_ctx->scrub_dev_attr[6] = EDAC_SCRUB_ATTR_RW(current_cycle_duration, instance);
>> + for (i = 0; i < SCRUB_MAX_ATTRS; i++)
>> + scrub_ctx->scrub_attrs[i] = &scrub_ctx->scrub_dev_attr[i].dev_attr.attr;
>> +
>> + sprintf(scrub_ctx->name, "%s%d", "scrub", instance);
>> + group->name = scrub_ctx->name;
>> + group->attrs = scrub_ctx->scrub_attrs;
>> + group->is_visible = scrub_attr_visible;
>> +
>> + attr_groups[0] = group;
>> +
>> + return 0;
>> +}
>> +
>> +/**
>> + * edac_scrub_get_desc - get EDAC scrub descriptors
>> + * @scrub_dev: client device, with scrub support
>> + * @attr_groups: pointer to attribute group container
>> + * @instance: device's scrub instance number.
>> + *
>> + * Returns 0 on success, error otherwise.
>> + */
>> +int edac_scrub_get_desc(struct device *scrub_dev,
>> + const struct attribute_group **attr_groups,
>> + u8 instance)
>> +{
>> + if (!scrub_dev || !attr_groups)
>> + return -EINVAL;
>> +
>> + return scrub_create_desc(scrub_dev, attr_groups, instance);
>> +}
>> diff --git a/include/linux/edac.h b/include/linux/edac.h
>> index b337254cf5b8..aae8262b9863 100644
>> --- a/include/linux/edac.h
>> +++ b/include/linux/edac.h
>> @@ -674,6 +674,36 @@ enum edac_dev_feat {
>> RAS_FEAT_MAX
>> };
>>
>> +/**
>> + * struct edac_scrub_ops - scrub device operations (all elements optional)
>> + * @read_range: read the base and size of the scrubbing range.
>> + * @write_range: set the base and size of the scrubbing range.
>> + * @get_enabled_bg: check if currently performing background scrub.
>> + * @set_enabled_bg: start or stop a bg-scrub.
>> + * @get_enabled_od: check if currently performing on-demand scrub.
>> + * @set_enabled_od: start or stop an on-demand scrub.
>> + * @min_cycle_read: minimum supported scrub cycle duration in seconds.
>> + * @max_cycle_read: maximum supported scrub cycle duration in seconds.
>> + * @cycle_duration_read: get the scrub cycle duration in seconds.
>> + * @cycle_duration_write: set the scrub cycle duration in seconds.
>> + */
>> +struct edac_scrub_ops {
>> + int (*read_range)(struct device *dev, void *drv_data, u64 *base, u64 *size);
>> + int (*write_range)(struct device *dev, void *drv_data, u64 base, u64 size);
>> + int (*get_enabled_bg)(struct device *dev, void *drv_data, bool *enable);
>> + int (*set_enabled_bg)(struct device *dev, void *drv_data, bool enable);
>> + int (*get_enabled_od)(struct device *dev, void *drv_data, bool *enable);
>> + int (*set_enabled_od)(struct device *dev, void *drv_data, bool enable);
>> + int (*min_cycle_read)(struct device *dev, void *drv_data, u32 *min);
>> + int (*max_cycle_read)(struct device *dev, void *drv_data, u32 *max);
>> + int (*cycle_duration_read)(struct device *dev, void *drv_data, u32 *cycle);
>> + int (*cycle_duration_write)(struct device *dev, void *drv_data, u32 cycle);
>> +};
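
On the client side, the drv_data argument each of these callbacks receives is
the ctx pointer supplied at registration (it is stored as ->private by
edac_dev_feat_init() in the edac_device.c hunk above), so a callback can be as
small as the following sketch (my_* names hypothetical, not part of this
patch):

    static int my_get_enabled_bg(struct device *dev, void *drv_data, bool *enable)
    {
            struct my_scrub_info *info = drv_data; /* == the .ctx passed at registration */

            *enable = info->bg_enabled;            /* hypothetical cached HW state */
            return 0;
    }
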
>> +
>> +int edac_scrub_get_desc(struct device *scrub_dev,
>> + const struct attribute_group **attr_groups,
>> + u8 instance);
>> +
>> struct edac_ecs_ex_info {
>> u16 num_media_frus;
>> };
>> --
>> 2.34.1
>>
>
>--
>Fan Ni
Thread overview: 39+ messages
2024-09-11 9:04 [PATCH v12 00/17] EDAC: Scrub: introduce generic EDAC RAS control feature driver + CXL/ACPI-RAS2 drivers shiju.jose
2024-09-11 9:04 ` [PATCH v12 01/17] EDAC: Add support for EDAC device features control shiju.jose
2024-09-13 16:40 ` Borislav Petkov
2024-09-16 9:21 ` Shiju Jose
2024-09-16 10:50 ` Jonathan Cameron
2024-09-16 16:16 ` Shiju Jose
2024-09-11 9:04 ` [PATCH v12 02/17] EDAC: Add EDAC scrub control driver shiju.jose
2024-09-13 17:25 ` Borislav Petkov
2024-09-16 9:22 ` Shiju Jose
2024-09-26 23:04 ` Fan Ni
2024-09-27 11:17 ` Shiju Jose [this message]
2024-09-11 9:04 ` [PATCH v12 03/17] EDAC: Add EDAC ECS " shiju.jose
2024-09-27 16:28 ` Fan Ni
2024-09-11 9:04 ` [PATCH v12 04/17] cxl: Move mailbox related bits to the same context shiju.jose
2024-09-11 17:20 ` Dave Jiang
2024-09-12 9:42 ` Shiju Jose
2024-09-11 9:04 ` [PATCH v12 05/17] cxl: Fix comment regarding cxl_query_cmd() return data shiju.jose
2024-09-11 9:04 ` [PATCH v12 06/17] cxl: Refactor user ioctl command path from mds to mailbox shiju.jose
2024-09-11 9:04 ` [PATCH v12 07/17] cxl: Add Get Supported Features command for kernel usage shiju.jose
2024-09-23 23:33 ` Dave Jiang
2024-09-25 11:18 ` Shiju Jose
2024-09-11 9:04 ` [PATCH v12 08/17] cxl/mbox: Add GET_FEATURE mailbox command shiju.jose
2024-09-30 16:17 ` Fan Ni
2024-09-11 9:04 ` [PATCH v12 09/17] cxl/mbox: Add SET_FEATURE " shiju.jose
2024-09-30 16:58 ` Fan Ni
2024-09-11 9:04 ` [PATCH v12 10/17] cxl/memfeature: Add CXL memory device patrol scrub control feature shiju.jose
2024-09-30 17:38 ` Fan Ni
2024-10-01 8:38 ` Shiju Jose
2024-10-01 19:47 ` Fan Ni
2024-09-11 9:04 ` [PATCH v12 11/17] cxl/memfeature: Add CXL memory device ECS " shiju.jose
2024-09-30 18:12 ` Fan Ni
2024-10-01 8:39 ` Shiju Jose
2024-09-11 9:04 ` [PATCH v12 12/17] platform: Add __free() based cleanup function for platform_device_put shiju.jose
2024-09-11 9:04 ` [PATCH v12 13/17] ACPI:RAS2: Add ACPI RAS2 driver shiju.jose
2024-10-01 15:47 ` Fan Ni
2024-09-11 9:04 ` [PATCH v12 14/17] ras: mem: Add memory " shiju.jose
2024-09-11 9:04 ` [PATCH v12 15/17] EDAC: Add EDAC PPR control driver shiju.jose
2024-09-11 9:04 ` [PATCH v12 16/17] cxl/mbox: Add support for PERFORM_MAINTENANCE mailbox command shiju.jose
2024-09-11 9:04 ` [PATCH v12 17/17] cxl/memfeature: Add CXL memory device PPR control feature shiju.jose