From: Dave Jiang <dave.jiang@intel.com>
Date: Tue, 29 Oct 2024 09:31:51 -0700
Subject: Re: [PATCH v14 07/14] cxl/memfeature: Add CXL memory device patrol scrub control feature
To: shiju.jose@huawei.com, linux-edac@vger.kernel.org, linux-cxl@vger.kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: bp@alien8.de, tony.luck@intel.com, rafael@kernel.org, lenb@kernel.org, mchehab@kernel.org, dan.j.williams@intel.com, dave@stgolabs.net, jonathan.cameron@huawei.com, gregkh@linuxfoundation.org, sudeep.holla@arm.com, jassisinghbrar@gmail.com, alison.schofield@intel.com, vishal.l.verma@intel.com, ira.weiny@intel.com, david@redhat.com, Vilas.Sridharan@amd.com, leo.duran@amd.com, Yazen.Ghannam@amd.com, rientjes@google.com, jiaqiyan@google.com, Jon.Grimm@amd.com, dave.hansen@linux.intel.com, naoya.horiguchi@nec.com, james.morse@arm.com, jthoughton@google.com, somasundaram.a@hpe.com, erdemaktas@google.com, pgonda@google.com, duenwen@google.com, gthelen@google.com, wschwartz@amperecomputing.com, dferguson@amperecomputing.com, wbs@os.amperecomputing.com, nifan.cxl@gmail.com, tanxiaofei@huawei.com, prime.zeng@hisilicon.com, roberto.sassu@huawei.com, kangkang.shen@futurewei.com, wanghuiqiang@huawei.com, linuxarm@huawei.com
Message-ID: <3a007a70-136b-4a45-8dd2-d33725ea96bc@intel.com>
In-Reply-To: <20241025171356.1377-8-shiju.jose@huawei.com>
References: <20241025171356.1377-1-shiju.jose@huawei.com> <20241025171356.1377-8-shiju.jose@huawei.com>

On 10/25/24 10:13 AM, shiju.jose@huawei.com wrote:
> From: Shiju Jose
>
> CXL spec 3.1 section 8.2.9.9.11.1 describes the device patrol scrub control
> feature. The device patrol scrub proactively locates and corrects errors
> on a regular cycle.
>
> Allow specifying the number of hours within which the patrol scrub must be
> completed, subject to minimum and maximum limits reported by the device.
> Also allow disabling scrub, allowing error rates to be traded off against
> performance.
>
> Add support for patrol scrub control on CXL memory devices.
> Register with the EDAC device driver, which retrieves the scrub attribute
> descriptors from EDAC scrub and exposes the sysfs scrub control attributes
> to userspace. For example, scrub control for the CXL memory device
> "cxl_mem0" is exposed in /sys/bus/edac/devices/cxl_mem0/scrubX/.
>
> Additionally, add support for region-based CXL memory patrol scrub control.
> CXL memory regions may be interleaved across one or more CXL memory
> devices. For example, region-based scrub control for "cxl_region1" is
> exposed in /sys/bus/edac/devices/cxl_region1/scrubX/.
>
> Co-developed-by: Jonathan Cameron
> Signed-off-by: Jonathan Cameron
> Signed-off-by: Shiju Jose
> ---
>  Documentation/edac/edac-scrub.rst |  74 ++++++
>  drivers/cxl/Kconfig               |  18 ++
>  drivers/cxl/core/Makefile         |   1 +
>  drivers/cxl/core/memfeature.c     | 381 ++++++++++++++++++++++++++++++
>  drivers/cxl/core/region.c         |   6 +
>  drivers/cxl/cxlmem.h              |   7 +
>  drivers/cxl/mem.c                 |   4 +
>  7 files changed, 491 insertions(+)
>  create mode 100644 Documentation/edac/edac-scrub.rst
>  create mode 100644 drivers/cxl/core/memfeature.c
>
> diff --git a/Documentation/edac/edac-scrub.rst b/Documentation/edac/edac-scrub.rst
> new file mode 100644
> index 000000000000..4aad4974b208
> --- /dev/null
> +++ b/Documentation/edac/edac-scrub.rst
> @@ -0,0 +1,74 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +===================
> +EDAC Scrub control
> +===================
> +
> +Copyright (c) 2024 HiSilicon Limited.
> +
> +:Author: Shiju Jose
> +:License: The GNU Free Documentation License, Version 1.2
> +          (dual licensed under the GPL v2)
> +:Original Reviewers:
> +
> +- Written for: 6.13
> +- Updated for:
> +
> +Introduction
> +------------
> +The EDAC enhancement for RAS features exposes interfaces for controlling
> +the memory scrubbers in the system. The scrub device drivers in the
> +system register with the EDAC scrub. The driver exposes the
> +scrub controls to the user via sysfs.
> +
> +The File System
> +---------------
> +
> +The control attributes of a registered scrubber instance can be
> +accessed in /sys/bus/edac/devices//scrub*/
> +
> +sysfs
> +-----
> +
> +Sysfs files are documented in
> +`Documentation/ABI/testing/sysfs-edac-scrub-control`.
> +
> +Example
> +-------
> +
> +The usage takes the form shown in this example::
> +
> +1. CXL memory device patrol scrubber
> +1.1. device based
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/min_cycle_duration
> +3600
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/max_cycle_duration
> +918000
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/current_cycle_duration
> +43200
> +root@localhost:~# echo 54000 > /sys/bus/edac/devices/cxl_mem0/scrub0/current_cycle_duration
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/current_cycle_duration
> +54000
> +root@localhost:~# echo 1 > /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
> +1
> +root@localhost:~# echo 0 > /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_mem0/scrub0/enable_background
> +0
> +
> +1.2. region based
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/min_cycle_duration
> +3600
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/max_cycle_duration
> +918000
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/current_cycle_duration
> +43200
> +root@localhost:~# echo 54000 > /sys/bus/edac/devices/cxl_region0/scrub0/current_cycle_duration
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/current_cycle_duration
> +54000
> +root@localhost:~# echo 1 > /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
> +1
> +root@localhost:~# echo 0 > /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
> +root@localhost:~# cat /sys/bus/edac/devices/cxl_region0/scrub0/enable_background
> +0
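
Not an issue with the patch, just a note since the example above is shell-only: the same
ABI is easy to drive from C if anybody wants to script this. A rough userspace sketch
(paths and values mirror the example above; write_attr() is a made-up helper, error
handling kept minimal, not part of this patch):

#include <stdio.h>
#include <stdlib.h>

static int write_attr(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	if (fputs(val, f) == EOF) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}

int main(void)
{
	const char *base = "/sys/bus/edac/devices/cxl_mem0/scrub0";
	char path[256];

	/* set a 15 hour (54000 second) cycle, mirroring the example above */
	snprintf(path, sizeof(path), "%s/current_cycle_duration", base);
	if (write_attr(path, "54000"))
		return EXIT_FAILURE;

	/* turn background patrol scrub on */
	snprintf(path, sizeof(path), "%s/enable_background", base);
	if (write_attr(path, "1"))
		return EXIT_FAILURE;

	return EXIT_SUCCESS;
}

The scrub0 instance is hardcoded here only to keep the sketch short; real tooling would
enumerate the scrub*/ directories.
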
> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> index 29c192f20082..6d79fb3e772e 100644
> --- a/drivers/cxl/Kconfig
> +++ b/drivers/cxl/Kconfig
> @@ -145,4 +145,22 @@ config CXL_REGION_INVALIDATION_TEST
>  	  If unsure, or if this kernel is meant for production environments,
>  	  say N.
>
> +config CXL_RAS_FEAT
> +	tristate "CXL: Memory RAS features"
> +	depends on CXL_PCI
> +	depends on CXL_MEM
> +	depends on EDAC
> +	help
> +	  The CXL memory RAS feature control is optional and allows the host to
> +	  control the RAS feature configuration of CXL Type 3 devices.
> +
> +	  It registers with the EDAC device subsystem to expose control
> +	  attributes of the CXL memory device's RAS features to the user.
> +	  It provides interface functions to support configuring the CXL
> +	  memory device's RAS features.
> +
> +	  Say 'y/m/n' to enable/disable control of the CXL.mem device's RAS features.
> +	  See section 8.2.9.9.11 of the CXL 3.1 specification for detailed
> +	  information on CXL memory device features.
> +
>  endif
> diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
> index 9259bcc6773c..2a3c7197bc23 100644
> --- a/drivers/cxl/core/Makefile
> +++ b/drivers/cxl/core/Makefile
> @@ -16,3 +16,4 @@ cxl_core-y += pmu.o
>  cxl_core-y += cdat.o
>  cxl_core-$(CONFIG_TRACING) += trace.o
>  cxl_core-$(CONFIG_CXL_REGION) += region.o
> +cxl_core-$(CONFIG_CXL_RAS_FEAT) += memfeature.o
> diff --git a/drivers/cxl/core/memfeature.c b/drivers/cxl/core/memfeature.c
> new file mode 100644
> index 000000000000..8fff00f62f8c
> --- /dev/null
> +++ b/drivers/cxl/core/memfeature.c
> @@ -0,0 +1,381 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * CXL memory RAS feature driver.
> + *
> + * Copyright (c) 2024 HiSilicon Limited.
> + * > + * - Supports functions to configure RAS features of the > + * CXL memory devices. > + * - Registers with the EDAC device subsystem driver to expose > + * the features sysfs attributes to the user for configuring > + * CXL memory RAS feature. > + */ > + > +#define pr_fmt(fmt) "CXL MEM FEAT: " fmt > + > +#include > +#include > +#include > +#include > +#include > + > +#define CXL_DEV_NUM_RAS_FEATURES 1 > +#define CXL_DEV_HOUR_IN_SECS 3600 > + > +#define CXL_SCRUB_NAME_LEN 128 > + > +/* CXL memory patrol scrub control definitions */ > +static const uuid_t cxl_patrol_scrub_uuid = > + UUID_INIT(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33, 0x75, 0x77, 0x4e, 0x06, 0xdb, 0x8a); > + > +/* CXL memory patrol scrub control functions */ > +struct cxl_patrol_scrub_context { > + u8 instance; > + u16 get_feat_size; > + u16 set_feat_size; > + u8 get_version; > + u8 set_version; > + u16 set_effects; > + struct cxl_memdev *cxlmd; > + struct cxl_region *cxlr; > +}; > + > +/** > + * struct cxl_memdev_ps_params - CXL memory patrol scrub parameter data structure. > + * @enable: [IN & OUT] enable(1)/disable(0) patrol scrub. > + * @scrub_cycle_changeable: [OUT] scrub cycle attribute of patrol scrub is changeable. > + * @scrub_cycle_hrs: [IN] Requested patrol scrub cycle in hours. > + * [OUT] Current patrol scrub cycle in hours. > + * @min_scrub_cycle_hrs:[OUT] minimum patrol scrub cycle in hours supported. > + */ > +struct cxl_memdev_ps_params { > + bool enable; > + bool scrub_cycle_changeable; > + u16 scrub_cycle_hrs; > + u16 min_scrub_cycle_hrs; > +}; > + > +enum cxl_scrub_param { > + CXL_PS_PARAM_ENABLE, > + CXL_PS_PARAM_SCRUB_CYCLE, > +}; > + > +#define CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK BIT(0) > +#define CXL_MEMDEV_PS_SCRUB_CYCLE_REALTIME_REPORT_CAP_MASK BIT(1) > +#define CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK GENMASK(7, 0) > +#define CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK GENMASK(15, 8) > +#define CXL_MEMDEV_PS_FLAG_ENABLED_MASK BIT(0) > + > +struct cxl_memdev_ps_rd_attrs { > + u8 scrub_cycle_cap; > + __le16 scrub_cycle_hrs; > + u8 scrub_flags; > +} __packed; > + > +struct cxl_memdev_ps_wr_attrs { > + u8 scrub_cycle_hrs; > + u8 scrub_flags; > +} __packed; > + > +static int cxl_mem_ps_get_attrs(struct cxl_memdev_state *mds, > + struct cxl_memdev_ps_params *params) > +{ > + size_t rd_data_size = sizeof(struct cxl_memdev_ps_rd_attrs); > + size_t data_size; > + struct cxl_memdev_ps_rd_attrs *rd_attrs __free(kfree) = > + kmalloc(rd_data_size, GFP_KERNEL); > + if (!rd_attrs) > + return -ENOMEM; > + > + data_size = cxl_get_feature(mds, cxl_patrol_scrub_uuid, > + CXL_GET_FEAT_SEL_CURRENT_VALUE, > + rd_attrs, rd_data_size); > + if (!data_size) > + return -EIO; > + > + params->scrub_cycle_changeable = FIELD_GET(CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK, > + rd_attrs->scrub_cycle_cap); > + params->enable = FIELD_GET(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, > + rd_attrs->scrub_flags); > + params->scrub_cycle_hrs = FIELD_GET(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, > + rd_attrs->scrub_cycle_hrs); > + params->min_scrub_cycle_hrs = FIELD_GET(CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK, > + rd_attrs->scrub_cycle_hrs); > + > + return 0; > +} > + > +static int cxl_ps_get_attrs(struct device *dev, void *drv_data, Would a union be better than a void *drv_data for all the places this is used as a parameter? How many variations of this are there? 
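
Something like the below is roughly what I have in mind -- a sketch only, the type and
field names are made up, and I'm not asking for this exact shape:

enum cxl_feat_ctx_type {
	CXL_FEAT_CTX_PATROL_SCRUB,
	/* other RAS features would add entries here */
};

struct cxl_feat_ctx {
	enum cxl_feat_ctx_type type;
	union {
		struct cxl_patrol_scrub_context ps;
		/* other per-feature contexts */
	};
};

static int cxl_ps_get_attrs(struct device *dev, struct cxl_feat_ctx *ctx,
			    struct cxl_memdev_ps_params *params)
{
	if (ctx->type != CXL_FEAT_CTX_PATROL_SCRUB)
		return -EINVAL;

	/* ... operate on &ctx->ps exactly as the current code does ... */
	return 0;
}

That would let the compiler (and a cheap runtime check) catch a scrub callback being
handed some other feature's context, at the cost of a little boilerplate.
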
DJ > + struct cxl_memdev_ps_params *params) > +{ > + struct cxl_patrol_scrub_context *cxl_ps_ctx = drv_data; > + struct cxl_memdev *cxlmd; > + struct cxl_dev_state *cxlds; > + struct cxl_memdev_state *mds; > + u16 min_scrub_cycle = 0; > + int i, ret; > + > + if (cxl_ps_ctx->cxlr) { > + struct cxl_region *cxlr = cxl_ps_ctx->cxlr; > + struct cxl_region_params *p = &cxlr->params; > + > + for (i = p->interleave_ways - 1; i >= 0; i--) { > + struct cxl_endpoint_decoder *cxled = p->targets[i]; > + > + cxlmd = cxled_to_memdev(cxled); > + cxlds = cxlmd->cxlds; > + mds = to_cxl_memdev_state(cxlds); > + ret = cxl_mem_ps_get_attrs(mds, params); > + if (ret) > + return ret; > + > + if (params->min_scrub_cycle_hrs > min_scrub_cycle) > + min_scrub_cycle = params->min_scrub_cycle_hrs; > + } > + params->min_scrub_cycle_hrs = min_scrub_cycle; > + return 0; > + } > + cxlmd = cxl_ps_ctx->cxlmd; > + cxlds = cxlmd->cxlds; > + mds = to_cxl_memdev_state(cxlds); > + > + return cxl_mem_ps_get_attrs(mds, params); > +} > + > +static int cxl_mem_ps_set_attrs(struct device *dev, void *drv_data, > + struct cxl_memdev_state *mds, > + struct cxl_memdev_ps_params *params, > + enum cxl_scrub_param param_type) > +{ > + struct cxl_patrol_scrub_context *cxl_ps_ctx = drv_data; > + struct cxl_memdev_ps_wr_attrs wr_attrs; > + struct cxl_memdev_ps_params rd_params; > + int ret; > + > + ret = cxl_mem_ps_get_attrs(mds, &rd_params); > + if (ret) { > + dev_err(dev, "Get cxlmemdev patrol scrub params failed ret=%d\n", > + ret); > + return ret; > + } > + > + switch (param_type) { > + case CXL_PS_PARAM_ENABLE: > + wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, > + params->enable); > + wr_attrs.scrub_cycle_hrs = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, > + rd_params.scrub_cycle_hrs); > + break; > + case CXL_PS_PARAM_SCRUB_CYCLE: > + if (params->scrub_cycle_hrs < rd_params.min_scrub_cycle_hrs) { > + dev_err(dev, "Invalid CXL patrol scrub cycle(%d) to set\n", > + params->scrub_cycle_hrs); > + dev_err(dev, "Minimum supported CXL patrol scrub cycle in hour %d\n", > + rd_params.min_scrub_cycle_hrs); > + return -EINVAL; > + } > + wr_attrs.scrub_cycle_hrs = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, > + params->scrub_cycle_hrs); > + wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, > + rd_params.enable); > + break; > + } > + > + ret = cxl_set_feature(mds, cxl_patrol_scrub_uuid, > + cxl_ps_ctx->set_version, > + &wr_attrs, sizeof(wr_attrs), > + CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET); > + if (ret) { > + dev_err(dev, "CXL patrol scrub set feature failed ret=%d\n", ret); > + return ret; > + } > + > + return 0; > +} > + > +static int cxl_ps_set_attrs(struct device *dev, void *drv_data, > + struct cxl_memdev_ps_params *params, > + enum cxl_scrub_param param_type) > +{ > + struct cxl_patrol_scrub_context *cxl_ps_ctx = drv_data; > + struct cxl_memdev *cxlmd; > + struct cxl_dev_state *cxlds; > + struct cxl_memdev_state *mds; > + int ret, i; > + > + if (cxl_ps_ctx->cxlr) { > + struct cxl_region *cxlr = cxl_ps_ctx->cxlr; > + struct cxl_region_params *p = &cxlr->params; > + > + for (i = p->interleave_ways - 1; i >= 0; i--) { > + struct cxl_endpoint_decoder *cxled = p->targets[i]; > + > + cxlmd = cxled_to_memdev(cxled); > + cxlds = cxlmd->cxlds; > + mds = to_cxl_memdev_state(cxlds); > + ret = cxl_mem_ps_set_attrs(dev, drv_data, mds, > + params, param_type); > + if (ret) > + return ret; > + } > + return 0; > + } > + cxlmd = cxl_ps_ctx->cxlmd; > + cxlds = cxlmd->cxlds; > + mds = to_cxl_memdev_state(cxlds); > + 
> + return cxl_mem_ps_set_attrs(dev, drv_data, mds, params, param_type); > +} > + > +static int cxl_patrol_scrub_get_enabled_bg(struct device *dev, void *drv_data, bool *enabled) > +{ > + struct cxl_memdev_ps_params params; > + int ret; > + > + ret = cxl_ps_get_attrs(dev, drv_data, ¶ms); > + if (ret) > + return ret; > + > + *enabled = params.enable; > + > + return 0; > +} > + > +static int cxl_patrol_scrub_set_enabled_bg(struct device *dev, void *drv_data, bool enable) > +{ > + struct cxl_memdev_ps_params params = { > + .enable = enable, > + }; > + > + return cxl_ps_set_attrs(dev, drv_data, ¶ms, CXL_PS_PARAM_ENABLE); > +} > + > +static int cxl_patrol_scrub_read_min_scrub_cycle(struct device *dev, void *drv_data, > + u32 *min) > +{ > + struct cxl_memdev_ps_params params; > + int ret; > + > + ret = cxl_ps_get_attrs(dev, drv_data, ¶ms); > + if (ret) > + return ret; > + *min = params.min_scrub_cycle_hrs * CXL_DEV_HOUR_IN_SECS; > + > + return 0; > +} > + > +static int cxl_patrol_scrub_read_max_scrub_cycle(struct device *dev, void *drv_data, > + u32 *max) > +{ > + *max = U8_MAX * CXL_DEV_HOUR_IN_SECS; /* Max set by register size */ > + > + return 0; > +} > + > +static int cxl_patrol_scrub_read_scrub_cycle(struct device *dev, void *drv_data, > + u32 *scrub_cycle_secs) > +{ > + struct cxl_memdev_ps_params params; > + int ret; > + > + ret = cxl_ps_get_attrs(dev, drv_data, ¶ms); > + if (ret) > + return ret; > + > + *scrub_cycle_secs = params.scrub_cycle_hrs * CXL_DEV_HOUR_IN_SECS; > + > + return 0; > +} > + > +static int cxl_patrol_scrub_write_scrub_cycle(struct device *dev, void *drv_data, > + u32 scrub_cycle_secs) > +{ > + struct cxl_memdev_ps_params params = { > + .scrub_cycle_hrs = scrub_cycle_secs / CXL_DEV_HOUR_IN_SECS, > + }; > + > + return cxl_ps_set_attrs(dev, drv_data, ¶ms, CXL_PS_PARAM_SCRUB_CYCLE); > +} > + > +static const struct edac_scrub_ops cxl_ps_scrub_ops = { > + .get_enabled_bg = cxl_patrol_scrub_get_enabled_bg, > + .set_enabled_bg = cxl_patrol_scrub_set_enabled_bg, > + .get_min_cycle = cxl_patrol_scrub_read_min_scrub_cycle, > + .get_max_cycle = cxl_patrol_scrub_read_max_scrub_cycle, > + .get_cycle_duration = cxl_patrol_scrub_read_scrub_cycle, > + .set_cycle_duration = cxl_patrol_scrub_write_scrub_cycle, > +}; > + > +int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) > +{ > + struct edac_dev_feature ras_features[CXL_DEV_NUM_RAS_FEATURES]; > + struct cxl_patrol_scrub_context *cxl_ps_ctx; > + char cxl_dev_name[CXL_SCRUB_NAME_LEN]; > + struct cxl_feat_entry feat_entry; > + struct cxl_memdev_state *mds; > + struct cxl_dev_state *cxlds; > + int num_ras_features = 0; > + u8 scrub_inst = 0; > + int rc, i; > + > + if (cxlr) { > + struct cxl_region_params *p = &cxlr->params; > + > + for (i = p->interleave_ways - 1; i >= 0; i--) { > + struct cxl_endpoint_decoder *cxled = p->targets[i]; > + > + cxlmd = cxled_to_memdev(cxled); > + cxlds = cxlmd->cxlds; > + mds = to_cxl_memdev_state(cxlds); > + memset(&feat_entry, 0, sizeof(feat_entry)); > + rc = cxl_get_supported_feature_entry(mds, &cxl_patrol_scrub_uuid, > + &feat_entry); > + if (rc < 0) > + return rc; > + if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) > + return -EOPNOTSUPP; > + } > + } else { > + cxlds = cxlmd->cxlds; > + mds = to_cxl_memdev_state(cxlds); > + rc = cxl_get_supported_feature_entry(mds, &cxl_patrol_scrub_uuid, > + &feat_entry); > + if (rc < 0) > + return rc; > + > + if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) > + return -EOPNOTSUPP; > + } > + > + cxl_ps_ctx = 
devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ps_ctx), GFP_KERNEL); > + if (!cxl_ps_ctx) > + return -ENOMEM; > + > + *cxl_ps_ctx = (struct cxl_patrol_scrub_context) { > + .get_feat_size = feat_entry.get_feat_size, > + .set_feat_size = feat_entry.set_feat_size, > + .get_version = feat_entry.get_feat_ver, > + .set_version = feat_entry.set_feat_ver, > + .set_effects = feat_entry.set_effects, > + .instance = scrub_inst++, > + }; > + if (cxlr) { > + snprintf(cxl_dev_name, sizeof(cxl_dev_name), > + "cxl_region%d", cxlr->id); > + cxl_ps_ctx->cxlr = cxlr; > + } else { > + snprintf(cxl_dev_name, sizeof(cxl_dev_name), > + "%s_%s", "cxl", dev_name(&cxlmd->dev)); > + cxl_ps_ctx->cxlmd = cxlmd; > + } > + > + ras_features[num_ras_features].ft_type = RAS_FEAT_SCRUB; > + ras_features[num_ras_features].instance = cxl_ps_ctx->instance; > + ras_features[num_ras_features].scrub_ops = &cxl_ps_scrub_ops; > + ras_features[num_ras_features].ctx = cxl_ps_ctx; > + num_ras_features++; > + > + return edac_dev_register(&cxlmd->dev, cxl_dev_name, NULL, > + num_ras_features, ras_features); > +} > +EXPORT_SYMBOL_NS_GPL(cxl_mem_ras_features_init, CXL); > diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c > index e701e4b04032..4292765606cd 100644 > --- a/drivers/cxl/core/region.c > +++ b/drivers/cxl/core/region.c > @@ -3443,6 +3443,12 @@ static int cxl_region_probe(struct device *dev) > p->res->start, p->res->end, cxlr, > is_system_ram) > 0) > return 0; > + > + rc = cxl_mem_ras_features_init(NULL, cxlr); > + if (rc) > + dev_warn(&cxlr->dev, "CXL RAS features init for region_id=%d failed\n", > + cxlr->id); > + > return devm_cxl_add_dax_region(cxlr); > default: > dev_dbg(&cxlr->dev, "unsupported region mode: %d\n", > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > index fb356be8b426..9259c5d70a65 100644 > --- a/drivers/cxl/cxlmem.h > +++ b/drivers/cxl/cxlmem.h > @@ -933,6 +933,13 @@ int cxl_trigger_poison_list(struct cxl_memdev *cxlmd); > int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa); > int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa); > > +#if IS_ENABLED(CONFIG_CXL_RAS_FEAT) > +int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr); > +#else > +static inline int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr) > +{ return 0; } > +#endif > + > #ifdef CONFIG_CXL_SUSPEND > void cxl_mem_active_inc(void); > void cxl_mem_active_dec(void); > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c > index a9fd5cd5a0d2..23ef99e02182 100644 > --- a/drivers/cxl/mem.c > +++ b/drivers/cxl/mem.c > @@ -116,6 +116,10 @@ static int cxl_mem_probe(struct device *dev) > if (!cxlds->media_ready) > return -EBUSY; > > + rc = cxl_mem_ras_features_init(cxlmd, NULL); > + if (rc) > + dev_warn(&cxlmd->dev, "CXL RAS features init failed\n"); > + > /* > * Someone is trying to reattach this device after it lost its port > * connection (an endpoint port previously registered by this memdev was
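
One closing note on the region path, mostly for other readers: cxl_ps_get_attrs()
reports a region's minimum scrub cycle as the largest of the member devices' minimums,
since the chosen cycle has to be honored by every memdev backing the region. Condensed
into a standalone helper it amounts to the below (sketch only, the helper name is made
up; the patch does this inline in the loop):

static u16 cxl_region_min_scrub_cycle(const struct cxl_memdev_ps_params *per_dev,
				      int nr_devs)
{
	u16 min_cycle = 0;
	int i;

	for (i = 0; i < nr_devs; i++) {
		/* a device needing a longer minimum raises the region's floor */
		if (per_dev[i].min_scrub_cycle_hrs > min_cycle)
			min_cycle = per_dev[i].min_scrub_cycle_hrs;
	}

	return min_cycle;
}

Might be worth a one-line comment in cxl_ps_get_attrs() stating that intent, since
"smallest of the minimums" would be the naive expectation.
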