Date: Mon, 4 Nov 2024 11:16:14 -0700
From: Dave Jiang <dave.jiang@intel.com>
Subject: Re: [PATCH v15 07/15] cxl/memfeature: Add CXL memory device patrol scrub control feature
To: shiju.jose@huawei.com, linux-edac@vger.kernel.org, linux-cxl@vger.kernel.org,
 linux-acpi@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: bp@alien8.de, tony.luck@intel.com, rafael@kernel.org, lenb@kernel.org,
 mchehab@kernel.org, dan.j.williams@intel.com, dave@stgolabs.net,
 jonathan.cameron@huawei.com, gregkh@linuxfoundation.org, sudeep.holla@arm.com,
 jassisinghbrar@gmail.com, alison.schofield@intel.com, vishal.l.verma@intel.com,
 ira.weiny@intel.com, david@redhat.com, Vilas.Sridharan@amd.com, leo.duran@amd.com,
 Yazen.Ghannam@amd.com, rientjes@google.com, jiaqiyan@google.com, Jon.Grimm@amd.com,
 dave.hansen@linux.intel.com, naoya.horiguchi@nec.com, james.morse@arm.com,
 jthoughton@google.com, somasundaram.a@hpe.com, erdemaktas@google.com,
 pgonda@google.com, duenwen@google.com, gthelen@google.com,
 wschwartz@amperecomputing.com, dferguson@amperecomputing.com,
 wbs@os.amperecomputing.com, nifan.cxl@gmail.com, tanxiaofei@huawei.com,
 prime.zeng@hisilicon.com, roberto.sassu@huawei.com, kangkang.shen@futurewei.com,
 wanghuiqiang@huawei.com, linuxarm@huawei.com
References: <20241101091735.1465-1-shiju.jose@huawei.com> <20241101091735.1465-8-shiju.jose@huawei.com>
In-Reply-To: <20241101091735.1465-8-shiju.jose@huawei.com>
Content-Type: text/plain; charset=UTF-8
On 11/1/24 2:17 AM, shiju.jose@huawei.com wrote:
> From: Shiju Jose
>
> CXL spec 3.1 section 8.2.9.9.11.1 describes the device patrol scrub control
> feature. The device patrol scrub proactively locates and corrects errors
> on a regular cycle.
>
> Allow specifying the number of hours within which the patrol scrub must be
> completed, subject to minimum and maximum limits reported by the device.
> Also allow disabling scrub, trading off error rates against
> performance.
>
> Add support for patrol scrub control on CXL memory devices.
> Register with the EDAC device driver, which retrieves the scrub attribute
> descriptors from EDAC scrub and exposes the sysfs scrub control attributes
> to userspace. For example, scrub control for the CXL memory device
> "cxl_mem0" is exposed in /sys/bus/edac/devices/cxl_mem0/scrubX/.
>
> Additionally, add support for region-based CXL memory patrol scrub control.
> CXL memory regions may be interleaved across one or more CXL memory
> devices. For example, region-based scrub control for "cxl_region1" is
> exposed in /sys/bus/edac/devices/cxl_region1/scrubX/.
>
> Co-developed-by: Jonathan Cameron
> Signed-off-by: Jonathan Cameron
> Signed-off-by: Shiju Jose

Reviewed-by: Dave Jiang

> ---
>  drivers/cxl/Kconfig           |  18 ++
>  drivers/cxl/core/Makefile     |   1 +
>  drivers/cxl/core/memfeature.c | 384 ++++++++++++++++++++++++++++++++++
>  drivers/cxl/core/region.c     |   6 +
>  drivers/cxl/cxlmem.h          |   7 +
>  drivers/cxl/mem.c             |   4 +
>  6 files changed, 420 insertions(+)
>  create mode 100644 drivers/cxl/core/memfeature.c
>
> diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
> index 29c192f20082..6d79fb3e772e 100644
> --- a/drivers/cxl/Kconfig
> +++ b/drivers/cxl/Kconfig
> @@ -145,4 +145,22 @@ config CXL_REGION_INVALIDATION_TEST
>  	  If unsure, or if this kernel is meant for production environments,
>  	  say N.
>
> +config CXL_RAS_FEAT
> +	tristate "CXL: Memory RAS features"
> +	depends on CXL_PCI
> +	depends on CXL_MEM
> +	depends on EDAC
> +	help
> +	  The CXL memory RAS feature control is optional and allows the host
> +	  to control the RAS feature configurations of CXL Type 3 devices.
> +
> +	  It registers with the EDAC device subsystem to expose control
> +	  attributes of the CXL memory device's RAS features to the user.
> +	  It provides interface functions to support configuring the CXL
> +	  memory device's RAS features.
> +
> +	  Say 'y/m/n' to enable/disable control of the CXL.mem device's RAS
> +	  features. See section 8.2.9.9.11 of the CXL 3.1 specification for
> +	  detailed information on CXL memory device features.
> +
>  endif
> diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
> index 9259bcc6773c..2a3c7197bc23 100644
> --- a/drivers/cxl/core/Makefile
> +++ b/drivers/cxl/core/Makefile
> @@ -16,3 +16,4 @@ cxl_core-y += pmu.o
>  cxl_core-y += cdat.o
>  cxl_core-$(CONFIG_TRACING) += trace.o
>  cxl_core-$(CONFIG_CXL_REGION) += region.o
> +cxl_core-$(CONFIG_CXL_RAS_FEAT) += memfeature.o
> diff --git a/drivers/cxl/core/memfeature.c b/drivers/cxl/core/memfeature.c
> new file mode 100644
> index 000000000000..41298acc01de
> --- /dev/null
> +++ b/drivers/cxl/core/memfeature.c
> @@ -0,0 +1,384 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * CXL memory RAS feature driver.
> + *
> + * Copyright (c) 2024 HiSilicon Limited.
> + *
> + * - Supports functions to configure RAS features of the
> + *   CXL memory devices.
> + * - Registers with the EDAC device subsystem driver to expose
> + *   the features sysfs attributes to the user for configuring
> + *   CXL memory RAS feature.
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#define CXL_DEV_NUM_RAS_FEATURES	1
> +#define CXL_DEV_HOUR_IN_SECS	3600
> +
> +#define CXL_SCRUB_NAME_LEN	128
> +
> +/* CXL memory patrol scrub control definitions */
> +static const uuid_t cxl_patrol_scrub_uuid =
> +	UUID_INIT(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33, 0x75, 0x77, 0x4e, 0x06, 0xdb, 0x8a);
> +
> +/* CXL memory patrol scrub control functions */
> +struct cxl_patrol_scrub_context {
> +	u8 instance;
> +	u16 get_feat_size;
> +	u16 set_feat_size;
> +	u8 get_version;
> +	u8 set_version;
> +	u16 set_effects;
> +	struct cxl_memdev *cxlmd;
> +	struct cxl_region *cxlr;
> +};
> +
> +/**
> + * struct cxl_memdev_ps_params - CXL memory patrol scrub parameter data structure.
> + * @enable: [IN & OUT] enable(1)/disable(0) patrol scrub.
> + * @scrub_cycle_changeable: [OUT] scrub cycle attribute of patrol scrub is changeable.
> + * @scrub_cycle_hrs: [IN] Requested patrol scrub cycle in hours.
> + *                   [OUT] Current patrol scrub cycle in hours.
> + * @min_scrub_cycle_hrs: [OUT] minimum patrol scrub cycle in hours supported.
> + */
> +struct cxl_memdev_ps_params {
> +	bool enable;
> +	bool scrub_cycle_changeable;
> +	u16 scrub_cycle_hrs;
> +	u16 min_scrub_cycle_hrs;
> +};
> +
> +enum cxl_scrub_param {
> +	CXL_PS_PARAM_ENABLE,
> +	CXL_PS_PARAM_SCRUB_CYCLE,
> +};
> +
> +#define CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK	BIT(0)
> +#define CXL_MEMDEV_PS_SCRUB_CYCLE_REALTIME_REPORT_CAP_MASK	BIT(1)
> +#define CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK	GENMASK(7, 0)
> +#define CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK	GENMASK(15, 8)
> +#define CXL_MEMDEV_PS_FLAG_ENABLED_MASK	BIT(0)
> +
> +struct cxl_memdev_ps_rd_attrs {
> +	u8 scrub_cycle_cap;
> +	__le16 scrub_cycle_hrs;
> +	u8 scrub_flags;
> +} __packed;
> +
> +struct cxl_memdev_ps_wr_attrs {
> +	u8 scrub_cycle_hrs;
> +	u8 scrub_flags;
> +} __packed;
> +
> +static int cxl_mem_ps_get_attrs(struct cxl_memdev_state *mds,
> +				struct cxl_memdev_ps_params *params)
> +{
> +	size_t rd_data_size = sizeof(struct cxl_memdev_ps_rd_attrs);
> +	size_t data_size;
> +	struct cxl_memdev_ps_rd_attrs *rd_attrs __free(kfree) =
> +		kmalloc(rd_data_size, GFP_KERNEL);
> +	if (!rd_attrs)
> +		return -ENOMEM;
> +
> +	data_size = cxl_get_feature(mds, cxl_patrol_scrub_uuid,
> +				    CXL_GET_FEAT_SEL_CURRENT_VALUE,
> +				    rd_attrs, rd_data_size);
> +	if (!data_size)
> +		return -EIO;
> +
> +	params->scrub_cycle_changeable = FIELD_GET(CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK,
> +						   rd_attrs->scrub_cycle_cap);
> +	params->enable = FIELD_GET(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
> +				   rd_attrs->scrub_flags);
> +	params->scrub_cycle_hrs = FIELD_GET(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK,
> +					    rd_attrs->scrub_cycle_hrs);
> +	params->min_scrub_cycle_hrs =
> +		FIELD_GET(CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK,
> +			  rd_attrs->scrub_cycle_hrs);
> +
> +	return 0;
> +}
> +
> +static int cxl_ps_get_attrs(struct device *dev,
> +			    struct cxl_patrol_scrub_context *cxl_ps_ctx,
> +			    struct cxl_memdev_ps_params *params)
> +{
> +	struct cxl_memdev *cxlmd;
> +	struct cxl_dev_state *cxlds;
> +	struct cxl_memdev_state *mds;
> +	u16 min_scrub_cycle = 0;
> +	int i, ret;
> +
> +	if (cxl_ps_ctx->cxlr) {
> +		struct cxl_region *cxlr = cxl_ps_ctx->cxlr;
> +		struct cxl_region_params *p = &cxlr->params;
> +
> +		for (i = p->interleave_ways - 1; i >= 0; i--) {
> +			struct cxl_endpoint_decoder *cxled = p->targets[i];
> +
> +			cxlmd = cxled_to_memdev(cxled);
> +			cxlds = cxlmd->cxlds;
> +			mds = to_cxl_memdev_state(cxlds);
> +			ret = cxl_mem_ps_get_attrs(mds, params);
> +			if (ret)
> +				return ret;
> +
> +			if (params->min_scrub_cycle_hrs > min_scrub_cycle)
> +				min_scrub_cycle = params->min_scrub_cycle_hrs;
> +		}
> +		params->min_scrub_cycle_hrs = min_scrub_cycle;
> +		return 0;
> +	}
> +	cxlmd = cxl_ps_ctx->cxlmd;
> +	cxlds = cxlmd->cxlds;
> +	mds = to_cxl_memdev_state(cxlds);
> +
> +	return cxl_mem_ps_get_attrs(mds, params);
> +}
> +
> +static int cxl_mem_ps_set_attrs(struct device *dev,
> +				struct cxl_patrol_scrub_context *cxl_ps_ctx,
> +				struct cxl_memdev_state *mds,
> +				struct cxl_memdev_ps_params *params,
> +				enum cxl_scrub_param param_type)
> +{
> +	struct cxl_memdev_ps_wr_attrs wr_attrs;
> +	struct cxl_memdev_ps_params rd_params;
> +	int ret;
> +
> +	ret = cxl_mem_ps_get_attrs(mds, &rd_params);
> +	if (ret) {
> +		dev_err(dev, "Get cxlmemdev patrol scrub params failed ret=%d\n",
> +			ret);
> +		return ret;
> +	}
> +
> +	switch (param_type) {
> +	case CXL_PS_PARAM_ENABLE:
> +		wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
> +						  params->enable);
> +		wr_attrs.scrub_cycle_hrs = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK,
> +						      rd_params.scrub_cycle_hrs);
> +		break;
> +	case CXL_PS_PARAM_SCRUB_CYCLE:
> +		if (params->scrub_cycle_hrs <
> +		    rd_params.min_scrub_cycle_hrs) {
> +			dev_err(dev, "Invalid CXL patrol scrub cycle(%d) to set\n",
> +				params->scrub_cycle_hrs);
> +			dev_err(dev, "Minimum supported CXL patrol scrub cycle in hour %d\n",
> +				rd_params.min_scrub_cycle_hrs);
> +			return -EINVAL;
> +		}
> +		wr_attrs.scrub_cycle_hrs = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK,
> +						      params->scrub_cycle_hrs);
> +		wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK,
> +						  rd_params.enable);
> +		break;
> +	}
> +
> +	ret = cxl_set_feature(mds, cxl_patrol_scrub_uuid,
> +			      cxl_ps_ctx->set_version,
> +			      &wr_attrs, sizeof(wr_attrs),
> +			      CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET);
> +	if (ret) {
> +		dev_err(dev, "CXL patrol scrub set feature failed ret=%d\n", ret);
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int cxl_ps_set_attrs(struct device *dev,
> +			    struct cxl_patrol_scrub_context *cxl_ps_ctx,
> +			    struct cxl_memdev_ps_params *params,
> +			    enum cxl_scrub_param param_type)
> +{
> +	struct cxl_memdev *cxlmd;
> +	struct cxl_dev_state *cxlds;
> +	struct cxl_memdev_state *mds;
> +	int ret, i;
> +
> +	if (cxl_ps_ctx->cxlr) {
> +		struct cxl_region *cxlr = cxl_ps_ctx->cxlr;
> +		struct cxl_region_params *p = &cxlr->params;
> +
> +		for (i = p->interleave_ways - 1; i >= 0; i--) {
> +			struct cxl_endpoint_decoder *cxled = p->targets[i];
> +
> +			cxlmd = cxled_to_memdev(cxled);
> +			cxlds = cxlmd->cxlds;
> +			mds = to_cxl_memdev_state(cxlds);
> +			ret = cxl_mem_ps_set_attrs(dev, cxl_ps_ctx, mds,
> +						   params, param_type);
> +			if (ret)
> +				return ret;
> +		}
> +		return 0;
> +	}
> +	cxlmd = cxl_ps_ctx->cxlmd;
> +	cxlds = cxlmd->cxlds;
> +	mds = to_cxl_memdev_state(cxlds);
> +
> +	return cxl_mem_ps_set_attrs(dev, cxl_ps_ctx, mds, params, param_type);
> +}
> +
> +static int cxl_patrol_scrub_get_enabled_bg(struct device *dev, void *drv_data, bool *enabled)
> +{
> +	struct cxl_patrol_scrub_context *ctx = drv_data;
> +	struct cxl_memdev_ps_params params;
> +	int ret;
> +
> +	ret = cxl_ps_get_attrs(dev, ctx, &params);
> +	if (ret)
> +		return ret;
> +
> +	*enabled = params.enable;
> +
> +	return 0;
> +}
> +
> +static int cxl_patrol_scrub_set_enabled_bg(struct device *dev, void *drv_data, bool enable)
> +{
> +	struct cxl_patrol_scrub_context *ctx = drv_data;
> +	struct cxl_memdev_ps_params params = {
> +		.enable = enable,
> +	};
> +
> +	return cxl_ps_set_attrs(dev, ctx, &params, CXL_PS_PARAM_ENABLE);
> +}
> +
> +static int cxl_patrol_scrub_read_min_scrub_cycle(struct device *dev, void *drv_data,
> +						 u32 *min)
> +{
> +	struct cxl_patrol_scrub_context *ctx = drv_data;
> +	struct cxl_memdev_ps_params params;
> +	int ret;
> +
> +	ret = cxl_ps_get_attrs(dev, ctx, &params);
> +	if (ret)
> +		return ret;
> +	*min = params.min_scrub_cycle_hrs * CXL_DEV_HOUR_IN_SECS;
> +
> +	return 0;
> +}
> +
> +static int cxl_patrol_scrub_read_max_scrub_cycle(struct device *dev, void *drv_data,
> +						 u32 *max)
> +{
> +	*max = U8_MAX * CXL_DEV_HOUR_IN_SECS;	/* Max set by register size */
> +
> +	return 0;
> +}
> +
> +static int cxl_patrol_scrub_read_scrub_cycle(struct device *dev, void *drv_data,
> +					     u32 *scrub_cycle_secs)
> +{
> +	struct cxl_patrol_scrub_context *ctx = drv_data;
> +	struct cxl_memdev_ps_params params;
> +	int ret;
> +
> +	ret = cxl_ps_get_attrs(dev, ctx, &params);
> +	if (ret)
> +		return ret;
> +
> +	*scrub_cycle_secs = params.scrub_cycle_hrs * CXL_DEV_HOUR_IN_SECS;
> +
> +	return 0;
> +}
> +
> +static int cxl_patrol_scrub_write_scrub_cycle(struct device *dev, void *drv_data,
> +					      u32 scrub_cycle_secs)
> +{
> +	struct cxl_patrol_scrub_context *ctx = drv_data;
> +	struct cxl_memdev_ps_params params = {
> +		.scrub_cycle_hrs = scrub_cycle_secs / CXL_DEV_HOUR_IN_SECS,
> +	};
> +
> +	return cxl_ps_set_attrs(dev, ctx, &params, CXL_PS_PARAM_SCRUB_CYCLE);
> +}
> +
> +static const struct edac_scrub_ops cxl_ps_scrub_ops = {
> +	.get_enabled_bg = cxl_patrol_scrub_get_enabled_bg,
> +	.set_enabled_bg = cxl_patrol_scrub_set_enabled_bg,
> +	.get_min_cycle = cxl_patrol_scrub_read_min_scrub_cycle,
> +	.get_max_cycle =
> +		cxl_patrol_scrub_read_max_scrub_cycle,
> +	.get_cycle_duration = cxl_patrol_scrub_read_scrub_cycle,
> +	.set_cycle_duration = cxl_patrol_scrub_write_scrub_cycle,
> +};
> +
> +int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr)
> +{
> +	struct edac_dev_feature ras_features[CXL_DEV_NUM_RAS_FEATURES];
> +	struct cxl_patrol_scrub_context *cxl_ps_ctx;
> +	char cxl_dev_name[CXL_SCRUB_NAME_LEN];
> +	struct cxl_feat_entry feat_entry;
> +	struct cxl_memdev_state *mds;
> +	struct cxl_dev_state *cxlds;
> +	int num_ras_features = 0;
> +	u8 scrub_inst = 0;
> +	int rc, i;
> +
> +	if (cxlr) {
> +		struct cxl_region_params *p = &cxlr->params;
> +
> +		for (i = p->interleave_ways - 1; i >= 0; i--) {
> +			struct cxl_endpoint_decoder *cxled = p->targets[i];
> +
> +			cxlmd = cxled_to_memdev(cxled);
> +			cxlds = cxlmd->cxlds;
> +			mds = to_cxl_memdev_state(cxlds);
> +			memset(&feat_entry, 0, sizeof(feat_entry));
> +			rc = cxl_get_supported_feature_entry(mds, &cxl_patrol_scrub_uuid,
> +							     &feat_entry);
> +			if (rc < 0)
> +				return rc;
> +			if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE))
> +				return -EOPNOTSUPP;
> +		}
> +	} else {
> +		cxlds = cxlmd->cxlds;
> +		mds = to_cxl_memdev_state(cxlds);
> +		rc = cxl_get_supported_feature_entry(mds, &cxl_patrol_scrub_uuid,
> +						     &feat_entry);
> +		if (rc < 0)
> +			return rc;
> +
> +		if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE))
> +			return -EOPNOTSUPP;
> +	}
> +
> +	cxl_ps_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ps_ctx), GFP_KERNEL);
> +	if (!cxl_ps_ctx)
> +		return -ENOMEM;
> +
> +	*cxl_ps_ctx = (struct cxl_patrol_scrub_context) {
> +		.get_feat_size = feat_entry.get_feat_size,
> +		.set_feat_size = feat_entry.set_feat_size,
> +		.get_version = feat_entry.get_feat_ver,
> +		.set_version = feat_entry.set_feat_ver,
> +		.set_effects = feat_entry.set_effects,
> +		.instance = scrub_inst++,
> +	};
> +	if (cxlr) {
> +		snprintf(cxl_dev_name, sizeof(cxl_dev_name),
> +			 "cxl_region%d", cxlr->id);
> +		cxl_ps_ctx->cxlr =
> +			cxlr;
> +	} else {
> +		snprintf(cxl_dev_name, sizeof(cxl_dev_name),
> +			 "%s_%s", "cxl", dev_name(&cxlmd->dev));
> +		cxl_ps_ctx->cxlmd = cxlmd;
> +	}
> +
> +	ras_features[num_ras_features].ft_type = RAS_FEAT_SCRUB;
> +	ras_features[num_ras_features].instance = cxl_ps_ctx->instance;
> +	ras_features[num_ras_features].scrub_ops = &cxl_ps_scrub_ops;
> +	ras_features[num_ras_features].ctx = cxl_ps_ctx;
> +	num_ras_features++;
> +
> +	return edac_dev_register(&cxlmd->dev, cxl_dev_name, NULL,
> +				 num_ras_features, ras_features);
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_mem_ras_features_init, CXL);
> diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
> index e701e4b04032..4292765606cd 100644
> --- a/drivers/cxl/core/region.c
> +++ b/drivers/cxl/core/region.c
> @@ -3443,6 +3443,12 @@ static int cxl_region_probe(struct device *dev)
> 				p->res->start, p->res->end, cxlr,
> 				is_system_ram) > 0)
> 			return 0;
> +
> +		rc = cxl_mem_ras_features_init(NULL, cxlr);
> +		if (rc)
> +			dev_warn(&cxlr->dev, "CXL RAS features init for region_id=%d failed\n",
> +				 cxlr->id);
> +
> 		return devm_cxl_add_dax_region(cxlr);
> 	default:
> 		dev_dbg(&cxlr->dev, "unsupported region mode: %d\n",
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index fb356be8b426..9259c5d70a65 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -933,6 +933,13 @@ int cxl_trigger_poison_list(struct cxl_memdev *cxlmd);
>  int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa);
>  int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa);
>
> +#if IS_ENABLED(CONFIG_CXL_RAS_FEAT)
> +int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr);
> +#else
> +static inline int cxl_mem_ras_features_init(struct cxl_memdev *cxlmd, struct cxl_region *cxlr)
> +{ return 0; }
> +#endif
> +
>  #ifdef CONFIG_CXL_SUSPEND
>  void cxl_mem_active_inc(void);
>  void cxl_mem_active_dec(void);
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index a9fd5cd5a0d2..23ef99e02182 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -116,6 +116,10 @@ static int cxl_mem_probe(struct device *dev)
>  	if (!cxlds->media_ready)
>  		return -EBUSY;
>
> +	rc = cxl_mem_ras_features_init(cxlmd, NULL);
> +	if (rc)
> +		dev_warn(&cxlmd->dev, "CXL RAS features init failed\n");
> +
>  	/*
>  	 * Someone is trying to reattach this device after it lost its port
>  	 * connection (an endpoint port previously registered by this memdev was