From: Shivank Garg <shivankg@amd.com>
To: <akpm@linux-foundation.org>, <david@kernel.org>
Cc: <lorenzo.stoakes@oracle.com>, <Liam.Howlett@oracle.com>,
<vbabka@kernel.org>, <willy@infradead.org>, <rppt@kernel.org>,
<surenb@google.com>, <mhocko@suse.com>, <ziy@nvidia.com>,
<matthew.brost@intel.com>, <joshua.hahnjy@gmail.com>,
<rakie.kim@sk.com>, <byungchul@sk.com>, <gourry@gourry.net>,
<ying.huang@linux.alibaba.com>, <apopple@nvidia.com>,
<dave@stgolabs.net>, <Jonathan.Cameron@huawei.com>,
<rkodsara@amd.com>, <vkoul@kernel.org>, <bharata@amd.com>,
<sj@kernel.org>, <weixugc@google.com>, <dan.j.williams@intel.com>,
<rientjes@google.com>, <xuezhengchu@huawei.com>,
<yiannis@zptcorp.com>, <dave.hansen@intel.com>,
<hannes@cmpxchg.org>, <jhubbard@nvidia.com>, <peterx@redhat.com>,
<riel@surriel.com>, <shakeel.butt@linux.dev>,
<stalexan@redhat.com>, <tj@kernel.org>, <nifan.cxl@gmail.com>,
<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
Shivank Garg <shivankg@amd.com>
Subject: [RFC PATCH v4 5/6] drivers/migrate_offload: add DMA batch copy driver (dcbm)
Date: Mon, 9 Mar 2026 12:07:31 +0000
Message-ID: <20260309120725.308854-14-shivankg@amd.com>
In-Reply-To: <20260309120725.308854-3-shivankg@amd.com>
Add a simple DMAEngine-based driver that uses memcpy channels to
batch-copy folios during page migration. It is primarily intended for
testing the copy offload infrastructure.

When a DMA copy fails, folios_copy_dma() returns an error and the
migration path falls back to the per-folio CPU copy.
Sysfs interface under /sys/kernel/dcbm/:
offloading - enable/disable DMA offload
nr_dma_chan - maximum number of DMA channels to use
folios_migrated - folios copied via DMA (write to reset)
folios_failures - folios that fell back to CPU copy (write to reset)
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
drivers/Kconfig | 2 +
drivers/Makefile | 2 +
drivers/migrate_offload/Kconfig | 8 +
drivers/migrate_offload/Makefile | 1 +
drivers/migrate_offload/dcbm/Makefile | 1 +
drivers/migrate_offload/dcbm/dcbm.c | 457 ++++++++++++++++++++++++++
6 files changed, 471 insertions(+)
create mode 100644 drivers/migrate_offload/Kconfig
create mode 100644 drivers/migrate_offload/Makefile
create mode 100644 drivers/migrate_offload/dcbm/Makefile
create mode 100644 drivers/migrate_offload/dcbm/dcbm.c
diff --git a/drivers/Kconfig b/drivers/Kconfig
index c0f1fb893ec0..3dbea1380603 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -255,4 +255,6 @@ source "drivers/cdx/Kconfig"
source "drivers/resctrl/Kconfig"
+source "drivers/migrate_offload/Kconfig"
+
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 53fbd2e0acdd..f55bddf490cc 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -42,6 +42,8 @@ obj-y += clk/
# really early.
obj-$(CONFIG_DMADEVICES) += dma/
+obj-$(CONFIG_MIGRATION_COPY_OFFLOAD) += migrate_offload/
+
# SOC specific infrastructure drivers.
obj-y += soc/
obj-$(CONFIG_PM_GENERIC_DOMAINS) += pmdomain/
diff --git a/drivers/migrate_offload/Kconfig b/drivers/migrate_offload/Kconfig
new file mode 100644
index 000000000000..0bbaedbae4ad
--- /dev/null
+++ b/drivers/migrate_offload/Kconfig
@@ -0,0 +1,8 @@
+config DCBM_DMA
+ tristate "DMA Core Batch Migrator"
+ depends on MIGRATION_COPY_OFFLOAD && DMA_ENGINE
+ help
+ DMA-based batch copy engine for page migration. Uses
+ DMAEngine memcpy channels to offload folio data copies
+ during migration. Primarily intended for testing the copy
+ offload infrastructure.
diff --git a/drivers/migrate_offload/Makefile b/drivers/migrate_offload/Makefile
new file mode 100644
index 000000000000..9e16018beb15
--- /dev/null
+++ b/drivers/migrate_offload/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_DCBM_DMA) += dcbm/
diff --git a/drivers/migrate_offload/dcbm/Makefile b/drivers/migrate_offload/dcbm/Makefile
new file mode 100644
index 000000000000..56ba47cce0f1
--- /dev/null
+++ b/drivers/migrate_offload/dcbm/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_DCBM_DMA) += dcbm.o
diff --git a/drivers/migrate_offload/dcbm/dcbm.c b/drivers/migrate_offload/dcbm/dcbm.c
new file mode 100644
index 000000000000..89751d03101e
--- /dev/null
+++ b/drivers/migrate_offload/dcbm/dcbm.c
@@ -0,0 +1,457 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * DMA Core Batch Migrator (DCBM)
+ *
+ * Uses DMAEngine memcpy channels to offload batch folio copies during
+ * page migration. Reference driver meant for testing the offload
+ * infrastructure.
+ *
+ * Copyright (C) 2024-2026 Advanced Micro Devices, Inc.
+ */
+
+#include <linux/module.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/migrate.h>
+#include <linux/migrate_copy_offload.h>
+
+#define MAX_DMA_CHANNELS 16
+
+static unsigned long long folios_migrated;
+static unsigned long long folios_failures;
+
+static bool offloading_enabled;
+static unsigned int nr_dma_channels = 1;
+static DEFINE_MUTEX(dcbm_mutex);
+
+struct dma_work {
+ struct dma_chan *chan;
+ struct completion done;
+ atomic_t pending;
+ struct sg_table *src_sgt;
+ struct sg_table *dst_sgt;
+ bool mapped;
+};
+
+static void dma_completion_callback(void *data)
+{
+ struct dma_work *work = data;
+
+ if (atomic_dec_and_test(&work->pending))
+ complete(&work->done);
+}
+
+static int setup_sg_tables(struct dma_work *work, struct list_head **src_pos,
+ struct list_head **dst_pos, int nr)
+{
+ struct scatterlist *sg_src, *sg_dst;
+ struct device *dev;
+ int i, ret;
+
+ work->src_sgt = kmalloc_obj(*work->src_sgt, GFP_KERNEL);
+ if (!work->src_sgt)
+ return -ENOMEM;
+ work->dst_sgt = kmalloc_obj(*work->dst_sgt, GFP_KERNEL);
+ if (!work->dst_sgt) {
+ ret = -ENOMEM;
+ goto err_free_src;
+ }
+
+ ret = sg_alloc_table(work->src_sgt, nr, GFP_KERNEL);
+ if (ret)
+ goto err_free_dst;
+ ret = sg_alloc_table(work->dst_sgt, nr, GFP_KERNEL);
+ if (ret)
+ goto err_free_src_table;
+
+ sg_src = work->src_sgt->sgl;
+ sg_dst = work->dst_sgt->sgl;
+ for (i = 0; i < nr; i++) {
+ struct folio *src = list_entry(*src_pos, struct folio, lru);
+ struct folio *dst = list_entry(*dst_pos, struct folio, lru);
+
+ sg_set_folio(sg_src, src, folio_size(src), 0);
+ sg_set_folio(sg_dst, dst, folio_size(dst), 0);
+
+ *src_pos = (*src_pos)->next;
+ *dst_pos = (*dst_pos)->next;
+
+ if (i < nr - 1) {
+ sg_src = sg_next(sg_src);
+ sg_dst = sg_next(sg_dst);
+ }
+ }
+
+ dev = dmaengine_get_dma_device(work->chan);
+ if (!dev) {
+ ret = -ENODEV;
+ goto err_free_dst_table;
+ }
+ ret = dma_map_sgtable(dev, work->src_sgt, DMA_TO_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
+ if (ret)
+ goto err_free_dst_table;
+ ret = dma_map_sgtable(dev, work->dst_sgt, DMA_FROM_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
+ if (ret)
+ goto err_unmap_src;
+
+ if (work->src_sgt->nents != work->dst_sgt->nents) {
+ ret = -EINVAL;
+ goto err_unmap_dst;
+ }
+ work->mapped = true;
+ return 0;
+
+err_unmap_dst:
+ dma_unmap_sgtable(dev, work->dst_sgt, DMA_FROM_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
+err_unmap_src:
+ dma_unmap_sgtable(dev, work->src_sgt, DMA_TO_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
+err_free_dst_table:
+ sg_free_table(work->dst_sgt);
+err_free_src_table:
+ sg_free_table(work->src_sgt);
+err_free_dst:
+ kfree(work->dst_sgt);
+ work->dst_sgt = NULL;
+err_free_src:
+ kfree(work->src_sgt);
+ work->src_sgt = NULL;
+ return ret;
+}
+
+static void cleanup_dma_work(struct dma_work *works, int actual_channels)
+{
+ struct device *dev;
+ int i;
+
+ if (!works)
+ return;
+
+ for (i = 0; i < actual_channels; i++) {
+ if (!works[i].chan)
+ continue;
+
+ dev = dmaengine_get_dma_device(works[i].chan);
+
+ if (works[i].mapped)
+ dmaengine_terminate_sync(works[i].chan);
+
+ if (dev && works[i].mapped) {
+ if (works[i].src_sgt) {
+ dma_unmap_sgtable(dev, works[i].src_sgt,
+ DMA_TO_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC |
+ DMA_ATTR_NO_KERNEL_MAPPING);
+ sg_free_table(works[i].src_sgt);
+ kfree(works[i].src_sgt);
+ }
+ if (works[i].dst_sgt) {
+ dma_unmap_sgtable(dev, works[i].dst_sgt,
+ DMA_FROM_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC |
+ DMA_ATTR_NO_KERNEL_MAPPING);
+ sg_free_table(works[i].dst_sgt);
+ kfree(works[i].dst_sgt);
+ }
+ }
+ dma_release_channel(works[i].chan);
+ }
+ kfree(works);
+}
+
+static int submit_dma_transfers(struct dma_work *work)
+{
+ struct scatterlist *sg_src, *sg_dst;
+ struct dma_async_tx_descriptor *tx;
+ unsigned long flags = DMA_CTRL_ACK;
+ dma_cookie_t cookie;
+ int i;
+
+ atomic_set(&work->pending, 1);
+
+ sg_src = work->src_sgt->sgl;
+ sg_dst = work->dst_sgt->sgl;
+ for_each_sgtable_dma_sg(work->src_sgt, sg_src, i) {
+ if (i == work->src_sgt->nents - 1)
+ flags |= DMA_PREP_INTERRUPT;
+
+ tx = dmaengine_prep_dma_memcpy(work->chan,
+ sg_dma_address(sg_dst),
+ sg_dma_address(sg_src),
+ sg_dma_len(sg_src), flags);
+ if (!tx) {
+ atomic_set(&work->pending, 0);
+ return -EIO;
+ }
+
+ if (i == work->src_sgt->nents - 1) {
+ tx->callback = dma_completion_callback;
+ tx->callback_param = work;
+ }
+
+ cookie = dmaengine_submit(tx);
+ if (dma_submit_error(cookie)) {
+ atomic_set(&work->pending, 0);
+ return -EIO;
+ }
+ sg_dst = sg_next(sg_dst);
+ }
+ return 0;
+}
+
+/**
+ * folios_copy_dma - copy a batch of folios via DMA memcpy
+ * @dst_list: destination folio list
+ * @src_list: source folio list
+ * @nr_folios: number of folios in each list
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+static int folios_copy_dma(struct list_head *dst_list,
+ struct list_head *src_list, unsigned int nr_folios)
+{
+ struct dma_work *works;
+ struct list_head *src_pos = src_list->next;
+ struct list_head *dst_pos = dst_list->next;
+ int i, folios_per_chan, ret;
+ dma_cap_mask_t mask;
+ int actual_channels = 0;
+ unsigned int max_channels;
+
+ max_channels = min3(nr_dma_channels, nr_folios,
+ (unsigned int)MAX_DMA_CHANNELS);
+
+ works = kcalloc(max_channels, sizeof(*works), GFP_KERNEL);
+ if (!works)
+ return -ENOMEM;
+
+ dma_cap_zero(mask);
+ dma_cap_set(DMA_MEMCPY, mask);
+
+ for (i = 0; i < max_channels; i++) {
+ works[actual_channels].chan = dma_request_chan_by_mask(&mask);
+ if (IS_ERR(works[actual_channels].chan))
+ break;
+ init_completion(&works[actual_channels].done);
+ actual_channels++;
+ }
+
+ if (actual_channels == 0) {
+ kfree(works);
+ return -ENODEV;
+ }
+
+ for (i = 0; i < actual_channels; i++) {
+ folios_per_chan = nr_folios * (i + 1) / actual_channels -
+ (nr_folios * i) / actual_channels;
+ if (folios_per_chan == 0)
+ continue;
+
+ ret = setup_sg_tables(&works[i], &src_pos, &dst_pos,
+ folios_per_chan);
+ if (ret)
+ goto err_cleanup;
+ }
+
+ for (i = 0; i < actual_channels; i++) {
+ if (!works[i].src_sgt) /* channel received no folios */
+ continue;
+ ret = submit_dma_transfers(&works[i]);
+ if (ret)
+ goto err_cleanup;
+ }
+
+ for (i = 0; i < actual_channels; i++) {
+ if (atomic_read(&works[i].pending) > 0)
+ dma_async_issue_pending(works[i].chan);
+ }
+
+ for (i = 0; i < actual_channels; i++) {
+ if (atomic_read(&works[i].pending) == 0)
+ continue;
+ if (!wait_for_completion_timeout(&works[i].done,
+ msecs_to_jiffies(10000))) {
+ ret = -ETIMEDOUT;
+ goto err_cleanup;
+ }
+ }
+
+ cleanup_dma_work(works, actual_channels);
+
+ mutex_lock(&dcbm_mutex);
+ folios_migrated += nr_folios;
+ mutex_unlock(&dcbm_mutex);
+ return 0;
+
+err_cleanup:
+ pr_warn_ratelimited("dcbm: DMA copy failed (%d), falling back to CPU\n",
+ ret);
+ cleanup_dma_work(works, actual_channels);
+
+ mutex_lock(&dcbm_mutex);
+ folios_failures += nr_folios;
+ mutex_unlock(&dcbm_mutex);
+ return ret;
+}
+
+/* TODO: tune based on usecase */
+static bool dma_should_batch(int reason)
+{
+ if (reason == MR_SYSCALL || reason == MR_COMPACTION || reason == MR_DEMOTION ||
+ reason == MR_NUMA_MISPLACED)
+ return true;
+ return false;
+}
+
+static struct migrator dma_migrator = {
+ .name = "DCBM",
+ .offload_copy = folios_copy_dma,
+ .should_batch = dma_should_batch,
+ .owner = THIS_MODULE,
+};
+
+static ssize_t offloading_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%d\n", offloading_enabled);
+}
+
+static ssize_t offloading_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ bool enable;
+ int ret;
+
+ ret = kstrtobool(buf, &enable);
+ if (ret)
+ return ret;
+
+ mutex_lock(&dcbm_mutex);
+
+ if (enable == offloading_enabled)
+ goto out;
+
+ if (enable) {
+ ret = migrate_offload_start(&dma_migrator);
+ if (ret) {
+ mutex_unlock(&dcbm_mutex);
+ return ret;
+ }
+ offloading_enabled = true;
+ } else {
+ migrate_offload_stop(&dma_migrator);
+ offloading_enabled = false;
+ }
+out:
+ mutex_unlock(&dcbm_mutex);
+ return count;
+}
+
+static ssize_t folios_migrated_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%llu\n", folios_migrated);
+}
+
+static ssize_t folios_migrated_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ mutex_lock(&dcbm_mutex);
+ folios_migrated = 0;
+ mutex_unlock(&dcbm_mutex);
+ return count;
+}
+
+static ssize_t folios_failures_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%llu\n", folios_failures);
+}
+
+static ssize_t folios_failures_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ mutex_lock(&dcbm_mutex);
+ folios_failures = 0;
+ mutex_unlock(&dcbm_mutex);
+ return count;
+}
+
+static ssize_t nr_dma_chan_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%u\n", nr_dma_channels);
+}
+
+static ssize_t nr_dma_chan_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int val;
+ int ret;
+
+ ret = kstrtouint(buf, 0, &val);
+ if (ret)
+ return ret;
+
+ if (val < 1 || val > MAX_DMA_CHANNELS)
+ return -EINVAL;
+
+ mutex_lock(&dcbm_mutex);
+ nr_dma_channels = val;
+ mutex_unlock(&dcbm_mutex);
+ return count;
+}
+
+static struct kobj_attribute offloading_attr = __ATTR_RW(offloading);
+static struct kobj_attribute nr_dma_chan_attr = __ATTR_RW(nr_dma_chan);
+static struct kobj_attribute folios_migrated_attr = __ATTR_RW(folios_migrated);
+static struct kobj_attribute folios_failures_attr = __ATTR_RW(folios_failures);
+
+static struct attribute *dcbm_attrs[] = {
+ &offloading_attr.attr,
+ &nr_dma_chan_attr.attr,
+ &folios_migrated_attr.attr,
+ &folios_failures_attr.attr,
+ NULL
+};
+ATTRIBUTE_GROUPS(dcbm);
+
+static struct kobject *dcbm_kobj;
+
+static int __init dcbm_init(void)
+{
+ int ret;
+
+ dcbm_kobj = kobject_create_and_add("dcbm", kernel_kobj);
+ if (!dcbm_kobj)
+ return -ENOMEM;
+
+ ret = sysfs_create_groups(dcbm_kobj, dcbm_groups);
+ if (ret) {
+ kobject_put(dcbm_kobj);
+ return ret;
+ }
+
+ pr_info("dcbm: DMA Core Batch Migrator initialized\n");
+ return 0;
+}
+
+static void __exit dcbm_exit(void)
+{
+ mutex_lock(&dcbm_mutex);
+ if (offloading_enabled) {
+ migrate_offload_stop(&dma_migrator);
+ offloading_enabled = false;
+ }
+ mutex_unlock(&dcbm_mutex);
+
+ sysfs_remove_groups(dcbm_kobj, dcbm_groups);
+ kobject_put(dcbm_kobj);
+ pr_info("dcbm: DMA Core Batch Migrator unloaded\n");
+}
+
+module_init(dcbm_init);
+module_exit(dcbm_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Shivank Garg");
+MODULE_DESCRIPTION("DMA Core Batch Migrator");
--
2.43.0
Thread overview: 27+ messages
2026-03-09 12:07 [RFC PATCH v4 0/6] Accelerate page migration with batch copying and hardware offload Shivank Garg
2026-03-09 12:07 ` [RFC PATCH v4 1/6] mm: introduce folios_mc_copy() for batch folio copying Shivank Garg
2026-03-12 9:41 ` David Hildenbrand (Arm)
2026-03-15 18:09 ` Garg, Shivank
2026-03-09 12:07 ` [RFC PATCH v4 2/6] mm/migrate: skip data copy for already-copied folios Shivank Garg
2026-03-12 9:44 ` David Hildenbrand (Arm)
2026-03-15 18:25 ` Garg, Shivank
2026-03-23 12:20 ` David Hildenbrand (Arm)
2026-03-24 8:22 ` Huang, Ying
2026-04-03 11:08 ` Garg, Shivank
2026-04-07 6:52 ` Huang, Ying
2026-03-09 12:07 ` [RFC PATCH v4 3/6] mm/migrate: add batch-copy path in migrate_pages_batch Shivank Garg
2026-03-24 8:42 ` Huang, Ying
2026-04-03 11:09 ` Garg, Shivank
2026-03-09 12:07 ` [RFC PATCH v4 4/6] mm/migrate: add copy offload registration infrastructure Shivank Garg
2026-03-09 17:54 ` Gregory Price
2026-03-10 10:07 ` Garg, Shivank
2026-03-24 10:54 ` Huang, Ying
2026-04-03 11:11 ` Garg, Shivank
2026-04-07 7:40 ` Huang, Ying
2026-03-09 12:07 ` Shivank Garg [this message]
2026-03-09 18:04 ` [RFC PATCH v4 5/6] drivers/migrate_offload: add DMA batch copy driver (dcbm) Gregory Price
2026-03-12 9:33 ` Garg, Shivank
2026-03-24 8:10 ` Huang, Ying
2026-04-03 11:06 ` Garg, Shivank
2026-03-09 12:07 ` [RFC PATCH v4 6/6] mm/migrate: adjust NR_MAX_BATCHED_MIGRATION for testing Shivank Garg
2026-03-18 14:29 ` [RFC PATCH v4 0/6] Accelerate page migration with batch copying and hardware offload Garg, Shivank