From: Andrew Morton <akpm@linux-foundation.org>
To: airlied@linux.ie, akpm@linux-foundation.org,
	ard.biesheuvel@linaro.org, ardb@kernel.org,
	benh@kernel.crashing.org, bhelgaas@google.com,
	boris.ostrovsky@oracle.com, bp@alien8.de, Brice.Goglin@inria.fr,
	bskeggs@redhat.com, catalin.marinas@arm.com,
	dan.carpenter@oracle.com, dan.j.williams@intel.com,
	daniel@ffwll.ch, dave.hansen@linux.intel.com,
	dave.jiang@intel.com, david@redhat.com,
	gregkh@linuxfoundation.org, hpa@zytor.com, hulkci@huawei.com,
	ira.weiny@intel.com, jgg@mellanox.com, jglisse@redhat.com,
	jgross@suse.com, jmoyer@redhat.com, joao.m.martins@oracle.com,
	Jonathan.Cameron@huawei.com, justin.he@arm.com,
	linux-mm@kvack.org, lkp@intel.com, luto@kernel.org,
	mingo@redhat.com, mm-commits@vger.kernel.org, mpe@ellerman.id.au,
	pasha.tatashin@soleen.com, paulus@ozlabs.org,
	peterz@infradead.org, rafael.j.wysocki@intel.com,
	rdunlap@infradead.org, richard.weiyang@linux.alibaba.com,
	rppt@linux.ibm.com, sstabellini@kernel.org, tglx@linutronix.de,
	thomas.lendacky@amd.com, torvalds@linux-foundation.org,
	vgoyal@redhat.com, vishal.l.verma@intel.com, will@kernel.org,
	yanaijie@huawei.com
Subject: [patch 044/181] mm/memremap_pages: convert to 'struct range'
Date: Tue, 13 Oct 2020 16:50:29 -0700
Message-ID: <20201013235029.X5kgzScuh%akpm@linux-foundation.org>
In-Reply-To: <20201013164658.3bfd96cc224d8923e66a9f4e@linux-foundation.org>

From: Dan Williams <dan.j.williams@intel.com>
Subject: mm/memremap_pages: convert to 'struct range'

The 'struct resource' in 'struct dev_pagemap' is only used for holding
resource span information.  The other fields ('name', 'flags', 'desc',
'parent', 'sibling', and 'child') are all unused, wasted space.
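
For reference, the replacement type carries nothing but the span plus an
inclusive-end length helper.  A minimal stand-alone sketch (user-space
illustration only, with uint64_t standing in for the kernel's u64; the
authoritative definitions are in the include/linux/range.h hunk below):

	#include <stdint.h>
	#include <stdio.h>

	/* same shape as the kernel's 'struct range': just the span */
	struct range {
		uint64_t start;
		uint64_t end;		/* inclusive */
	};

	/* mirrors range_len() from the hunk below: inclusive end, so +1 */
	static inline uint64_t range_len(const struct range *range)
	{
		return range->end - range->start + 1;
	}

	int main(void)
	{
		struct range r = {
			.start = 0x100000000ULL,
			.end   = 0x1ffffffffULL,
		};

		/* prints "len = 0x100000000", i.e. a 4GiB span */
		printf("len = %#llx\n", (unsigned long long)range_len(&r));
		return 0;
	}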

This is in preparation for introducing a multi-range extension of
devm_memremap_pages().

The bulk of this change is unwinding all the places internal to libnvdimm
that used 'struct resource' unnecessarily, and replacing instances of
'struct dev_pagemap'.res with 'struct dev_pagemap'.range.
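
The recurring call-site pattern is mechanical: 'res.start' becomes
'range.start', and resource_size() becomes range_len().  A condensed
before/after sketch (variable names illustrative; see the
book3s_hv_uvmem.c and test_hmm.c hunks below for real instances):

	/* before: span accessors on 'struct dev_pagemap'.res */
	pfn_first = pgmap->res.start >> PAGE_SHIFT;
	nr_pfns = resource_size(&pgmap->res) >> PAGE_SHIFT;

	/* after: same arithmetic on 'struct dev_pagemap'.range */
	pfn_first = pgmap->range.start >> PAGE_SHIFT;
	nr_pfns = range_len(&pgmap->range) >> PAGE_SHIFT;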

P2PDMA had a minor usage of the resource flags field, but only to report
failures with "%pR".  That is replaced with an open-coded print of the
range.
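
Condensed from the p2pdma hunk below, the reporting change amounts to:

	/* before: "%pR" required the full 'struct resource' */
	pci_info(pdev, "added peer-to-peer DMA memory %pR\n", &pgmap->res);

	/* after: open-coded print of the bare span */
	pci_info(pdev, "added peer-to-peer DMA memory %#llx-%#llx\n",
		 pgmap->range.start, pgmap->range.end);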

[dan.carpenter@oracle.com: mm/hmm/test: use after free in dmirror_allocate_chunk()]
  Link: https://lkml.kernel.org/r/20200926121402.GA7467@kadam
Link: https://lkml.kernel.org/r/159643103173.4062302.768998885691711532.stgit@dwillia2-desk3.amr.corp.intel.com
Link: https://lkml.kernel.org/r/160106115761.30709.13539840236873663620.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>	[xen]
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brice Goglin <Brice.Goglin@inria.fr>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hulk Robot <hulkci@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jason Yan <yanaijie@huawei.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Jia He <justin.he@arm.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/powerpc/kvm/book3s_hv_uvmem.c     |   13 ++-
 drivers/dax/bus.c                      |   10 +-
 drivers/dax/bus.h                      |    2 
 drivers/dax/dax-private.h              |    5 -
 drivers/dax/device.c                   |    3 
 drivers/dax/hmem/hmem.c                |    5 +
 drivers/dax/pmem/core.c                |   12 +--
 drivers/gpu/drm/nouveau/nouveau_dmem.c |   14 ++--
 drivers/nvdimm/badrange.c              |   26 +++----
 drivers/nvdimm/claim.c                 |   13 ++-
 drivers/nvdimm/nd.h                    |    3 
 drivers/nvdimm/pfn_devs.c              |   12 +--
 drivers/nvdimm/pmem.c                  |   26 ++++---
 drivers/nvdimm/region.c                |   21 +++---
 drivers/pci/p2pdma.c                   |   11 +--
 drivers/xen/unpopulated-alloc.c        |   44 ++++++++-----
 include/linux/memremap.h               |    5 -
 include/linux/range.h                  |    6 +
 lib/test_hmm.c                         |   50 +++++++-------
 mm/memremap.c                          |   77 +++++++++++------------
 tools/testing/nvdimm/test/iomap.c      |    2 
 21 files changed, 195 insertions(+), 165 deletions(-)

--- a/arch/powerpc/kvm/book3s_hv_uvmem.c~mm-memremap_pages-convert-to-struct-range
+++ a/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -687,9 +687,9 @@ static struct page *kvmppc_uvmem_get_pag
 	struct kvmppc_uvmem_page_pvt *pvt;
 	unsigned long pfn_last, pfn_first;
 
-	pfn_first = kvmppc_uvmem_pgmap.res.start >> PAGE_SHIFT;
+	pfn_first = kvmppc_uvmem_pgmap.range.start >> PAGE_SHIFT;
 	pfn_last = pfn_first +
-		   (resource_size(&kvmppc_uvmem_pgmap.res) >> PAGE_SHIFT);
+		   (range_len(&kvmppc_uvmem_pgmap.range) >> PAGE_SHIFT);
 
 	spin_lock(&kvmppc_uvmem_bitmap_lock);
 	bit = find_first_zero_bit(kvmppc_uvmem_bitmap,
@@ -1007,7 +1007,7 @@ static vm_fault_t kvmppc_uvmem_migrate_t
 static void kvmppc_uvmem_page_free(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page) -
-			(kvmppc_uvmem_pgmap.res.start >> PAGE_SHIFT);
+			(kvmppc_uvmem_pgmap.range.start >> PAGE_SHIFT);
 	struct kvmppc_uvmem_page_pvt *pvt;
 
 	spin_lock(&kvmppc_uvmem_bitmap_lock);
@@ -1170,7 +1170,8 @@ int kvmppc_uvmem_init(void)
 	}
 
 	kvmppc_uvmem_pgmap.type = MEMORY_DEVICE_PRIVATE;
-	kvmppc_uvmem_pgmap.res = *res;
+	kvmppc_uvmem_pgmap.range.start = res->start;
+	kvmppc_uvmem_pgmap.range.end = res->end;
 	kvmppc_uvmem_pgmap.ops = &kvmppc_uvmem_ops;
 	/* just one global instance: */
 	kvmppc_uvmem_pgmap.owner = &kvmppc_uvmem_pgmap;
@@ -1205,7 +1206,7 @@ void kvmppc_uvmem_free(void)
 		return;
 
 	memunmap_pages(&kvmppc_uvmem_pgmap);
-	release_mem_region(kvmppc_uvmem_pgmap.res.start,
-			   resource_size(&kvmppc_uvmem_pgmap.res));
+	release_mem_region(kvmppc_uvmem_pgmap.range.start,
+			   range_len(&kvmppc_uvmem_pgmap.range));
 	kfree(kvmppc_uvmem_bitmap);
 }
--- a/drivers/dax/bus.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/dax/bus.c
@@ -515,7 +515,7 @@ static void dax_region_unregister(void *
 }
 
 struct dax_region *alloc_dax_region(struct device *parent, int region_id,
-		struct resource *res, int target_node, unsigned int align,
+		struct range *range, int target_node, unsigned int align,
 		unsigned long flags)
 {
 	struct dax_region *dax_region;
@@ -530,8 +530,8 @@ struct dax_region *alloc_dax_region(stru
 		return NULL;
 	}
 
-	if (!IS_ALIGNED(res->start, align)
-			|| !IS_ALIGNED(resource_size(res), align))
+	if (!IS_ALIGNED(range->start, align)
+			|| !IS_ALIGNED(range_len(range), align))
 		return NULL;
 
 	dax_region = kzalloc(sizeof(*dax_region), GFP_KERNEL);
@@ -546,8 +546,8 @@ struct dax_region *alloc_dax_region(stru
 	dax_region->target_node = target_node;
 	ida_init(&dax_region->ida);
 	dax_region->res = (struct resource) {
-		.start = res->start,
-		.end = res->end,
+		.start = range->start,
+		.end = range->end,
 		.flags = IORESOURCE_MEM | flags,
 	};
 
--- a/drivers/dax/bus.h~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/dax/bus.h
@@ -13,7 +13,7 @@ void dax_region_put(struct dax_region *d
 
 #define IORESOURCE_DAX_STATIC (1UL << 0)
 struct dax_region *alloc_dax_region(struct device *parent, int region_id,
-		struct resource *res, int target_node, unsigned int align,
+		struct range *range, int target_node, unsigned int align,
 		unsigned long flags);
 
 enum dev_dax_subsys {
--- a/drivers/dax/dax-private.h~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/dax/dax-private.h
@@ -61,11 +61,6 @@ struct dev_dax {
 	struct range range;
 };
 
-static inline u64 range_len(struct range *range)
-{
-	return range->end - range->start + 1;
-}
-
 static inline struct dev_dax *to_dev_dax(struct device *dev)
 {
 	return container_of(dev, struct dev_dax, dev);
--- a/drivers/dax/device.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/dax/device.c
@@ -416,8 +416,7 @@ int dev_dax_probe(struct dev_dax *dev_da
 		pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
 		if (!pgmap)
 			return -ENOMEM;
-		pgmap->res.start = range->start;
-		pgmap->res.end = range->end;
+		pgmap->range = *range;
 	}
 	pgmap->type = MEMORY_DEVICE_GENERIC;
 	addr = devm_memremap_pages(dev, pgmap);
--- a/drivers/dax/hmem/hmem.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/dax/hmem/hmem.c
@@ -13,13 +13,16 @@ static int dax_hmem_probe(struct platfor
 	struct dev_dax_data data;
 	struct dev_dax *dev_dax;
 	struct resource *res;
+	struct range range;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!res)
 		return -ENOMEM;
 
 	mri = dev->platform_data;
-	dax_region = alloc_dax_region(dev, pdev->id, res, mri->target_node,
+	range.start = res->start;
+	range.end = res->end;
+	dax_region = alloc_dax_region(dev, pdev->id, &range, mri->target_node,
 			PMD_SIZE, 0);
 	if (!dax_region)
 		return -ENOMEM;
--- a/drivers/dax/pmem/core.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/dax/pmem/core.c
@@ -9,7 +9,7 @@
 
 struct dev_dax *__dax_pmem_probe(struct device *dev, enum dev_dax_subsys subsys)
 {
-	struct resource res;
+	struct range range;
 	int rc, id, region_id;
 	resource_size_t offset;
 	struct nd_pfn_sb *pfn_sb;
@@ -50,10 +50,10 @@ struct dev_dax *__dax_pmem_probe(struct
 	if (rc != 2)
 		return ERR_PTR(-EINVAL);
 
-	/* adjust the dax_region resource to the start of data */
-	memcpy(&res, &pgmap.res, sizeof(res));
-	res.start += offset;
-	dax_region = alloc_dax_region(dev, region_id, &res,
+	/* adjust the dax_region range to the start of data */
+	range = pgmap.range;
+	range.start += offset;
+	dax_region = alloc_dax_region(dev, region_id, &range,
 			nd_region->target_node, le32_to_cpu(pfn_sb->align),
 			IORESOURCE_DAX_STATIC);
 	if (!dax_region)
@@ -64,7 +64,7 @@ struct dev_dax *__dax_pmem_probe(struct
 		.id = id,
 		.pgmap = &pgmap,
 		.subsys = subsys,
-		.size = resource_size(&res),
+		.size = range_len(&range),
 	};
 	dev_dax = devm_create_dev_dax(&data);
 
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -101,7 +101,7 @@ unsigned long nouveau_dmem_page_addr(str
 {
 	struct nouveau_dmem_chunk *chunk = nouveau_page_to_chunk(page);
 	unsigned long off = (page_to_pfn(page) << PAGE_SHIFT) -
-				chunk->pagemap.res.start;
+				chunk->pagemap.range.start;
 
 	return chunk->bo->offset + off;
 }
@@ -249,7 +249,8 @@ nouveau_dmem_chunk_alloc(struct nouveau_
 
 	chunk->drm = drm;
 	chunk->pagemap.type = MEMORY_DEVICE_PRIVATE;
-	chunk->pagemap.res = *res;
+	chunk->pagemap.range.start = res->start;
+	chunk->pagemap.range.end = res->end;
 	chunk->pagemap.ops = &nouveau_dmem_pagemap_ops;
 	chunk->pagemap.owner = drm->dev;
 
@@ -273,7 +274,7 @@ nouveau_dmem_chunk_alloc(struct nouveau_
 	list_add(&chunk->list, &drm->dmem->chunks);
 	mutex_unlock(&drm->dmem->mutex);
 
-	pfn_first = chunk->pagemap.res.start >> PAGE_SHIFT;
+	pfn_first = chunk->pagemap.range.start >> PAGE_SHIFT;
 	page = pfn_to_page(pfn_first);
 	spin_lock(&drm->dmem->lock);
 	for (i = 0; i < DMEM_CHUNK_NPAGES - 1; ++i, ++page) {
@@ -294,8 +295,7 @@ out_bo_unpin:
 out_bo_free:
 	nouveau_bo_ref(NULL, &chunk->bo);
 out_release:
-	release_mem_region(chunk->pagemap.res.start,
-			   resource_size(&chunk->pagemap.res));
+	release_mem_region(chunk->pagemap.range.start, range_len(&chunk->pagemap.range));
 out_free:
 	kfree(chunk);
 out:
@@ -382,8 +382,8 @@ nouveau_dmem_fini(struct nouveau_drm *dr
 		nouveau_bo_ref(NULL, &chunk->bo);
 		list_del(&chunk->list);
 		memunmap_pages(&chunk->pagemap);
-		release_mem_region(chunk->pagemap.res.start,
-				   resource_size(&chunk->pagemap.res));
+		release_mem_region(chunk->pagemap.range.start,
+				   range_len(&chunk->pagemap.range));
 		kfree(chunk);
 	}
 
--- a/drivers/nvdimm/badrange.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/nvdimm/badrange.c
@@ -211,7 +211,7 @@ static void __add_badblock_range(struct
 }
 
 static void badblocks_populate(struct badrange *badrange,
-		struct badblocks *bb, const struct resource *res)
+		struct badblocks *bb, const struct range *range)
 {
 	struct badrange_entry *bre;
 
@@ -222,34 +222,34 @@ static void badblocks_populate(struct ba
 		u64 bre_end = bre->start + bre->length - 1;
 
 		/* Discard intervals with no intersection */
-		if (bre_end < res->start)
+		if (bre_end < range->start)
 			continue;
-		if (bre->start >  res->end)
+		if (bre->start > range->end)
 			continue;
 		/* Deal with any overlap after start of the namespace */
-		if (bre->start >= res->start) {
+		if (bre->start >= range->start) {
 			u64 start = bre->start;
 			u64 len;
 
-			if (bre_end <= res->end)
+			if (bre_end <= range->end)
 				len = bre->length;
 			else
-				len = res->start + resource_size(res)
+				len = range->start + range_len(range)
 					- bre->start;
-			__add_badblock_range(bb, start - res->start, len);
+			__add_badblock_range(bb, start - range->start, len);
 			continue;
 		}
 		/*
 		 * Deal with overlap for badrange starting before
 		 * the namespace.
 		 */
-		if (bre->start < res->start) {
+		if (bre->start < range->start) {
 			u64 len;
 
-			if (bre_end < res->end)
-				len = bre->start + bre->length - res->start;
+			if (bre_end < range->end)
+				len = bre->start + bre->length - range->start;
 			else
-				len = resource_size(res);
+				len = range_len(range);
 			__add_badblock_range(bb, 0, len);
 		}
 	}
@@ -267,7 +267,7 @@ static void badblocks_populate(struct ba
  * and add badblocks entries for all matching sub-ranges
  */
 void nvdimm_badblocks_populate(struct nd_region *nd_region,
-		struct badblocks *bb, const struct resource *res)
+		struct badblocks *bb, const struct range *range)
 {
 	struct nvdimm_bus *nvdimm_bus;
 
@@ -279,7 +279,7 @@ void nvdimm_badblocks_populate(struct nd
 	nvdimm_bus = walk_to_nvdimm_bus(&nd_region->dev);
 
 	nvdimm_bus_lock(&nvdimm_bus->dev);
-	badblocks_populate(&nvdimm_bus->badrange, bb, res);
+	badblocks_populate(&nvdimm_bus->badrange, bb, range);
 	nvdimm_bus_unlock(&nvdimm_bus->dev);
 }
 EXPORT_SYMBOL_GPL(nvdimm_badblocks_populate);
--- a/drivers/nvdimm/claim.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/nvdimm/claim.c
@@ -303,13 +303,16 @@ static int nsio_rw_bytes(struct nd_names
 int devm_nsio_enable(struct device *dev, struct nd_namespace_io *nsio,
 		resource_size_t size)
 {
-	struct resource *res = &nsio->res;
 	struct nd_namespace_common *ndns = &nsio->common;
+	struct range range = {
+		.start = nsio->res.start,
+		.end = nsio->res.end,
+	};
 
 	nsio->size = size;
-	if (!devm_request_mem_region(dev, res->start, size,
+	if (!devm_request_mem_region(dev, range.start, size,
 				dev_name(&ndns->dev))) {
-		dev_warn(dev, "could not reserve region %pR\n", res);
+		dev_warn(dev, "could not reserve region %pR\n", &nsio->res);
 		return -EBUSY;
 	}
 
@@ -317,9 +320,9 @@ int devm_nsio_enable(struct device *dev,
 	if (devm_init_badblocks(dev, &nsio->bb))
 		return -ENOMEM;
 	nvdimm_badblocks_populate(to_nd_region(ndns->dev.parent), &nsio->bb,
-			&nsio->res);
+			&range);
 
-	nsio->addr = devm_memremap(dev, res->start, size, ARCH_MEMREMAP_PMEM);
+	nsio->addr = devm_memremap(dev, range.start, size, ARCH_MEMREMAP_PMEM);
 
 	return PTR_ERR_OR_ZERO(nsio->addr);
 }
--- a/drivers/nvdimm/nd.h~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/nvdimm/nd.h
@@ -377,8 +377,9 @@ int nvdimm_namespace_detach_btt(struct n
 const char *nvdimm_namespace_disk_name(struct nd_namespace_common *ndns,
 		char *name);
 unsigned int pmem_sector_size(struct nd_namespace_common *ndns);
+struct range;
 void nvdimm_badblocks_populate(struct nd_region *nd_region,
-		struct badblocks *bb, const struct resource *res);
+		struct badblocks *bb, const struct range *range);
 int devm_namespace_enable(struct device *dev, struct nd_namespace_common *ndns,
 		resource_size_t size);
 void devm_namespace_disable(struct device *dev,
--- a/drivers/nvdimm/pfn_devs.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/nvdimm/pfn_devs.c
@@ -672,7 +672,7 @@ static unsigned long init_altmap_reserve
 
 static int __nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap)
 {
-	struct resource *res = &pgmap->res;
+	struct range *range = &pgmap->range;
 	struct vmem_altmap *altmap = &pgmap->altmap;
 	struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb;
 	u64 offset = le64_to_cpu(pfn_sb->dataoff);
@@ -689,16 +689,16 @@ static int __nvdimm_setup_pfn(struct nd_
 		.end_pfn = PHYS_PFN(end),
 	};
 
-	memcpy(res, &nsio->res, sizeof(*res));
-	res->start += start_pad;
-	res->end -= end_trunc;
-
+	*range = (struct range) {
+		.start = nsio->res.start + start_pad,
+		.end = nsio->res.end - end_trunc,
+	};
 	if (nd_pfn->mode == PFN_MODE_RAM) {
 		if (offset < reserve)
 			return -EINVAL;
 		nd_pfn->npfns = le64_to_cpu(pfn_sb->npfns);
 	} else if (nd_pfn->mode == PFN_MODE_PMEM) {
-		nd_pfn->npfns = PHYS_PFN((resource_size(res) - offset));
+		nd_pfn->npfns = PHYS_PFN((range_len(range) - offset));
 		if (le64_to_cpu(nd_pfn->pfn_sb->npfns) > nd_pfn->npfns)
 			dev_info(&nd_pfn->dev,
 					"number of pfns truncated from %lld to %ld\n",
--- a/drivers/nvdimm/pmem.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/nvdimm/pmem.c
@@ -375,7 +375,7 @@ static int pmem_attach_disk(struct devic
 	struct nd_region *nd_region = to_nd_region(dev->parent);
 	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
-	struct resource bb_res;
+	struct range bb_range;
 	struct nd_pfn *nd_pfn = NULL;
 	struct dax_device *dax_dev;
 	struct nd_pfn_sb *pfn_sb;
@@ -434,24 +434,26 @@ static int pmem_attach_disk(struct devic
 		pfn_sb = nd_pfn->pfn_sb;
 		pmem->data_offset = le64_to_cpu(pfn_sb->dataoff);
 		pmem->pfn_pad = resource_size(res) -
-			resource_size(&pmem->pgmap.res);
+			range_len(&pmem->pgmap.range);
 		pmem->pfn_flags |= PFN_MAP;
-		memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
-		bb_res.start += pmem->data_offset;
+		bb_range = pmem->pgmap.range;
+		bb_range.start += pmem->data_offset;
 	} else if (pmem_should_map_pages(dev)) {
-		memcpy(&pmem->pgmap.res, &nsio->res, sizeof(pmem->pgmap.res));
+		pmem->pgmap.range.start = res->start;
+		pmem->pgmap.range.end = res->end;
 		pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
 		pmem->pgmap.ops = &fsdax_pagemap_ops;
 		addr = devm_memremap_pages(dev, &pmem->pgmap);
 		pmem->pfn_flags |= PFN_MAP;
-		memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
+		bb_range = pmem->pgmap.range;
 	} else {
 		if (devm_add_action_or_reset(dev, pmem_release_queue,
 					&pmem->pgmap))
 			return -ENOMEM;
 		addr = devm_memremap(dev, pmem->phys_addr,
 				pmem->size, ARCH_MEMREMAP_PMEM);
-		memcpy(&bb_res, &nsio->res, sizeof(bb_res));
+	bb_range.start = res->start;
+		bb_range.end = res->end;
 	}
 
 	if (IS_ERR(addr))
@@ -480,7 +482,7 @@ static int pmem_attach_disk(struct devic
 			/ 512);
 	if (devm_init_badblocks(dev, &pmem->bb))
 		return -ENOMEM;
-	nvdimm_badblocks_populate(nd_region, &pmem->bb, &bb_res);
+	nvdimm_badblocks_populate(nd_region, &pmem->bb, &bb_range);
 	disk->bb = &pmem->bb;
 
 	if (is_nvdimm_sync(nd_region))
@@ -591,8 +593,8 @@ static void nd_pmem_notify(struct device
 	resource_size_t offset = 0, end_trunc = 0;
 	struct nd_namespace_common *ndns;
 	struct nd_namespace_io *nsio;
-	struct resource res;
 	struct badblocks *bb;
+	struct range range;
 	struct kernfs_node *bb_state;
 
 	if (event != NVDIMM_REVALIDATE_POISON)
@@ -628,9 +630,9 @@ static void nd_pmem_notify(struct device
 		nsio = to_nd_namespace_io(&ndns->dev);
 	}
 
-	res.start = nsio->res.start + offset;
-	res.end = nsio->res.end - end_trunc;
-	nvdimm_badblocks_populate(nd_region, bb, &res);
+	range.start = nsio->res.start + offset;
+	range.end = nsio->res.end - end_trunc;
+	nvdimm_badblocks_populate(nd_region, bb, &range);
 	if (bb_state)
 		sysfs_notify_dirent(bb_state);
 }
--- a/drivers/nvdimm/region.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/nvdimm/region.c
@@ -35,7 +35,10 @@ static int nd_region_probe(struct device
 		return rc;
 
 	if (is_memory(&nd_region->dev)) {
-		struct resource ndr_res;
+		struct range range = {
+			.start = nd_region->ndr_start,
+			.end = nd_region->ndr_start + nd_region->ndr_size - 1,
+		};
 
 		if (devm_init_badblocks(dev, &nd_region->bb))
 			return -ENODEV;
@@ -44,9 +47,7 @@ static int nd_region_probe(struct device
 		if (!nd_region->bb_state)
 			dev_warn(&nd_region->dev,
 					"'badblocks' notification disabled\n");
-		ndr_res.start = nd_region->ndr_start;
-		ndr_res.end = nd_region->ndr_start + nd_region->ndr_size - 1;
-		nvdimm_badblocks_populate(nd_region, &nd_region->bb, &ndr_res);
+		nvdimm_badblocks_populate(nd_region, &nd_region->bb, &range);
 	}
 
 	rc = nd_region_register_namespaces(nd_region, &err);
@@ -121,14 +122,16 @@ static void nd_region_notify(struct devi
 {
 	if (event == NVDIMM_REVALIDATE_POISON) {
 		struct nd_region *nd_region = to_nd_region(dev);
-		struct resource res;
 
 		if (is_memory(&nd_region->dev)) {
-			res.start = nd_region->ndr_start;
-			res.end = nd_region->ndr_start +
-				nd_region->ndr_size - 1;
+			struct range range = {
+				.start = nd_region->ndr_start,
+				.end = nd_region->ndr_start +
+					nd_region->ndr_size - 1,
+			};
+
 			nvdimm_badblocks_populate(nd_region,
-					&nd_region->bb, &res);
+					&nd_region->bb, &range);
 			if (nd_region->bb_state)
 				sysfs_notify_dirent(nd_region->bb_state);
 		}
--- a/drivers/pci/p2pdma.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/pci/p2pdma.c
@@ -185,9 +185,8 @@ int pci_p2pdma_add_resource(struct pci_d
 		return -ENOMEM;
 
 	pgmap = &p2p_pgmap->pgmap;
-	pgmap->res.start = pci_resource_start(pdev, bar) + offset;
-	pgmap->res.end = pgmap->res.start + size - 1;
-	pgmap->res.flags = pci_resource_flags(pdev, bar);
+	pgmap->range.start = pci_resource_start(pdev, bar) + offset;
+	pgmap->range.end = pgmap->range.start + size - 1;
 	pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
 
 	p2p_pgmap->provider = pdev;
@@ -202,13 +201,13 @@ int pci_p2pdma_add_resource(struct pci_d
 
 	error = gen_pool_add_owner(pdev->p2pdma->pool, (unsigned long)addr,
 			pci_bus_address(pdev, bar) + offset,
-			resource_size(&pgmap->res), dev_to_node(&pdev->dev),
+			range_len(&pgmap->range), dev_to_node(&pdev->dev),
 			pgmap->ref);
 	if (error)
 		goto pages_free;
 
-	pci_info(pdev, "added peer-to-peer DMA memory %pR\n",
-		 &pgmap->res);
+	pci_info(pdev, "added peer-to-peer DMA memory %#llx-%#llx\n",
+		 pgmap->range.start, pgmap->range.end);
 
 	return 0;
 
--- a/drivers/xen/unpopulated-alloc.c~mm-memremap_pages-convert-to-struct-range
+++ a/drivers/xen/unpopulated-alloc.c
@@ -18,27 +18,37 @@ static unsigned int list_count;
 static int fill_list(unsigned int nr_pages)
 {
 	struct dev_pagemap *pgmap;
+	struct resource *res;
 	void *vaddr;
 	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
-	int ret;
+	int ret = -ENOMEM;
+
+	res = kzalloc(sizeof(*res), GFP_KERNEL);
+	if (!res)
+		return -ENOMEM;
 
 	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
 	if (!pgmap)
-		return -ENOMEM;
+		goto err_pgmap;
 
 	pgmap->type = MEMORY_DEVICE_GENERIC;
-	pgmap->res.name = "Xen scratch";
-	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+	res->name = "Xen scratch";
+	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
 
-	ret = allocate_resource(&iomem_resource, &pgmap->res,
+	ret = allocate_resource(&iomem_resource, res,
 				alloc_pages * PAGE_SIZE, 0, -1,
 				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
 	if (ret < 0) {
 		pr_err("Cannot allocate new IOMEM resource\n");
-		kfree(pgmap);
-		return ret;
+		goto err_resource;
 	}
 
+	pgmap->range = (struct range) {
+		.start = res->start,
+		.end = res->end,
+	};
+	pgmap->owner = res;
+
 #ifdef CONFIG_XEN_HAVE_PVMMU
         /*
          * memremap will build page tables for the new memory so
@@ -50,14 +60,13 @@ static int fill_list(unsigned int nr_pag
          * conflict with any devices.
          */
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
+		xen_pfn_t pfn = PFN_DOWN(res->start);
 
 		for (i = 0; i < alloc_pages; i++) {
 			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
 				pr_warn("set_phys_to_machine() failed, no memory added\n");
-				release_resource(&pgmap->res);
-				kfree(pgmap);
-				return -ENOMEM;
+				ret = -ENOMEM;
+				goto err_memremap;
 			}
                 }
 	}
@@ -66,9 +75,8 @@ static int fill_list(unsigned int nr_pag
 	vaddr = memremap_pages(pgmap, NUMA_NO_NODE);
 	if (IS_ERR(vaddr)) {
 		pr_err("Cannot remap memory range\n");
-		release_resource(&pgmap->res);
-		kfree(pgmap);
-		return PTR_ERR(vaddr);
+		ret = PTR_ERR(vaddr);
+		goto err_memremap;
 	}
 
 	for (i = 0; i < alloc_pages; i++) {
@@ -80,6 +88,14 @@ static int fill_list(unsigned int nr_pag
 	}
 
 	return 0;
+
+err_memremap:
+	release_resource(res);
+err_resource:
+	kfree(pgmap);
+err_pgmap:
+	kfree(res);
+	return ret;
 }
 
 /**
--- a/include/linux/memremap.h~mm-memremap_pages-convert-to-struct-range
+++ a/include/linux/memremap.h
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef _LINUX_MEMREMAP_H_
 #define _LINUX_MEMREMAP_H_
+#include <linux/range.h>
 #include <linux/ioport.h>
 #include <linux/percpu-refcount.h>
 
@@ -93,7 +94,7 @@ struct dev_pagemap_ops {
 /**
  * struct dev_pagemap - metadata for ZONE_DEVICE mappings
  * @altmap: pre-allocated/reserved memory for vmemmap allocations
- * @res: physical address range covered by @ref
+ * @range: physical address range covered by @ref
  * @ref: reference count that pins the devm_memremap_pages() mapping
  * @internal_ref: internal reference if @ref is not provided by the caller
  * @done: completion for @internal_ref
@@ -106,7 +107,7 @@ struct dev_pagemap_ops {
  */
 struct dev_pagemap {
 	struct vmem_altmap altmap;
-	struct resource res;
+	struct range range;
 	struct percpu_ref *ref;
 	struct percpu_ref internal_ref;
 	struct completion done;
--- a/include/linux/range.h~mm-memremap_pages-convert-to-struct-range
+++ a/include/linux/range.h
@@ -1,12 +1,18 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef _LINUX_RANGE_H
 #define _LINUX_RANGE_H
+#include <linux/types.h>
 
 struct range {
 	u64   start;
 	u64   end;
 };
 
+static inline u64 range_len(const struct range *range)
+{
+	return range->end - range->start + 1;
+}
+
 int add_range(struct range *range, int az, int nr_range,
 		u64 start, u64 end);
 
--- a/lib/test_hmm.c~mm-memremap_pages-convert-to-struct-range
+++ a/lib/test_hmm.c
@@ -460,6 +460,21 @@ static bool dmirror_allocate_chunk(struc
 	unsigned long pfn_last;
 	void *ptr;
 
+	devmem = kzalloc(sizeof(*devmem), GFP_KERNEL);
+	if (!devmem)
+		return false;
+
+	res = request_free_mem_region(&iomem_resource, DEVMEM_CHUNK_SIZE,
+				      "hmm_dmirror");
+	if (IS_ERR(res))
+		goto err_devmem;
+
+	devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
+	devmem->pagemap.range.start = res->start;
+	devmem->pagemap.range.end = res->end;
+	devmem->pagemap.ops = &dmirror_devmem_ops;
+	devmem->pagemap.owner = mdevice;
+
 	mutex_lock(&mdevice->devmem_lock);
 
 	if (mdevice->devmem_count == mdevice->devmem_capacity) {
@@ -472,33 +487,18 @@ static bool dmirror_allocate_chunk(struc
 				sizeof(new_chunks[0]) * new_capacity,
 				GFP_KERNEL);
 		if (!new_chunks)
-			goto err;
+			goto err_release;
 		mdevice->devmem_capacity = new_capacity;
 		mdevice->devmem_chunks = new_chunks;
 	}
 
-	res = request_free_mem_region(&iomem_resource, DEVMEM_CHUNK_SIZE,
-					"hmm_dmirror");
-	if (IS_ERR(res))
-		goto err;
-
-	devmem = kzalloc(sizeof(*devmem), GFP_KERNEL);
-	if (!devmem)
-		goto err_release;
-
-	devmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
-	devmem->pagemap.res = *res;
-	devmem->pagemap.ops = &dmirror_devmem_ops;
-	devmem->pagemap.owner = mdevice;
-
 	ptr = memremap_pages(&devmem->pagemap, numa_node_id());
 	if (IS_ERR(ptr))
-		goto err_free;
+		goto err_release;
 
 	devmem->mdevice = mdevice;
-	pfn_first = devmem->pagemap.res.start >> PAGE_SHIFT;
-	pfn_last = pfn_first +
-		(resource_size(&devmem->pagemap.res) >> PAGE_SHIFT);
+	pfn_first = devmem->pagemap.range.start >> PAGE_SHIFT;
+	pfn_last = pfn_first + (range_len(&devmem->pagemap.range) >> PAGE_SHIFT);
 	mdevice->devmem_chunks[mdevice->devmem_count++] = devmem;
 
 	mutex_unlock(&mdevice->devmem_lock);
@@ -525,12 +525,12 @@ static bool dmirror_allocate_chunk(struc
 
 	return true;
 
-err_free:
-	kfree(devmem);
 err_release:
-	release_mem_region(res->start, resource_size(res));
-err:
 	mutex_unlock(&mdevice->devmem_lock);
+	release_mem_region(devmem->pagemap.range.start, range_len(&devmem->pagemap.range));
+err_devmem:
+	kfree(devmem);
+
 	return false;
 }
 
@@ -1100,8 +1100,8 @@ static void dmirror_device_remove(struct
 				mdevice->devmem_chunks[i];
 
 			memunmap_pages(&devmem->pagemap);
-			release_mem_region(devmem->pagemap.res.start,
-					   resource_size(&devmem->pagemap.res));
+			release_mem_region(devmem->pagemap.range.start,
+					   range_len(&devmem->pagemap.range));
 			kfree(devmem);
 		}
 		kfree(mdevice->devmem_chunks);
--- a/mm/memremap.c~mm-memremap_pages-convert-to-struct-range
+++ a/mm/memremap.c
@@ -70,24 +70,24 @@ static void devmap_managed_enable_put(vo
 }
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
-static void pgmap_array_delete(struct resource *res)
+static void pgmap_array_delete(struct range *range)
 {
-	xa_store_range(&pgmap_array, PHYS_PFN(res->start), PHYS_PFN(res->end),
+	xa_store_range(&pgmap_array, PHYS_PFN(range->start), PHYS_PFN(range->end),
 			NULL, GFP_KERNEL);
 	synchronize_rcu();
 }
 
 static unsigned long pfn_first(struct dev_pagemap *pgmap)
 {
-	return PHYS_PFN(pgmap->res.start) +
+	return PHYS_PFN(pgmap->range.start) +
 		vmem_altmap_offset(pgmap_altmap(pgmap));
 }
 
 static unsigned long pfn_end(struct dev_pagemap *pgmap)
 {
-	const struct resource *res = &pgmap->res;
+	const struct range *range = &pgmap->range;
 
-	return (res->start + resource_size(res)) >> PAGE_SHIFT;
+	return (range->start + range_len(range)) >> PAGE_SHIFT;
 }
 
 static unsigned long pfn_next(unsigned long pfn)
@@ -126,7 +126,7 @@ static void dev_pagemap_cleanup(struct d
 
 void memunmap_pages(struct dev_pagemap *pgmap)
 {
-	struct resource *res = &pgmap->res;
+	struct range *range = &pgmap->range;
 	struct page *first_page;
 	unsigned long pfn;
 	int nid;
@@ -143,20 +143,20 @@ void memunmap_pages(struct dev_pagemap *
 	nid = page_to_nid(first_page);
 
 	mem_hotplug_begin();
-	remove_pfn_range_from_zone(page_zone(first_page), PHYS_PFN(res->start),
-				   PHYS_PFN(resource_size(res)));
+	remove_pfn_range_from_zone(page_zone(first_page), PHYS_PFN(range->start),
+				   PHYS_PFN(range_len(range)));
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
-		__remove_pages(PHYS_PFN(res->start),
-			       PHYS_PFN(resource_size(res)), NULL);
+		__remove_pages(PHYS_PFN(range->start),
+			       PHYS_PFN(range_len(range)), NULL);
 	} else {
-		arch_remove_memory(nid, res->start, resource_size(res),
+		arch_remove_memory(nid, range->start, range_len(range),
 				pgmap_altmap(pgmap));
-		kasan_remove_zero_shadow(__va(res->start), resource_size(res));
+		kasan_remove_zero_shadow(__va(range->start), range_len(range));
 	}
 	mem_hotplug_done();
 
-	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
-	pgmap_array_delete(res);
+	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
+	pgmap_array_delete(range);
 	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
 	devmap_managed_enable_put();
 }
@@ -182,7 +182,7 @@ static void dev_pagemap_percpu_release(s
  */
 void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 {
-	struct resource *res = &pgmap->res;
+	struct range *range = &pgmap->range;
 	struct dev_pagemap *conflict_pgmap;
 	struct mhp_params params = {
 		/*
@@ -251,7 +251,7 @@ void *memremap_pages(struct dev_pagemap
 			return ERR_PTR(error);
 	}
 
-	conflict_pgmap = get_dev_pagemap(PHYS_PFN(res->start), NULL);
+	conflict_pgmap = get_dev_pagemap(PHYS_PFN(range->start), NULL);
 	if (conflict_pgmap) {
 		WARN(1, "Conflicting mapping in same section\n");
 		put_dev_pagemap(conflict_pgmap);
@@ -259,7 +259,7 @@ void *memremap_pages(struct dev_pagemap
 		goto err_array;
 	}
 
-	conflict_pgmap = get_dev_pagemap(PHYS_PFN(res->end), NULL);
+	conflict_pgmap = get_dev_pagemap(PHYS_PFN(range->end), NULL);
 	if (conflict_pgmap) {
 		WARN(1, "Conflicting mapping in same section\n");
 		put_dev_pagemap(conflict_pgmap);
@@ -267,26 +267,27 @@ void *memremap_pages(struct dev_pagemap
 		goto err_array;
 	}
 
-	is_ram = region_intersects(res->start, resource_size(res),
+	is_ram = region_intersects(range->start, range_len(range),
 		IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
 
 	if (is_ram != REGION_DISJOINT) {
-		WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__,
-				is_ram == REGION_MIXED ? "mixed" : "ram", res);
+		WARN_ONCE(1, "attempted on %s region %#llx-%#llx\n",
+				is_ram == REGION_MIXED ? "mixed" : "ram",
+				range->start, range->end);
 		error = -ENXIO;
 		goto err_array;
 	}
 
-	error = xa_err(xa_store_range(&pgmap_array, PHYS_PFN(res->start),
-				PHYS_PFN(res->end), pgmap, GFP_KERNEL));
+	error = xa_err(xa_store_range(&pgmap_array, PHYS_PFN(range->start),
+				PHYS_PFN(range->end), pgmap, GFP_KERNEL));
 	if (error)
 		goto err_array;
 
 	if (nid < 0)
 		nid = numa_mem_id();
 
-	error = track_pfn_remap(NULL, &params.pgprot, PHYS_PFN(res->start),
-				0, resource_size(res));
+	error = track_pfn_remap(NULL, &params.pgprot, PHYS_PFN(range->start), 0,
+			range_len(range));
 	if (error)
 		goto err_pfn_remap;
 
@@ -304,16 +305,16 @@ void *memremap_pages(struct dev_pagemap
 	 * arch_add_memory().
 	 */
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
-		error = add_pages(nid, PHYS_PFN(res->start),
-				PHYS_PFN(resource_size(res)), &params);
+		error = add_pages(nid, PHYS_PFN(range->start),
+				PHYS_PFN(range_len(range)), &params);
 	} else {
-		error = kasan_add_zero_shadow(__va(res->start), resource_size(res));
+		error = kasan_add_zero_shadow(__va(range->start), range_len(range));
 		if (error) {
 			mem_hotplug_done();
 			goto err_kasan;
 		}
 
-		error = arch_add_memory(nid, res->start, resource_size(res),
+		error = arch_add_memory(nid, range->start, range_len(range),
 					&params);
 	}
 
@@ -321,8 +322,8 @@ void *memremap_pages(struct dev_pagemap
 		struct zone *zone;
 
 		zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
-		move_pfn_range_to_zone(zone, PHYS_PFN(res->start),
-				PHYS_PFN(resource_size(res)), params.altmap);
+		move_pfn_range_to_zone(zone, PHYS_PFN(range->start),
+				PHYS_PFN(range_len(range)), params.altmap);
 	}
 
 	mem_hotplug_done();
@@ -334,17 +335,17 @@ void *memremap_pages(struct dev_pagemap
 	 * to allow us to do the work while not holding the hotplug lock.
 	 */
 	memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
-				PHYS_PFN(res->start),
-				PHYS_PFN(resource_size(res)), pgmap);
+				PHYS_PFN(range->start),
+				PHYS_PFN(range_len(range)), pgmap);
 	percpu_ref_get_many(pgmap->ref, pfn_end(pgmap) - pfn_first(pgmap));
-	return __va(res->start);
+	return __va(range->start);
 
  err_add_memory:
-	kasan_remove_zero_shadow(__va(res->start), resource_size(res));
+	kasan_remove_zero_shadow(__va(range->start), range_len(range));
  err_kasan:
-	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
+	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
  err_pfn_remap:
-	pgmap_array_delete(res);
+	pgmap_array_delete(range);
  err_array:
 	dev_pagemap_kill(pgmap);
 	dev_pagemap_cleanup(pgmap);
@@ -369,7 +370,7 @@ EXPORT_SYMBOL_GPL(memremap_pages);
  *    'live' on entry and will be killed and reaped at
  *    devm_memremap_pages_release() time, or if this routine fails.
  *
- * 4/ res is expected to be a host memory range that could feasibly be
+ * 4/ range is expected to be a host memory range that could feasibly be
  *    treated as a "System RAM" range, i.e. not a device mmio range, but
  *    this is not enforced.
  */
@@ -426,7 +427,7 @@ struct dev_pagemap *get_dev_pagemap(unsi
 	 * In the cached case we're already holding a live reference.
 	 */
 	if (pgmap) {
-		if (phys >= pgmap->res.start && phys <= pgmap->res.end)
+		if (phys >= pgmap->range.start && phys <= pgmap->range.end)
 			return pgmap;
 		put_dev_pagemap(pgmap);
 	}
--- a/tools/testing/nvdimm/test/iomap.c~mm-memremap_pages-convert-to-struct-range
+++ a/tools/testing/nvdimm/test/iomap.c
@@ -126,7 +126,7 @@ static void dev_pagemap_percpu_release(s
 void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 {
 	int error;
-	resource_size_t offset = pgmap->res.start;
+	resource_size_t offset = pgmap->range.start;
 	struct nfit_test_resource *nfit_res = get_nfit_res(offset);
 
 	if (!nfit_res)
_


