From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>,
Dan Williams <dan.j.williams@intel.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
Baoquan He <bhe@redhat.com>,
Wei Yang <richardw.yang@linux.intel.com>
Subject: [PATCH v1 3/5] virtio-mem: try to merge system ram resources
Date: Fri, 21 Aug 2020 12:34:29 +0200 [thread overview]
Message-ID: <20200821103431.13481-4-david@redhat.com> (raw)
In-Reply-To: <20200821103431.13481-1-david@redhat.com>
virtio-mem adds memory at memory block granularity, so it can remove it
at the same granularity again later and can grow slowly on demand. Adding
a lot of memory this way, however, creates a lot of resources. Resources
are effectively stored in a list-based tree, so having many of them not
only wastes memory, it also makes traversing that tree more expensive and
makes /proc/iomem explode in size (e.g., forcing kexec-tools to manually
merge resources when creating a kdump header).
Before this patch, /proc/iomem looks as follows after hotplugging 2G via
virtio-mem on x86-64:
[...]
100000000-13fffffff : System RAM
140000000-33fffffff : virtio0
  140000000-147ffffff : System RAM (virtio_mem)
  148000000-14fffffff : System RAM (virtio_mem)
  150000000-157ffffff : System RAM (virtio_mem)
  158000000-15fffffff : System RAM (virtio_mem)
  160000000-167ffffff : System RAM (virtio_mem)
  168000000-16fffffff : System RAM (virtio_mem)
  170000000-177ffffff : System RAM (virtio_mem)
  178000000-17fffffff : System RAM (virtio_mem)
  180000000-187ffffff : System RAM (virtio_mem)
  188000000-18fffffff : System RAM (virtio_mem)
  190000000-197ffffff : System RAM (virtio_mem)
  198000000-19fffffff : System RAM (virtio_mem)
  1a0000000-1a7ffffff : System RAM (virtio_mem)
  1a8000000-1afffffff : System RAM (virtio_mem)
  1b0000000-1b7ffffff : System RAM (virtio_mem)
  1b8000000-1bfffffff : System RAM (virtio_mem)
3280000000-32ffffffff : PCI Bus 0000:00
With this patch, /proc/iomem instead shows:
[...]
fffc0000-ffffffff : Reserved
100000000-13fffffff : System RAM
140000000-33fffffff : virtio0
  140000000-1bfffffff : System RAM (virtio_mem)
3280000000-32ffffffff : PCI Bus 0000:00
Of course, with more hotplugged memory, the fragmentation gets even worse.
When unplugging memory blocks again, try_remove_memory() (called via
offline_and_remove_memory()) will properly split the merged resource up
again, as sketched below.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
drivers/virtio/virtio_mem.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 834b7c13ef3dc..3aae0f87073a8 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -407,6 +407,7 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 {
 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
 	int nid = vm->nid;
+	int rc;
 
 	if (nid == NUMA_NO_NODE)
 		nid = memory_add_physaddr_to_nid(addr);
@@ -423,8 +424,17 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 	}
 
 	dev_dbg(&vm->vdev->dev, "adding memory block: %lu\n", mb_id);
-	return add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
-					 vm->resource_name);
+	rc = add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
+				       vm->resource_name);
+	if (!rc) {
+		/*
+		 * Try to reduce the number of system ram resources in our
+		 * resource container. The memory removal path will properly
+		 * split them up again.
+		 */
+		merge_system_ram_resources(vm->parent_resource);
+	}
+	return rc;
 }
 
 /*
--
2.26.2