* [PATCH 1/2] mm/vmalloc: Do not adjust the search size for alignment overhead
@ 2021-10-04 14:28 Uladzislau Rezki (Sony)
2021-10-04 14:28 ` [PATCH 2/2] mm/vmalloc: Check various alignments when debugging Uladzislau Rezki (Sony)
0 siblings, 1 reply; 2+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-10-04 14:28 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, LKML, Mel Gorman, Christoph Hellwig, Matthew Wilcox,
Nicholas Piggin, Uladzislau Rezki, Hillf Danton, Michal Hocko,
Oleksiy Avramchenko, Steven Rostedt, Ping Fang,
David Hildenbrand
We used to include the alignment overhead in the search length; that
way we guarantee that a found area will definitely fit after applying
the alignment the user specifies. On the other hand, we do not
guarantee that the area has the lowest address if the alignment
is >= PAGE_SIZE.
It means that when a user specifies a big alignment together with a
range that corresponds exactly to the requested size, the allocation
fails. This is what happens to KASAN: it wants a free block that
exactly matches the specified range when onlining memory banks:
[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory82/state
[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory83/state
[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory85/state
[root@vm-0 fedora]# echo online > /sys/devices/system/memory/memory84/state
[ 223.858115] vmap allocation for size 16777216 failed: use vmalloc=<size> to increase size
[ 223.859415] bash: vmalloc: allocation failure: 16777216 bytes, mode:0x6000c0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=0
[ 223.860992] CPU: 4 PID: 1644 Comm: bash Kdump: loaded Not tainted 4.18.0-339.el8.x86_64+debug #1
[ 223.862149] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 223.863580] Call Trace:
[ 223.863946] dump_stack+0x8e/0xd0
[ 223.864420] warn_alloc.cold.90+0x8a/0x1b2
[ 223.864990] ? zone_watermark_ok_safe+0x300/0x300
[ 223.865626] ? slab_free_freelist_hook+0x85/0x1a0
[ 223.866264] ? __get_vm_area_node+0x240/0x2c0
[ 223.866858] ? kfree+0xdd/0x570
[ 223.867309] ? kmem_cache_alloc_node_trace+0x157/0x230
[ 223.868028] ? notifier_call_chain+0x90/0x160
[ 223.868625] __vmalloc_node_range+0x465/0x840
[ 223.869230] ? mark_held_locks+0xb7/0x120
Fix it by making sure that find_vmap_lowest_match() returns the
lowest start address for any given alignment value, i.e. for
alignments bigger than PAGE_SIZE the algorithm rolls back toward
parent nodes, checking right sub-trees if the leftmost free block
did not fit due to the alignment overhead.
Fixes: 68ad4a330433 ("mm/vmalloc.c: keep track of free blocks for vmap allocation")
Reported-by: Ping Fang <pifang@redhat.com>
Tested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
mm/vmalloc.c | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 48e717626e94..9cce45dbdee0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1195,18 +1195,14 @@ find_vmap_lowest_match(unsigned long size,
{
struct vmap_area *va;
struct rb_node *node;
- unsigned long length;
/* Start from the root. */
node = free_vmap_area_root.rb_node;
- /* Adjust the search size for alignment overhead. */
- length = size + align - 1;
-
while (node) {
va = rb_entry(node, struct vmap_area, rb_node);
- if (get_subtree_max_size(node->rb_left) >= length &&
+ if (get_subtree_max_size(node->rb_left) >= size &&
vstart < va->va_start) {
node = node->rb_left;
} else {
@@ -1216,9 +1212,9 @@ find_vmap_lowest_match(unsigned long size,
/*
* Does not make sense to go deeper towards the right
* sub-tree if it does not have a free block that is
- * equal or bigger to the requested search length.
+	 * equal or bigger to the requested search size.
*/
- if (get_subtree_max_size(node->rb_right) >= length) {
+ if (get_subtree_max_size(node->rb_right) >= size) {
node = node->rb_right;
continue;
}
@@ -1226,15 +1222,23 @@ find_vmap_lowest_match(unsigned long size,
/*
* OK. We roll back and find the first right sub-tree,
* that will satisfy the search criteria. It can happen
- * only once due to "vstart" restriction.
+	 * due to "vstart" restriction or an alignment overhead
+	 * that is bigger than PAGE_SIZE.
*/
while ((node = rb_parent(node))) {
va = rb_entry(node, struct vmap_area, rb_node);
if (is_within_this_va(va, size, align, vstart))
return va;
- if (get_subtree_max_size(node->rb_right) >= length &&
+ if (get_subtree_max_size(node->rb_right) >= size &&
vstart <= va->va_start) {
+ /*
+ * Shift the vstart forward. Please note, we update it with
+ * parent's start address adding "1" because we do not want
+ * to enter same sub-tree after it has already been checked
+ * and no suitable free block found there.
+ */
+ vstart = va->va_start + 1;
node = node->rb_right;
break;
}
--
2.20.1
* [PATCH 2/2] mm/vmalloc: Check various alignments when debugging
2021-10-04 14:28 [PATCH 1/2] mm/vmalloc: Do not adjust the search size for alignment overhead Uladzislau Rezki (Sony)
@ 2021-10-04 14:28 ` Uladzislau Rezki (Sony)
0 siblings, 0 replies; 2+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-10-04 14:28 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, LKML, Mel Gorman, Christoph Hellwig, Matthew Wilcox,
Nicholas Piggin, Uladzislau Rezki, Hillf Danton, Michal Hocko,
Oleksiy Avramchenko, Steven Rostedt
Before this change we did not guarantee a free block with the lowest
start address for allocations with an alignment >= PAGE_SIZE, because
the alignment overhead was included in the search length:
length = size + align - 1;
padding the length this way makes sure a bigger block fits after the
alignment adjustment. Now there is no such limitation: any alignment
the user applies results in the lowest suitable address of the
returned free area, so pass the actual alignment to
find_vmap_lowest_match_check() instead of hard-coding 1.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
mm/vmalloc.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 9cce45dbdee0..343cb5d40706 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1269,7 +1269,7 @@ find_vmap_lowest_linear_match(unsigned long size,
}
static void
-find_vmap_lowest_match_check(unsigned long size)
+find_vmap_lowest_match_check(unsigned long size, unsigned long align)
{
struct vmap_area *va_1, *va_2;
unsigned long vstart;
@@ -1278,8 +1278,8 @@ find_vmap_lowest_match_check(unsigned long size)
get_random_bytes(&rnd, sizeof(rnd));
vstart = VMALLOC_START + rnd;
- va_1 = find_vmap_lowest_match(size, 1, vstart);
- va_2 = find_vmap_lowest_linear_match(size, 1, vstart);
+ va_1 = find_vmap_lowest_match(size, align, vstart);
+ va_2 = find_vmap_lowest_linear_match(size, align, vstart);
if (va_1 != va_2)
pr_emerg("not lowest: t: 0x%p, l: 0x%p, v: 0x%lx\n",
@@ -1458,7 +1458,7 @@ __alloc_vmap_area(unsigned long size, unsigned long align,
return vend;
#if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
- find_vmap_lowest_match_check(size);
+ find_vmap_lowest_match_check(size, align);
#endif
return nva_start_addr;
--
2.20.1