* [PATCH v2 0/2] mm/vmalloc: free unused pages on vrealloc() shrink
@ 2026-03-04 14:53 Shivam Kalra via B4 Relay
From: Shivam Kalra via B4 Relay @ 2026-03-04 14:53 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki
Cc: linux-mm, linux-kernel, Alice Ryhl, Danilo Krummrich, Shivam Kalra
This series implements the TODO in vrealloc() to unmap and free unused
pages when shrinking across a page boundary.
Problem:
When vrealloc() shrinks an allocation, it updates bookkeeping
(requested_size, KASAN shadow) but does not free the underlying physical
pages. This wastes memory for the lifetime of the allocation.
Solution:
- Patch 1: Extracts a vmalloc_free_pages(vm, start, end) helper from
vfree() that frees a range of pages with memcg-aware NR_VMALLOC
stat accounting. Pure refactor, no functional change.
- Patch 2: Uses the helper to free tail pages when vrealloc() shrinks
across a page boundary. Skips huge page allocations (page_order > 0)
since compound pages cannot be partially freed. Also fixes the
grow-in-place path to check vm->nr_pages instead of
get_vm_area_size(), since the latter reflects the virtual
reservation and does not change on shrink.
The virtual address reservation is kept intact to preserve the range
for potential future grow-in-place support.
A concrete user is the Rust binder driver's KVVec::shrink_to [1], which
performs explicit vrealloc() shrinks for memory reclamation.
Tested:
- KASAN KUnit (vmalloc_oob passes)
- lib/test_vmalloc stress tests (3/3, 1M iterations each)
- checkpatch, sparse, W=1, allmodconfig, coccicheck clean
[1] https://lore.kernel.org/all/20260216-binder-shrink-vec-v3-v6-0-ece8e8593e53@zohomail.in/
Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
---
Changes in v2:
- Updated the base-commit to mm-new
- Fixed conflicts after rebase
- Ran `clang-format` on the changes made
- Used a single `kasan_vrealloc` call (Alice Ryhl)
- Link to v1: https://lore.kernel.org/r/20260302-vmalloc-shrink-v1-0-46deff465b7e@zohomail.in
---
Shivam Kalra (2):
mm/vmalloc: extract vmalloc_free_pages() helper from vfree()
mm/vmalloc: free unused pages on vrealloc() shrink
mm/vmalloc.c | 61 +++++++++++++++++++++++++++++++++++++++++-------------------
1 file changed, 42 insertions(+), 19 deletions(-)
---
base-commit: 2bff1816949a0849a384ed3cc66c8385d9590861
change-id: 20260302-vmalloc-shrink-04b2fa688a14
Best regards,
--
Shivam Kalra <shivamkalra98@zohomail.in>
* [PATCH v2 1/2] mm/vmalloc: extract vmalloc_free_pages() helper from vfree()
From: Shivam Kalra via B4 Relay @ 2026-03-04 14:53 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki
Cc: linux-mm, linux-kernel, Alice Ryhl, Danilo Krummrich, Shivam Kalra
From: Shivam Kalra <shivamkalra98@zohomail.in>
Extract the page-freeing loop and NR_VMALLOC stat accounting from
vfree() into a reusable vmalloc_free_pages() helper. The helper operates
on a range [start, end) of pages from a vm_struct, making it suitable
for both full free (vfree) and partial free (upcoming vrealloc shrink).
No functional change.
Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
---
mm/vmalloc.c | 42 ++++++++++++++++++++++++++++--------------
1 file changed, 28 insertions(+), 14 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c607307c657a..e2aef0a79f2e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3416,6 +3416,32 @@ void vfree_atomic(const void *addr)
schedule_work(&p->wq);
}
+/*
+ * vmalloc_free_pages - free a range of pages from a vmalloc allocation
+ * @vm: the vm_struct containing the pages
+ * @start: first page index to free (inclusive)
+ * @end: first page index past the range to free (exclusive)
+ *
+ * Free pages in [start, end), updating NR_VMALLOC stat accounting.
+ * The caller is responsible for unmapping (vunmap_range) and KASAN
+ * poisoning before calling this.
+ */
+static void vmalloc_free_pages(struct vm_struct *vm, unsigned int start,
+ unsigned int end)
+{
+ unsigned int i;
+
+ for (i = start; i < end; i++) {
+ struct page *page = vm->pages[i];
+
+ BUG_ON(!page);
+ if (!(vm->flags & VM_MAP_PUT_PAGES))
+ mod_lruvec_page_state(page, NR_VMALLOC, -1);
+ __free_page(page);
+ cond_resched();
+ }
+}
+
/**
* vfree - Release memory allocated by vmalloc()
* @addr: Memory base address
@@ -3436,7 +3462,6 @@ void vfree_atomic(const void *addr)
void vfree(const void *addr)
{
struct vm_struct *vm;
- int i;
if (unlikely(in_interrupt())) {
vfree_atomic(addr);
@@ -3459,19 +3484,8 @@ void vfree(const void *addr)
if (unlikely(vm->flags & VM_FLUSH_RESET_PERMS))
vm_reset_perms(vm);
- for (i = 0; i < vm->nr_pages; i++) {
- struct page *page = vm->pages[i];
-
- BUG_ON(!page);
- /*
- * High-order allocs for huge vmallocs are split, so
- * can be freed as an array of order-0 allocations
- */
- if (!(vm->flags & VM_MAP_PUT_PAGES))
- mod_lruvec_page_state(page, NR_VMALLOC, -1);
- __free_page(page);
- cond_resched();
- }
+ if (vm->nr_pages)
+ vmalloc_free_pages(vm, 0, vm->nr_pages);
kvfree(vm->pages);
kfree(vm);
}
--
2.43.0
* [PATCH v2 2/2] mm/vmalloc: free unused pages on vrealloc() shrink
From: Shivam Kalra via B4 Relay @ 2026-03-04 14:53 UTC (permalink / raw)
To: Andrew Morton, Uladzislau Rezki
Cc: linux-mm, linux-kernel, Alice Ryhl, Danilo Krummrich, Shivam Kalra
From: Shivam Kalra <shivamkalra98@zohomail.in>
When vrealloc() shrinks an allocation and the new size crosses a page
boundary, unmap and free the tail pages that are no longer needed. This
reclaims physical memory that was previously wasted for the lifetime
of the allocation.
The heuristic is simple: always free when at least one full page becomes
unused. Huge page allocations (page_order > 0) are skipped, as partial
freeing would require splitting.
The virtual address reservation (vm->size / vmap_area) is intentionally
kept unchanged, preserving the address for potential future grow-in-place
support.
Fix the grow-in-place check to compare against vm->nr_pages rather than
get_vm_area_size(), since the latter reflects the virtual reservation
which does not shrink. Without this fix, a grow after shrink would
access freed pages.
Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
---
mm/vmalloc.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e2aef0a79f2e..1a59afb94ba4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4340,14 +4340,23 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
goto need_realloc;
}
- /*
- * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
- * would be a good heuristic for when to shrink the vm_area?
- */
if (size <= old_size) {
+ unsigned int new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
/* Zero out "freed" memory, potentially for future realloc. */
if (want_init_on_free() || want_init_on_alloc(flags))
memset((void *)p + size, 0, old_size - size);
+
+ /* Free tail pages when shrink crosses a page boundary. */
+ if (new_nr_pages < vm->nr_pages && !vm_area_page_order(vm)) {
+ unsigned long addr = (unsigned long)p;
+
+ vunmap_range(addr + (new_nr_pages << PAGE_SHIFT),
+ addr + (vm->nr_pages << PAGE_SHIFT));
+
+ vmalloc_free_pages(vm, new_nr_pages, vm->nr_pages);
+ vm->nr_pages = new_nr_pages;
+ }
vm->requested_size = size;
kasan_vrealloc(p, old_size, size);
return (void *)p;
@@ -4356,7 +4365,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
/*
* We already have the bytes available in the allocation; use them.
*/
- if (size <= alloced_size) {
+ if (size <= (size_t)vm->nr_pages << PAGE_SHIFT) {
/*
* No need to zero memory here, as unused memory will have
* already been zeroed at initial allocation time or during
--
2.43.0