From: Dev Jain <dev.jain@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, urezki@gmail.com,
akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, anshuman.khandual@arm.com,
shijie@os.amperecomputing.com, yang@os.amperecomputing.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
npiggin@gmail.com, willy@infradead.org, david@kernel.org,
ziy@nvidia.com, Dev Jain <dev.jain@arm.com>
Subject: [RFC PATCH 1/2] mm/vmalloc: Do not align size to huge size
Date: Wed, 12 Nov 2025 16:38:06 +0530
Message-ID: <20251112110807.69958-2-dev.jain@arm.com>
In-Reply-To: <20251112110807.69958-1-dev.jain@arm.com>
vmalloc() consists of the following steps:
(1) find a free region in the vmalloc address space -> (2) get physical
pages from the buddy system -> (3) map those pages into the page table.
It turns out that the cost of (1) and (3) is fairly insignificant. Hence,
the cost of vmalloc becomes highly sensitive to the physical memory
allocation time.
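For orientation, the rough call flow in current mm/vmalloc.c is sketched
below (a simplified sketch only; error handling and fallback paths are
omitted, and the real call chain passes through a few more helpers):

  vmalloc(size)
    __vmalloc_node_range()
      __get_vm_area_node()      /* (1) reserve a region of vmalloc space  */
      __vmalloc_area_node()
        vm_area_alloc_pages()   /* (2) allocate pages from the buddy      */
        vmap_pages_range()      /* (3) map the pages into the page table  */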
Currently, if we decide to use huge mappings, apart from aligning the start
of the target vm_struct region to the huge alignment, we also align the
size. This does not seem to produce any benefit (apart from simplifying the
code), and there is a clear disadvantage: as mentioned above, the main cost
of vmalloc comes from its interaction with the buddy system, so requesting
more memory than the caller asked for is suboptimal and unnecessary.
This change is also motivated by the next patch ("arm64/mm: Enable
vmalloc-huge by default"). Suppose that some user of vmalloc maps 17 pages,
uses that mapping for an extremely short time, and vfree's it. With that
patch but without this one, arm64 will ultimately map 16 * 2 = 32 pages in
a contiguous way. Since the mapping is used for a very short time, it is
likely that the extra cost of allocating and mapping 15 more pages defeats
any benefit from reduced TLB pressure, and regresses that code path.
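The arithmetic above can be checked with the minimal userspace sketch
below (illustration only, not kernel code; it assumes a 4K base page and
the 64K, i.e. 16-page, contiguous-PTE block size of arm64 with 4K pages,
and redefines ALIGN() locally):

  #include <stdio.h>

  #define PAGE_SHIFT	12
  #define PAGE_SIZE	(1UL << PAGE_SHIFT)
  #define ALIGN(x, a)	(((x) + ((a) - 1)) & ~((unsigned long)((a) - 1)))

  int main(void)
  {
  	unsigned long req  = 17 * PAGE_SIZE;	/* caller asks for 17 pages */
  	unsigned long cont = 16 * PAGE_SIZE;	/* 64K cont-PTE block       */

  	printf("old (align size to huge size): %lu pages\n",
  	       ALIGN(req, cont) / PAGE_SIZE);
  	printf("new (align size to PAGE_SIZE): %lu pages\n",
  	       ALIGN(req, PAGE_SIZE) / PAGE_SIZE);
  	return 0;
  }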
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm/vmalloc.c | 38 ++++++++++++++++++++++++++++++--------
1 file changed, 30 insertions(+), 8 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 798b2ed21e46..ddd9294a4634 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -647,7 +647,7 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
pgprot_t prot, struct page **pages, unsigned int page_shift)
{
- unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
+ unsigned int i, step, nr = (end - addr) >> PAGE_SHIFT;
WARN_ON(page_shift < PAGE_SHIFT);
@@ -655,7 +655,8 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
page_shift == PAGE_SHIFT)
return vmap_small_pages_range_noflush(addr, end, prot, pages);
- for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
+ step = 1U << (page_shift - PAGE_SHIFT);
+ for (i = 0; i < ALIGN_DOWN(nr, step); i += step) {
int err;
err = vmap_range_noflush(addr, addr + (1UL << page_shift),
@@ -666,8 +667,9 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
addr += 1UL << page_shift;
}
-
- return 0;
+ if (IS_ALIGNED(nr, step))
+ return 0;
+ return vmap_small_pages_range_noflush(addr, end, prot, pages + i);
}
int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
@@ -3171,7 +3173,7 @@ struct vm_struct *__get_vm_area_node(unsigned long size,
unsigned long requested_size = size;
BUG_ON(in_interrupt());
- size = ALIGN(size, 1ul << shift);
+ size = PAGE_ALIGN(size);
if (unlikely(!size))
return NULL;
@@ -3327,7 +3329,7 @@ static void vm_reset_perms(struct vm_struct *area)
* Find the start and end range of the direct mappings to make sure that
* the vm_unmap_aliases() flush includes the direct map.
*/
- for (i = 0; i < area->nr_pages; i += 1U << page_order) {
+ for (i = 0; i < ALIGN_DOWN(area->nr_pages, 1U << page_order); i += (1U << page_order)) {
unsigned long addr = (unsigned long)page_address(area->pages[i]);
if (addr) {
@@ -3339,6 +3341,18 @@ static void vm_reset_perms(struct vm_struct *area)
flush_dmap = 1;
}
}
+ for (; i < area->nr_pages; ++i) {
+ unsigned long addr = (unsigned long)page_address(area->pages[i]);
+
+ if (addr) {
+ unsigned long page_size;
+
+ page_size = PAGE_SIZE;
+ start = min(addr, start);
+ end = max(addr + page_size, end);
+ flush_dmap = 1;
+ }
+ }
/*
* Set direct map to something invalid so that it won't be cached if
@@ -3602,6 +3616,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
* more permissive.
*/
if (!order) {
+page_map:
while (nr_allocated < nr_pages) {
unsigned int nr, nr_pages_request;
@@ -3633,13 +3648,18 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
* If zero or pages were obtained partly,
* fallback to a single page allocator.
*/
- if (nr != nr_pages_request)
+ if (nr != nr_pages_request) {
+ order = 0;
break;
+ }
}
}
/* High-order pages or fallback path if "bulk" fails. */
while (nr_allocated < nr_pages) {
+ if (nr_pages - nr_allocated < (1UL << order))
+ goto page_map;
+
if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
break;
@@ -5024,7 +5044,9 @@ static void show_numa_info(struct seq_file *m, struct vm_struct *v,
memset(counters, 0, nr_node_ids * sizeof(unsigned int));
- for (nr = 0; nr < v->nr_pages; nr += step)
+ for (nr = 0; nr < ALIGN_DOWN(v->nr_pages, step); nr += step)
+ counters[page_to_nid(v->pages[nr])] += step;
+ for (; nr < v->nr_pages; ++nr)
counters[page_to_nid(v->pages[nr])] += step;
for_each_node_state(nr, N_HIGH_MEMORY)
if (counters[nr])
--
2.30.2
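For readers skimming the diff: the change to __vmap_pages_range_noflush()
can be modelled by the small userspace sketch below. This is only an
illustration of the new loop structure (map the huge-aligned prefix with
huge steps, then fall back to base pages for the tail), not the kernel
code, which operates on the page table via vmap_range_noflush() and
vmap_small_pages_range_noflush():

  #include <stdio.h>

  #define PAGE_SHIFT		12
  #define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
  #define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

  /* Model of mapping 'nr' base pages using mappings of size 1 << page_shift. */
  static void map_pages(unsigned int nr, unsigned int page_shift)
  {
  	unsigned int step = 1U << (page_shift - PAGE_SHIFT);
  	unsigned int i;

  	/* Huge-aligned prefix: one huge mapping per 'step' base pages. */
  	for (i = 0; i < ALIGN_DOWN(nr, step); i += step)
  		printf("huge mapping for pages [%u, %u)\n", i, i + step);

  	/* Tail that does not fill a whole huge step: base pages. */
  	if (!IS_ALIGNED(nr, step))
  		printf("base-page mappings for pages [%u, %u)\n", i, nr);
  }

  int main(void)
  {
  	map_pages(17, 16);	/* 17 pages, 64K (page_shift == 16) mappings */
  	return 0;
  }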