* [PATCH] mm: Refactor vm_map_pages to use vm_insert_pages
From: Justin Green @ 2026-01-28 22:56 UTC (permalink / raw)
To: akpm
Cc: linux-mm, david, lorenzo.stoakes, Liam.Howlett, vbabka, rppt,
surenb, mhocko, linux-kernel, greenjustin, greenjustin, rientjes,
bgeffon, arjunroy
vm_map_pages() currently calls vm_insert_page() on each individual page
in the mapping, which creates significant overhead because the page
table spinlock is taken and released for every page. Instead,
batch-insert the pages with vm_insert_pages(), which amortizes the cost
of the spinlock across the whole mapping.
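
To make the locking difference concrete, here is a minimal sketch of the
two patterns (illustration only, not part of the diff below; the in/out
behaviour of the count argument reflects my reading of the
vm_insert_pages() kerneldoc):

#include <linux/mm.h>

/*
 * Illustration only: vm_insert_page() takes and drops the page table
 * lock once per page, while vm_insert_pages() inserts a whole run of
 * pages per lock acquisition.
 */
static int map_one_by_one(struct vm_area_struct *vma, unsigned long uaddr,
			  struct page **pages, unsigned long count)
{
	unsigned long i;
	int ret;

	for (i = 0; i < count; i++) {
		/* One lock/unlock cycle for every single page. */
		ret = vm_insert_page(vma, uaddr, pages[i]);
		if (ret < 0)
			return ret;
		uaddr += PAGE_SIZE;
	}
	return 0;
}

static int map_batched(struct vm_area_struct *vma, unsigned long uaddr,
		       struct page **pages, unsigned long count)
{
	/*
	 * count is in/out: on return it holds the number of pages that
	 * were not inserted, so 0 means the whole range was mapped.
	 */
	return vm_insert_pages(vma, uaddr, pages, &count);
}
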
Tested by watching hardware-accelerated video on an MTK ChromeOS device.
This particular path maps both a V4L2 buffer and a GEM-allocated buffer
into userspace and converts the contents from one pixel format to
another. Both vb2_mmap() and mtk_gem_object_mmap() exercise this pathway.
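
For context, a driver's mmap handler reaches the function touched here
through the vm_map_pages() wrapper. A minimal sketch of such a handler
(the mydrv_* names are hypothetical; vb2_mmap() and mtk_gem_object_mmap()
are the real callers exercised in testing):

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical per-file buffer: an array of kernel pages to expose. */
struct mydrv_buffer {
	struct page **pages;
	unsigned long num_pages;
};

static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydrv_buffer *buf = file->private_data;

	/*
	 * vm_map_pages() validates vma->vm_pgoff and the vma size against
	 * num_pages, then hands the page array to __vm_map_pages(), which
	 * is where this patch switches to batched insertion.
	 */
	return vm_map_pages(vma, buf->pages, buf->num_pages);
}
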
Signed-off-by: Justin Green <greenjustin@chromium.org>
---
mm/memory.c | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..7ae6ac42e7d8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2499,7 +2499,6 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
{
unsigned long count = vma_pages(vma);
unsigned long uaddr = vma->vm_start;
- int ret, i;
/* Fail if the user requested offset is beyond the end of the object */
if (offset >= num)
@@ -2509,14 +2508,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
if (count > num - offset)
return -ENXIO;
- for (i = 0; i < count; i++) {
- ret = vm_insert_page(vma, uaddr, pages[offset + i]);
- if (ret < 0)
- return ret;
- uaddr += PAGE_SIZE;
- }
-
- return 0;
+ return vm_insert_pages(vma, uaddr, pages + offset, &count);
}
/**
--
2.53.0.rc1.217.geba53bf80e-goog
* Re: [PATCH] mm: Refactor vm_map_pages to use vm_insert_pages
From: Brian Geffon @ 2026-01-28 22:59 UTC (permalink / raw)
To: Justin Green
Cc: akpm, linux-mm, david, lorenzo.stoakes, Liam.Howlett, vbabka,
rppt, surenb, mhocko, linux-kernel, greenjustin, rientjes,
arjunroy
On Wed, Jan 28, 2026 at 5:57 PM Justin Green <greenjustin@chromium.org> wrote:
>
> vm_map_pages() currently calls vm_insert_page() on each individual page
> in the mapping, which creates significant overhead because the page
> table spinlock is taken and released for every page. Instead,
> batch-insert the pages with vm_insert_pages(), which amortizes the cost
> of the spinlock across the whole mapping.
This makes sense; I wonder why this wasn't done previously?
>
> Tested by watching hardware-accelerated video on an MTK ChromeOS device.
> This particular path maps both a V4L2 buffer and a GEM-allocated buffer
> into userspace and converts the contents from one pixel format to
> another. Both vb2_mmap() and mtk_gem_object_mmap() exercise this pathway.
>
> Signed-off-by: Justin Green <greenjustin@chromium.org>
Acked-by: Brian Geffon <bgeffon@google.com>
> ---
> mm/memory.c | 10 +---------
> 1 file changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index da360a6eb8a4..7ae6ac42e7d8 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2499,7 +2499,6 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> {
> unsigned long count = vma_pages(vma);
> unsigned long uaddr = vma->vm_start;
> - int ret, i;
>
> /* Fail if the user requested offset is beyond the end of the object */
> if (offset >= num)
> @@ -2509,14 +2508,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> if (count > num - offset)
> return -ENXIO;
>
> - for (i = 0; i < count; i++) {
> - ret = vm_insert_page(vma, uaddr, pages[offset + i]);
> - if (ret < 0)
> - return ret;
> - uaddr += PAGE_SIZE;
> - }
> -
> - return 0;
> + return vm_insert_pages(vma, uaddr, pages + offset, &count);
> }
>
> /**
> --
> 2.53.0.rc1.217.geba53bf80e-goog
>
* Re: [PATCH] mm: Refactor vm_map_pages to use vm_insert_pages
From: Matthew Wilcox @ 2026-01-29 0:51 UTC (permalink / raw)
To: Brian Geffon
Cc: Justin Green, akpm, linux-mm, david, lorenzo.stoakes,
Liam.Howlett, vbabka, rppt, surenb, mhocko, linux-kernel,
greenjustin, rientjes, arjunroy
On Wed, Jan 28, 2026 at 05:59:12PM -0500, Brian Geffon wrote:
> On Wed, Jan 28, 2026 at 5:57 PM Justin Green <greenjustin@chromium.org> wrote:
> >
> > vm_map_pages() currently calls vm_insert_page() on each individual page
> > in the mapping, which creates significant overhead because the page
> > table spinlock is taken and released for every page. Instead,
> > batch-insert the pages with vm_insert_pages(), which amortizes the cost
> > of the spinlock across the whole mapping.
>
> This makes sense; I wonder why this wasn't done previously?
That's always a good question, because it might reveal why this patch is
a bad idea ...
However, in this case it simply seems to be an oversight.
__vm_map_pages() was introduced in May 2019 and then vm_insert_pages()
was added in April 2020.
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
* Re: [PATCH] mm: Refactor vm_map_pages to use vm_insert_pages
From: Arjun Roy @ 2026-01-29 4:44 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Brian Geffon, Justin Green, akpm, linux-mm, david,
lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
linux-kernel, greenjustin, rientjes
On Wed, Jan 28, 2026 at 4:51 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Jan 28, 2026 at 05:59:12PM -0500, Brian Geffon wrote:
> > On Wed, Jan 28, 2026 at 5:57 PM Justin Green <greenjustin@chromium.org> wrote:
> > >
> > > vm_map_pages() currently calls vm_insert_page() on each individual page
> > > in the mapping, which creates significant overhead because the page
> > > table spinlock is taken and released for every page. Instead,
> > > batch-insert the pages with vm_insert_pages(), which amortizes the cost
> > > of the spinlock across the whole mapping.
> >
> > This makes sense; I wonder why this wasn't done previously?
>
> That's always a good question, because it might reveal why this patch is
> a bad idea ...
>
> However, in this case it simply seems to be an oversight.
> __vm_map_pages() was introduced in May 2019 and then vm_insert_pages()
> was added in April 2020.
>
Yes, it was an oversight. I had originally cooked up vm_insert_pages() to
amortize that spinlock for TCP zerocopy receive, and had not noticed
__vm_map_pages() sitting right there.
Reviewed-by: Arjun Roy <arjunroy@google.com>
-Arjun
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>