linux-mm.kvack.org archive mirror
* [PATCH] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
@ 2026-04-14  9:44 DaeMyung Kang
  2026-04-14 10:10 ` Donet Tom
  2026-04-14 10:43 ` [PATCH v2] " DaeMyung Kang
  0 siblings, 2 replies; 6+ messages in thread
From: DaeMyung Kang @ 2026-04-14  9:44 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Steven Rostedt, linux-mm, linux-kernel, DaeMyung Kang

free_reserved_area() treats its 'end' argument as exclusive: it aligns
end down via 'end & PAGE_MASK' and iterates with 'pos < end'.

reserve_mem_release_by_name() instead passes 'start + map->size - 1',
which causes the last page of a page-aligned reservation to never be
freed. For a reservation spanning N pages, only N - 1 pages are
released back to the allocator.

Fix it by passing the exclusive end address, 'start + map->size'.

Signed-off-by: DaeMyung Kang <charsyam@gmail.com>
---
 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d4a02f1750e9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
 		return 0;
 
 	start = phys_to_virt(map->start);
-	end = start + map->size - 1;
+	end = start + map->size;
 	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
 	free_reserved_area(start, end, 0, buf);
 	map->size = 0;
-- 
2.43.0




* Re: [PATCH] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
  2026-04-14  9:44 [PATCH] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name() DaeMyung Kang
@ 2026-04-14 10:10 ` Donet Tom
  2026-04-14 10:43 ` [PATCH v2] " DaeMyung Kang
  1 sibling, 0 replies; 6+ messages in thread
From: Donet Tom @ 2026-04-14 10:10 UTC (permalink / raw)
  To: DaeMyung Kang, Mike Rapoport, Andrew Morton
  Cc: Steven Rostedt, linux-mm, linux-kernel

Hi

On 4/14/26 3:14 PM, DaeMyung Kang wrote:
> free_reserved_area() treats its 'end' argument as exclusive: it aligns
> end down via 'end & PAGE_MASK' and iterates with 'pos < end'.
>
> reserve_mem_release_by_name() instead passes 'start + map->size - 1',
> which causes the last page of a page-aligned reservation to never be
> freed. For a reservation spanning N pages, only N - 1 pages are
> released back to the allocator.
>
> Fix it by passing the exclusive end address, 'start + map->size'.
>
> Signed-off-by: DaeMyung Kang <charsyam@gmail.com>


Do we need a fixes tag?

-Donet

> ---
>   mm/memblock.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index b3ddfdec7a80..d4a02f1750e9 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
>   		return 0;
>   
>   	start = phys_to_virt(map->start);
> -	end = start + map->size - 1;
> +	end = start + map->size;
>   	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
>   	free_reserved_area(start, end, 0, buf);
>   	map->size = 0;



* [PATCH v2] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
  2026-04-14  9:44 [PATCH] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name() DaeMyung Kang
  2026-04-14 10:10 ` Donet Tom
@ 2026-04-14 10:43 ` DaeMyung Kang
  2026-04-14 11:13   ` Donet Tom
  1 sibling, 1 reply; 6+ messages in thread
From: DaeMyung Kang @ 2026-04-14 10:43 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Masami Hiramatsu, Steven Rostedt, Donet Tom, stable, linux-mm,
	linux-kernel, DaeMyung Kang

free_reserved_area() treats its 'end' argument as exclusive: it aligns
end down via 'end & PAGE_MASK' and iterates with 'pos < end'.

reserve_mem_release_by_name() instead passes 'start + map->size - 1',
which causes the last page of a page-aligned reservation to never be
freed. For a reservation spanning N pages, only N - 1 pages are
released back to the allocator.

Fix it by passing the exclusive end address, 'start + map->size'.

Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
Cc: stable@vger.kernel.org
Signed-off-by: DaeMyung Kang <charsyam@gmail.com>
---
Changes in v2:
 - Add Fixes: tag and Cc: stable (per Donet Tom's review).
 - v1: https://lore.kernel.org/lkml/

 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d4a02f1750e9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
 		return 0;
 
 	start = phys_to_virt(map->start);
-	end = start + map->size - 1;
+	end = start + map->size;
 	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
 	free_reserved_area(start, end, 0, buf);
 	map->size = 0;
-- 
2.43.0




* Re: [PATCH v2] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
  2026-04-14 10:43 ` [PATCH v2] " DaeMyung Kang
@ 2026-04-14 11:13   ` Donet Tom
  2026-04-14 11:20     ` CharSyam
  0 siblings, 1 reply; 6+ messages in thread
From: Donet Tom @ 2026-04-14 11:13 UTC (permalink / raw)
  To: DaeMyung Kang, Mike Rapoport, Andrew Morton
  Cc: Masami Hiramatsu, Steven Rostedt, stable, linux-mm, linux-kernel

Hi

On 4/14/26 4:13 PM, DaeMyung Kang wrote:
> free_reserved_area() treats its 'end' argument as exclusive: it aligns
> end down via 'end & PAGE_MASK' and iterates with 'pos < end'.
>
> reserve_mem_release_by_name() instead passes 'start + map->size - 1',
> which causes the last page of a page-aligned reservation to never be
> freed. For a reservation spanning N pages, only N - 1 pages are
> released back to the allocator.
>
> Fix it by passing the exclusive end address, 'start + map->size'.
>
> Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
> Cc: stable@vger.kernel.org
> Signed-off-by: DaeMyung Kang <charsyam@gmail.com>


I think it would be better to send v2 as a separate patch rather than
as a reply to the previous version.

This patch looks good to me.

Reviewed-by: Donet Tom <donettom@linux.ibm.com>

-Donet


> ---
> Changes in v2:
>   - Add Fixes: tag and Cc: stable (per Donet Tom's review).
>   - v1: https://lore.kernel.org/lkml/
>
>   mm/memblock.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index b3ddfdec7a80..d4a02f1750e9 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
>   		return 0;
>   
>   	start = phys_to_virt(map->start);
> -	end = start + map->size - 1;
> +	end = start + map->size;
>   	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
>   	free_reserved_area(start, end, 0, buf);
>   	map->size = 0;



* Re: [PATCH v2] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
  2026-04-14 11:13   ` Donet Tom
@ 2026-04-14 11:20     ` CharSyam
  0 siblings, 0 replies; 6+ messages in thread
From: CharSyam @ 2026-04-14 11:20 UTC (permalink / raw)
  To: Donet Tom
  Cc: Mike Rapoport, Andrew Morton, Masami Hiramatsu, Steven Rostedt,
	stable, linux-mm, linux-kernel


  Please disregard this patch -- I just noticed the same fix is already
  in linux-next as commit c12c3e150780 ("memblock: reserve_mem: fix end
  caclulation in reserve_mem_release_by_name()") by Mike Rapoport.

  Sorry for the noise.

  Thanks,
  DaeMyung

On Tue, Apr 14, 2026 at 8:13 PM Donet Tom <donettom@linux.ibm.com> wrote:

> Hi
>
> On 4/14/26 4:13 PM, DaeMyung Kang wrote:
> > free_reserved_area() treats its 'end' argument as exclusive: it aligns
> > end down via 'end & PAGE_MASK' and iterates with 'pos < end'.
> >
> > reserve_mem_release_by_name() instead passes 'start + map->size - 1',
> > which causes the last page of a page-aligned reservation to never be
> > freed. For a reservation spanning N pages, only N - 1 pages are
> > released back to the allocator.
> >
> > Fix it by passing the exclusive end address, 'start + map->size'.
> >
> > Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: DaeMyung Kang <charsyam@gmail.com>
>
>
> I think it would be better to send v2 as a separate patch rather than
> as a reply to the previous version.
>
> This patch looks good to me.
>
> Reviewed-by: Donet Tom <donettom@linux.ibm.com>
>
> -Donet
>
>
> > ---
> > Changes in v2:
> >   - Add Fixes: tag and Cc: stable (per Donet Tom's review).
> >   - v1: https://lore.kernel.org/lkml/
> >
> >   mm/memblock.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index b3ddfdec7a80..d4a02f1750e9 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
> >               return 0;
> >
> >       start = phys_to_virt(map->start);
> > -     end = start + map->size - 1;
> > +     end = start + map->size;
> >       snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
> >       free_reserved_area(start, end, 0, buf);
> >       map->size = 0;
>



* [PATCH v2] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name()
       [not found] <V1-MESSAGE-ID>
@ 2026-04-14 10:42 ` DaeMyung Kang
  0 siblings, 0 replies; 6+ messages in thread
From: DaeMyung Kang @ 2026-04-14 10:42 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Masami Hiramatsu, Steven Rostedt, Donet Tom, stable, linux-mm,
	linux-kernel, DaeMyung Kang

free_reserved_area() treats its 'end' argument as exclusive: it aligns
end down via 'end & PAGE_MASK' and iterates with 'pos < end'.

reserve_mem_release_by_name() instead passes 'start + map->size - 1',
which causes the last page of a page-aligned reservation to never be
freed. For a reservation spanning N pages, only N - 1 pages are
released back to the allocator.

Fix it by passing the exclusive end address, 'start + map->size'.

Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
Cc: stable@vger.kernel.org
Signed-off-by: DaeMyung Kang <charsyam@gmail.com>
---
Changes in v2:
 - Add Fixes: tag and Cc: stable (per Donet Tom's review).
 - v1: https://lore.kernel.org/lkml/

 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d4a02f1750e9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
 		return 0;
 
 	start = phys_to_virt(map->start);
-	end = start + map->size - 1;
+	end = start + map->size;
 	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
 	free_reserved_area(start, end, 0, buf);
 	map->size = 0;
-- 
2.43.0




end of thread, other threads:[~2026-04-14 11:20 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-14  9:44 [PATCH] mm/memblock: fix off-by-one page leak in reserve_mem_release_by_name() DaeMyung Kang
2026-04-14 10:10 ` Donet Tom
2026-04-14 10:43 ` [PATCH v2] " DaeMyung Kang
2026-04-14 11:13   ` Donet Tom
2026-04-14 11:20     ` CharSyam
     [not found] <V1-MESSAGE-ID>
2026-04-14 10:42 ` DaeMyung Kang
