linux-mm.kvack.org archive mirror
* [PATCH] mm/memtest: prevent arithmetic underflow in end pointer calculation
@ 2025-12-20 15:10 klourencodev
  2025-12-28 19:56 ` Mike Rapoport
  0 siblings, 1 reply; 5+ messages in thread
From: klourencodev @ 2025-12-20 15:10 UTC (permalink / raw)
  To: linux-mm; +Cc: rppt, Kevin Lourenco

From: Kevin Lourenco <klourencodev@gmail.com>

The computation of the loop end pointer can underflow when size is
smaller than the alignment offset:

    (size - (start_phys_aligned - start_phys))

If size < offset, the unsigned subtraction wraps to ~0, causing the
loop to iterate far past the end of the region and write well beyond
it, corrupting memory during early boot.
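
For example (illustrative values), start_phys = 0x1003 and size = 4
give start_phys_aligned = 0x1008, so the offset is 5 bytes and the
subtraction wraps:

    4 - 5  ->  2^64 - 1

leaving end roughly 2^61 words past start.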

While this is unlikely in practice (memblock regions are typically
KB/MB in size), the cost of the check is negligible (one comparison),
and it prevents catastrophic memory corruption in these edge cases.

Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
---
 mm/memtest.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/memtest.c b/mm/memtest.c
index c2c609c39119..d86c41f1c189 100644
--- a/mm/memtest.c
+++ b/mm/memtest.c
@@ -41,12 +41,17 @@ static void __init memtest(u64 pattern, phys_addr_t start_phys, phys_addr_t size
 {
 	u64 *p, *start, *end;
 	phys_addr_t start_bad, last_bad;
-	phys_addr_t start_phys_aligned;
+	phys_addr_t start_phys_aligned, offset;
 	const size_t incr = sizeof(pattern);
 
 	start_phys_aligned = ALIGN(start_phys, incr);
 	start = __va(start_phys_aligned);
-	end = start + (size - (start_phys_aligned - start_phys)) / incr;
+
+	offset = start_phys_aligned - start_phys;
+	if (size < offset)
+		return;
+
+	end = start + (size - offset) / incr;
 	start_bad = 0;
 	last_bad = 0;
 
-- 
2.47.3




* Re: [PATCH] mm/memtest: prevent arithmetic underflow in end pointer calculation
  2025-12-20 15:10 [PATCH] mm/memtest: prevent arithmetic underflow in end pointer calculation klourencodev
@ 2025-12-28 19:56 ` Mike Rapoport
  2025-12-29 15:47   ` Kevin Lourenco
  0 siblings, 1 reply; 5+ messages in thread
From: Mike Rapoport @ 2025-12-28 19:56 UTC (permalink / raw)
  To: klourencodev; +Cc: linux-mm

On Sat, Dec 20, 2025 at 04:10:19PM +0100, klourencodev@gmail.com wrote:
> From: Kevin Lourenco <klourencodev@gmail.com>
> 
> The computation of the loop end pointer can underflow when size is
> smaller than the alignment offset:
> 
>     (size - (start_phys_aligned - start_phys))
> 
> If size < offset, the unsigned subtraction wraps to ~0, causing the

Is it exactly ~0?  

> loop to iterate far past the end of the region and write well beyond
> it, corrupting memory during early boot.
> 
> While this is unlikely in practice (memblock regions are typically
> KB/MB in size), the cost of the check is negligible (one comparison),
> and it prevents catastrophic memory corruption in these edge cases.
>
> Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
> ---
>  mm/memtest.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memtest.c b/mm/memtest.c
> index c2c609c39119..d86c41f1c189 100644
> --- a/mm/memtest.c
> +++ b/mm/memtest.c
> @@ -41,12 +41,17 @@ static void __init memtest(u64 pattern, phys_addr_t start_phys, phys_addr_t size
>  {
>  	u64 *p, *start, *end;
>  	phys_addr_t start_bad, last_bad;
> -	phys_addr_t start_phys_aligned;
> +	phys_addr_t start_phys_aligned, offset;
>  	const size_t incr = sizeof(pattern);
>  
>  	start_phys_aligned = ALIGN(start_phys, incr);
>  	start = __va(start_phys_aligned);
> -	end = start + (size - (start_phys_aligned - start_phys)) / incr;

I believe VM_WARN_ON_ONCE(size < start_phys_aligned - start_phys) is
sufficient here to detect those theoretical edge cases.

> +	
> +	offset = start_phys_aligned - start_phys;
> +	if (size < offset)
> +		return;
> +
> +	end = start + (size - offset) / incr;
>  	start_bad = 0;
>  	last_bad = 0;
>  
> -- 
> 2.47.3
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH] mm/memtest: prevent arithmetic underflow in end pointer calculation
  2025-12-28 19:56 ` Mike Rapoport
@ 2025-12-29 15:47   ` Kevin Lourenco
  2025-12-29 16:13     ` [PATCH v2] mm/memtest: add underflow detection for size calculation klourencodev
  0 siblings, 1 reply; 5+ messages in thread
From: Kevin Lourenco @ 2025-12-29 15:47 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: linux-mm

Hi Mike,

> Is it exactly ~0?

Not exactly - it's 2^64 - (offset - size), and since offset is at
most 7 bytes (ALIGN to sizeof(u64)), the result lies in the range
[~0 - 6, ~0], which after division by 8 still yields ~2^61 iterations.
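
A quick userspace sketch to sanity-check the arithmetic (a
hypothetical demo, assuming 64-bit phys_addr_t and the kernel's
ALIGN() rounding):

    #include <stdint.h>
    #include <stdio.h>

    /* userspace stand-in for the kernel's ALIGN() */
    #define ALIGN(x, a) (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

    int main(void)
    {
            uint64_t start_phys = 0x1003; /* illustrative, unaligned */
            uint64_t size = 4;            /* smaller than the offset */
            uint64_t aligned = ALIGN(start_phys, sizeof(uint64_t));
            uint64_t offset = aligned - start_phys; /* 5 */
            uint64_t words = (size - offset) / sizeof(uint64_t);

            /* prints offset = 5, words = 2305843009213693951 (~2^61) */
            printf("offset = %llu, words = %llu\n",
                   (unsigned long long)offset,
                   (unsigned long long)words);
            return 0;
    }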

Either way, if it happens it's not great. The VM_WARN_ON_ONCE()
approach sounds good; I'll send a v2 patch.

Thanks for the feedback!

On Sun, Dec 28, 2025 at 8:56 PM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Sat, Dec 20, 2025 at 04:10:19PM +0100, klourencodev@gmail.com wrote:
> > From: Kevin Lourenco <klourencodev@gmail.com>
> >
> > The computation of the loop end pointer can underflow when size is
> > smaller than the alignment offset:
> >
> >     (size - (start_phys_aligned - start_phys))
> >
> > If size < offset, the unsigned subtraction wraps to ~0, causing the
>
> Is it exactly ~0?
>
> > loop to iterate far past the end of the region and write well beyond
> > it, corrupting memory during early boot.
> >
> > While this is unlikely in practice (memblock regions are typically
> > KB/MB in size), the cost of the check is negligible (one comparison),
> > and it prevents catastrophic memory corruption in these edge cases.
> >
> > Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
> > ---
> >  mm/memtest.c | 9 +++++++--
> >  1 file changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/memtest.c b/mm/memtest.c
> > index c2c609c39119..d86c41f1c189 100644
> > --- a/mm/memtest.c
> > +++ b/mm/memtest.c
> > @@ -41,12 +41,17 @@ static void __init memtest(u64 pattern, phys_addr_t start_phys, phys_addr_t size
> >  {
> >       u64 *p, *start, *end;
> >       phys_addr_t start_bad, last_bad;
> > -     phys_addr_t start_phys_aligned;
> > +     phys_addr_t start_phys_aligned, offset;
> >       const size_t incr = sizeof(pattern);
> >
> >       start_phys_aligned = ALIGN(start_phys, incr);
> >       start = __va(start_phys_aligned);
> > -     end = start + (size - (start_phys_aligned - start_phys)) / incr;
>
> I believe VM_WARN_ON_ONCE(size < start_phys_aligned - start_phys) is
> sufficient here to detect those theoretical edge cases.
>
> > +
> > +     offset = start_phys_aligned - start_phys;
> > +     if (size < offset)
> > +             return;
> > +
> > +     end = start + (size - offset) / incr;
> >       start_bad = 0;
> >       last_bad = 0;
> >
> > --
> > 2.47.3
> >
>
> --
> Sincerely yours,
> Mike.



* [PATCH v2] mm/memtest: add underflow detection for size calculation
  2025-12-29 15:47   ` Kevin Lourenco
@ 2025-12-29 16:13     ` klourencodev
  2025-12-30 13:36       ` Mike Rapoport
  0 siblings, 1 reply; 5+ messages in thread
From: klourencodev @ 2025-12-29 16:13 UTC (permalink / raw)
  To: linux-mm; +Cc: rppt, Kevin Lourenco

From: Kevin Lourenco <klourencodev@gmail.com>

The computation

    end = start + (size - (start_phys_aligned - start_phys)) / incr

can theoretically underflow if size is smaller than the alignment
offset, leading to a massive number of loop iterations.

Add VM_WARN_ON_ONCE() to detect cases where the region size is smaller
than the alignment offset. While this should never happen in practice
due to memblock guarantees, the warning helps catch potential bugs in
early memory initialization code, and the check compiles away entirely
when CONFIG_DEBUG_VM is not set.

Suggested-by: Mike Rapoport <rppt@kernel.org>
Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
---
 mm/memtest.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memtest.c b/mm/memtest.c
index c2c609c39119..bac195e6077a 100644
--- a/mm/memtest.c
+++ b/mm/memtest.c
@@ -46,6 +46,7 @@ static void __init memtest(u64 pattern, phys_addr_t start_phys, phys_addr_t size
 
 	start_phys_aligned = ALIGN(start_phys, incr);
 	start = __va(start_phys_aligned);
+	VM_WARN_ON_ONCE(size < start_phys_aligned - start_phys);
 	end = start + (size - (start_phys_aligned - start_phys)) / incr;
 	start_bad = 0;
 	last_bad = 0;
-- 
2.47.3




* Re: [PATCH v2] mm/memtest: add underflow detection for size calculation
  2025-12-29 16:13     ` [PATCH v2] mm/memtest: add underflow detection for size calculation klourencodev
@ 2025-12-30 13:36       ` Mike Rapoport
  0 siblings, 0 replies; 5+ messages in thread
From: Mike Rapoport @ 2025-12-30 13:36 UTC (permalink / raw)
  To: linux-mm, klourencodev; +Cc: Mike Rapoport

On Mon, 29 Dec 2025 17:13:21 +0100, klourencodev@gmail.com wrote:
> The computation
> 
>     end = start + (size - (start_phys_aligned - start_phys)) / incr
> 
> can theoretically underflow if size is smaller than the alignment
> offset, leading to a massive number of loop iterations.
> 
> Add VM_WARN_ON_ONCE() to detect cases where the region size is smaller
> than the alignment offset. While this should never happen in practice
> due to memblock guarantees, the warning helps catch potential bugs in
> early memory initialization code, and the check compiles away entirely
> when CONFIG_DEBUG_VM is not set.
> 
> [...]

Applied to for-next branch of memblock.git tree, thanks!

I massaged the changelog a bit and moved VM_WARN_ON_ONCE() down while applying.

[1/1] mm/memtest: add underflow detection for size calculation
      commit: acd8aed2f2b6ce4bcc4f20e48ae88882b1b29fc3

tree: https://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
branch: for-next

--
Sincerely yours,
Mike.


