linux-mm.kvack.org archive mirror
* [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
@ 2024-10-11 14:56 Ritesh Harjani (IBM)
  2024-10-11 15:04 ` Zi Yan
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 14:56 UTC (permalink / raw)
  To: linux-mm
  Cc: linuxppc-dev, Sourabh Jain, Hari Bathini, Zi Yan,
	David Hildenbrand, Kirill A . Shutemov, Mahesh J Salgaonkar,
	Michael Ellerman, Madhavan Srinivasan, Aneesh Kumar K . V,
	Donet Tom, LKML, Sachin P Bappalige, Ritesh Harjani (IBM)

cma_init_reserved_mem() checks base and size alignment against
CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
early boot, when pageblock_order is still 0. In that case, a base and
size that lack pageblock_order alignment can pass the check and cause
functional failures later, during CMA area activation.

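For context, here is a sketch of why the existing alignment check is too
weak before pageblock_order is set up. The macro definitions shown are
assumptions about the current headers (include/linux/cma.h and
include/linux/pageblock-flags.h) and may differ across kernel versions:

/*
 * Illustrative only; assumed to mirror the in-tree definitions
 * (exact spellings may differ by kernel version):
 */
#define pageblock_nr_pages	(1UL << pageblock_order)
#define CMA_MIN_ALIGNMENT_PAGES	pageblock_nr_pages
#define CMA_MIN_ALIGNMENT_BYTES	(PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)

/*
 * With pageblock_order == 0, CMA_MIN_ALIGNMENT_BYTES degrades to
 * PAGE_SIZE, so the existing check
 *
 *	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 *		return -EINVAL;
 *
 * accepts regions that are merely page aligned. Once pageblock_order
 * is initialized later in boot, such a region may no longer satisfy
 * the real pageblock alignment and CMA area activation can fail.
 */
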
So let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
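
To illustrate the effect on callers, below is a hedged sketch of a
hypothetical early-boot user; the function and variable names
(demo_cma_reserve, demo_cma) are made up for illustration and do not
correspond to any in-tree caller:

#include <linux/cma.h>
#include <linux/init.h>
#include <linux/printk.h>

static struct cma *demo_cma;

/* base/size are assumed to already be reserved via memblock_reserve(). */
static int __init demo_cma_reserve(phys_addr_t base, phys_addr_t size)
{
	int ret;

	/*
	 * If this runs before pageblock_order is initialized, the check
	 * added by this patch makes cma_init_reserved_mem() fail with
	 * -EINVAL and an error message, instead of silently accepting a
	 * region that may later turn out not to be pageblock aligned.
	 */
	ret = cma_init_reserved_mem(base, size, 0, "demo_cma", &demo_cma);
	if (ret)
		pr_warn("demo_cma: reservation rejected: %d\n", ret);

	return ret;
}

The sketch only shows the error path; whether early callers should
instead defer their reservation until pageblock_order is set is raised
further down the thread.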

Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
v2 -> v3: Separated the series into 2 as discussed in v2.
[v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/

 mm/cma.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;

+	/*
+	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+	 * needs pageblock_order to be initialized. Let's enforce it.
+	 */
+	if (!pageblock_order) {
+		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+		return -EINVAL;
+	}
+
 	/* ensure minimal alignment required by mm core */
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;
--
2.46.0



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
  2024-10-11 14:56 [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
@ 2024-10-11 15:04 ` Zi Yan
  2024-10-14  6:44 ` Anshuman Khandual
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Zi Yan @ 2024-10-11 15:04 UTC (permalink / raw)
  To: Ritesh Harjani (IBM)
  Cc: linux-mm, linuxppc-dev, Sourabh Jain, Hari Bathini,
	David Hildenbrand, Kirill A . Shutemov, Mahesh J Salgaonkar,
	Michael Ellerman, Madhavan Srinivasan, Aneesh Kumar K . V,
	Donet Tom, LKML, Sachin P Bappalige

On 11 Oct 2024, at 10:56, Ritesh Harjani (IBM) wrote:

> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size does
> not have pageblock_order alignment, it can cause functional failures
> during cma activate area.
>
> So let's enforce pageblock_order to be non-zero during
> cma_init_reserved_mem().
>
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> v2 -> v3: Separated the series into 2 as discussed in v2.
> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
>
>  mm/cma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>

Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
  2024-10-11 14:56 [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
  2024-10-11 15:04 ` Zi Yan
@ 2024-10-14  6:44 ` Anshuman Khandual
  2024-11-13  1:53 ` Ritesh Harjani
  2024-11-13  6:59 ` Andrew Morton
  3 siblings, 0 replies; 6+ messages in thread
From: Anshuman Khandual @ 2024-10-14  6:44 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), linux-mm
  Cc: linuxppc-dev, Sourabh Jain, Hari Bathini, Zi Yan,
	David Hildenbrand, Kirill A . Shutemov, Mahesh J Salgaonkar,
	Michael Ellerman, Madhavan Srinivasan, Aneesh Kumar K . V,
	Donet Tom, LKML, Sachin P Bappalige



On 10/11/24 20:26, Ritesh Harjani (IBM) wrote:
> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size does
> not have pageblock_order alignment, it can cause functional failures
> during cma activate area.
> 
> So let's enforce pageblock_order to be non-zero during
> cma_init_reserved_mem().
> 
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> v2 -> v3: Separated the series into 2 as discussed in v2.
> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
> 
>  mm/cma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/mm/cma.c b/mm/cma.c
> index 3e9724716bad..36d753e7a0bf 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  	if (!size || !memblock_is_region_reserved(base, size))
>  		return -EINVAL;
> 
> +	/*
> +	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
> +	 * needs pageblock_order to be initialized. Let's enforce it.
> +	 */
> +	if (!pageblock_order) {
> +		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> +		return -EINVAL;
> +	}
> +
>  	/* ensure minimal alignment required by mm core */
>  	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
>  		return -EINVAL;
> --
> 2.46.0
> 
> 

LGTM. Hopefully the comment about the CMA_MIN_ALIGNMENT_BYTES alignment
requirement will also remind us to drop this new check if
CMA_MIN_ALIGNMENT_BYTES ever stops depending on pageblock_order.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
  2024-10-11 14:56 [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
  2024-10-11 15:04 ` Zi Yan
  2024-10-14  6:44 ` Anshuman Khandual
@ 2024-11-13  1:53 ` Ritesh Harjani
  2024-11-13  6:52   ` Andrew Morton
  2024-11-13  6:59 ` Andrew Morton
  3 siblings, 1 reply; 6+ messages in thread
From: Ritesh Harjani @ 2024-11-13  1:53 UTC (permalink / raw)
  To: linux-mm
  Cc: linuxppc-dev, Sourabh Jain, Hari Bathini, Zi Yan,
	David Hildenbrand, Kirill A . Shutemov, Mahesh J Salgaonkar,
	Michael Ellerman, Madhavan Srinivasan, Aneesh Kumar K . V,
	Donet Tom, LKML, Sachin P Bappalige, Andrew Morton

"Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:

> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size does
> not have pageblock_order alignment, it can cause functional failures
> during cma activate area.
>
> So let's enforce pageblock_order to be non-zero during
> cma_init_reserved_mem().
>
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> v2 -> v3: Separated the series into 2 as discussed in v2.
> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
>
>  mm/cma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)

Gentle ping. Is this going into -next?

-ritesh

>
> diff --git a/mm/cma.c b/mm/cma.c
> index 3e9724716bad..36d753e7a0bf 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  	if (!size || !memblock_is_region_reserved(base, size))
>  		return -EINVAL;
>
> +	/*
> +	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
> +	 * needs pageblock_order to be initialized. Let's enforce it.
> +	 */
> +	if (!pageblock_order) {
> +		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> +		return -EINVAL;
> +	}
> +
>  	/* ensure minimal alignment required by mm core */
>  	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
>  		return -EINVAL;
> --
> 2.46.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
  2024-11-13  1:53 ` Ritesh Harjani
@ 2024-11-13  6:52   ` Andrew Morton
  0 siblings, 0 replies; 6+ messages in thread
From: Andrew Morton @ 2024-11-13  6:52 UTC (permalink / raw)
  To: Ritesh Harjani
  Cc: linux-mm, linuxppc-dev, Sourabh Jain, Hari Bathini, Zi Yan,
	David Hildenbrand, Kirill A . Shutemov, Mahesh J Salgaonkar,
	Michael Ellerman, Madhavan Srinivasan, Aneesh Kumar K . V,
	Donet Tom, LKML, Sachin P Bappalige

On Wed, 13 Nov 2024 07:23:43 +0530 Ritesh Harjani (IBM) <ritesh.list@gmail.com> wrote:

> "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
> 
> > cma_init_reserved_mem() checks base and size alignment with
> > CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> > early boot when pageblock_order is 0. That means if base and size does
> > not have pageblock_order alignment, it can cause functional failures
> > during cma activate area.
> >
> > So let's enforce pageblock_order to be non-zero during
> > cma_init_reserved_mem().
> >
> > Acked-by: David Hildenbrand <david@redhat.com>
> > Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> > ---
> > v2 -> v3: Separated the series into 2 as discussed in v2.
> > [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
> >
> >  mm/cma.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> 
> Gentle ping. Is this going into -next?

I pay little attention to anything marked "RFC".  Let me take a look.


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
  2024-10-11 14:56 [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
                   ` (2 preceding siblings ...)
  2024-11-13  1:53 ` Ritesh Harjani
@ 2024-11-13  6:59 ` Andrew Morton
  3 siblings, 0 replies; 6+ messages in thread
From: Andrew Morton @ 2024-11-13  6:59 UTC (permalink / raw)
  To: Ritesh Harjani (IBM)
  Cc: linux-mm, linuxppc-dev, Sourabh Jain, Hari Bathini, Zi Yan,
	David Hildenbrand, Kirill A . Shutemov, Mahesh J Salgaonkar,
	Michael Ellerman, Madhavan Srinivasan, Aneesh Kumar K . V,
	Donet Tom, LKML, Sachin P Bappalige

On Fri, 11 Oct 2024 20:26:09 +0530 "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> wrote:

> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0.

This sounds like "some users" are in error.  Please tell us precisely
which users we're talking about here.

Is there a startup ordering issue here?  It feels like a bad idea to
work around callers' flaws within the callee.

Please also describe the userspace-visible effects of this.  Because it
might be the case that we will want to backport any fix into earlier
kernels, and we shouldn't do that until we know how those kernels will
benefit.

And to aid all of this, please attempt to identify a Fixes: target, to
aid others in identifying which kernel version(s) need patching.

Please answer all the above in the next (non-RFC!) version's changelog.

Meanwhile, I'll queue up this version for some testing.

Thanks.


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2024-11-13  6:59 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-10-11 14:56 [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
2024-10-11 15:04 ` Zi Yan
2024-10-14  6:44 ` Anshuman Khandual
2024-11-13  1:53 ` Ritesh Harjani
2024-11-13  6:52   ` Andrew Morton
2024-11-13  6:59 ` Andrew Morton
