[PATCH v4 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
From: Ritesh Harjani (IBM) @ 2024-11-13 14:19 UTC
To: Andrew Morton
Cc: linux-mm, linuxppc-dev, Sourabh Jain, Hari Bathini, Zi Yan,
David Hildenbrand, Kirill A . Shutemov, Mahesh J Salgaonkar,
Michael Ellerman, Madhavan Srinivasan, Aneesh Kumar K . V,
Donet Tom, LKML, Sachin P Bappalige, Ritesh Harjani (IBM),
Anshuman Khandual
cma_init_reserved_mem() checks base and size alignment against
CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
early boot when pageblock_order is still 0. In that case
CMA_MIN_ALIGNMENT_BYTES degenerates to PAGE_SIZE, so a base and size
that are not pageblock aligned can pass the check and later cause
functional failures during CMA area activation (cma_activate_area()).

So let's enforce that pageblock_order is non-zero during
cma_init_reserved_mem() to catch such wrong usages.
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
RFCv3 -> v4:
1. Dropped the RFC tag as requested by Andrew.
2. Updated the changelog & added background as requested by Andrew [RFCv3]
3. Added Acked-by and Reviewed-by tags.
4. Small commit msg update.
5. No functional changes.
[RFCv3]: https://lore.kernel.org/all/20241112225902.f20215e5015f4d7cdf502302@linux-foundation.org/
Background -
============
1. This was seen with fadump on PowerPC, which was calling
cma_init_reserved_mem() before pageblock_order was initialized. That
has since been fixed in fadump itself; the details, including the
userspace-visible effects of the issue, can be found in that patch [1].
2. However, it was also decided to add a stronger enforcement check
within cma_init_reserved_mem() to catch such wrong usages [2]. Hence
this patch; a sketch of the kind of misuse it rejects follows the links
below. It is fine for this to go via -next, and no "Fixes" tag is
required.
[1]: https://lore.kernel.org/all/3ae208e48c0d9cefe53d2dc4f593388067405b7d.1729146153.git.ritesh.list@gmail.com/
[2]: https://lore.kernel.org/all/83eb128e-4f06-4725-a843-a4563f246a44@redhat.com/
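As an illustration, here is a minimal sketch of the kind of early-boot
misuse the new check rejects. This is a hypothetical caller, not part
of this series: demo_reserve_cma(), demo_cma and the "demo" area name
are made up for the example.

	/* Hypothetical early-boot caller, for illustration only. */
	#include <linux/cma.h>
	#include <linux/memblock.h>
	#include <linux/sizes.h>

	static struct cma *demo_cma;

	static int __init demo_reserve_cma(void)
	{
		phys_addr_t size = SZ_64M;
		phys_addr_t base;

		/* Reserve a region so memblock_is_region_reserved() passes. */
		base = memblock_phys_alloc(size, SZ_64M);
		if (!base)
			return -ENOMEM;

		/*
		 * If this runs before pageblock_order is initialized,
		 * the CMA_MIN_ALIGNMENT_BYTES check degenerates to
		 * PAGE_SIZE and a misaligned range could slip through.
		 * With this patch, cma_init_reserved_mem() instead
		 * fails with -EINVAL right here.
		 */
		return cma_init_reserved_mem(base, size, 0, "demo", &demo_cma);
	}

Called this early, the function now returns -EINVAL and logs the
pr_err below, instead of deferring the failure to cma_activate_area().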
mm/cma.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
if (!size || !memblock_is_region_reserved(base, size))
return -EINVAL;
+ /*
+ * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+ * needs pageblock_order to be initialized. Let's enforce it.
+ */
+ if (!pageblock_order) {
+ pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+ return -EINVAL;
+ }
+
/* ensure minimal alignment required by mm core */
if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
return -EINVAL;
--
2.46.0