* [RFC v2 1/4] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
2024-10-11 7:23 [RFC v2 0/4] cma: powerpc fadump fixes Ritesh Harjani (IBM)
@ 2024-10-11 7:23 ` Ritesh Harjani (IBM)
2024-10-11 10:12 ` David Hildenbrand
2024-10-11 7:23 ` [RFC v2 2/4] fadump: Refactor and prepare fadump_cma_init for late init Ritesh Harjani (IBM)
` (3 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 7:23 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige, Ritesh Harjani (IBM)
cma_init_reserved_mem() checks base and size alignment with
CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
early boot when pageblock_order is 0. That means if base and size do
not have pageblock_order alignment, it can cause functional failures
during CMA area activation.
So let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
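For context: CMA_MIN_ALIGNMENT_BYTES scales with pageblock_order, so a
zero pageblock_order silently collapses the alignment requirement to
PAGE_SIZE and lets under-aligned areas through. A rough sketch of the
relevant definitions (paraphrased, not the literal source):

	/* sketch: CMA_MIN_ALIGNMENT_BYTES as derived from pageblock_order */
	#define CMA_MIN_ALIGNMENT_PAGES pageblock_nr_pages  /* 1UL << pageblock_order */
	#define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)

	/* with pageblock_order == 0, the check below degenerates to PAGE_SIZE: */
	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
		return -EINVAL;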
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
mm/cma.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
if (!size || !memblock_is_region_reserved(base, size))
return -EINVAL;
+ /*
+ * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+ * needs pageblock_order to be initialized. Let's enforce it.
+ */
+ if (!pageblock_order) {
+ pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+ return -EINVAL;
+ }
+
/* ensure minimal alignment required by mm core */
if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
return -EINVAL;
--
2.46.0
* Re: [RFC v2 1/4] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
2024-10-11 7:23 ` [RFC v2 1/4] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
@ 2024-10-11 10:12 ` David Hildenbrand
0 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2024-10-11 10:12 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige
On 11.10.24 09:23, Ritesh Harjani (IBM) wrote:
> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size do
> not have pageblock_order alignment, it can cause functional failures
> during CMA area activation.
>
> So let's enforce pageblock_order to be non-zero during
> cma_init_reserved_mem().
>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> mm/cma.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 3e9724716bad..36d753e7a0bf 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> if (!size || !memblock_is_region_reserved(base, size))
> return -EINVAL;
>
> + /*
> + * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
> + * needs pageblock_order to be initialized. Let's enforce it.
> + */
> + if (!pageblock_order) {
> + pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> + return -EINVAL;
> + }
> +
> /* ensure minimal alignment required by mm core */
> if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
> return -EINVAL;
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* [RFC v2 2/4] fadump: Refactor and prepare fadump_cma_init for late init
2024-10-11 7:23 [RFC v2 0/4] cma: powerpc fadump fixes Ritesh Harjani (IBM)
2024-10-11 7:23 ` [RFC v2 1/4] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
@ 2024-10-11 7:23 ` Ritesh Harjani (IBM)
2024-10-11 7:23 ` [RFC v2 3/4] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem Ritesh Harjani (IBM)
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 7:23 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige, Ritesh Harjani (IBM)
The return value of fadump_cma_init() is never used, and
fadump_reserve_mem(), from where fadump_cma_init() gets called today,
already has the required checks.
So make the function's return type void. Also return early if
fadump_supported is false or a dump is already active, so that later
patches can call fadump_cma_init() separately from setup_arch().
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
arch/powerpc/kernel/fadump.c | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index a612e7513a4f..162327d66982 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -78,27 +78,23 @@ static struct cma *fadump_cma;
* But for some reason even if it fails we still have the memory reservation
* with us and we can still continue doing fadump.
*/
-static int __init fadump_cma_init(void)
+static void __init fadump_cma_init(void)
{
unsigned long long base, size;
int rc;
- if (!fw_dump.fadump_enabled)
- return 0;
-
+ if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
+ fw_dump.dump_active)
+ return;
/*
* Do not use CMA if user has provided fadump=nocma kernel parameter.
- * Return 1 to continue with fadump old behaviour.
*/
- if (fw_dump.nocma)
- return 1;
+ if (fw_dump.nocma || !fw_dump.boot_memory_size)
+ return;
base = fw_dump.reserve_dump_area_start;
size = fw_dump.boot_memory_size;
- if (!size)
- return 0;
-
rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
if (rc) {
pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
@@ -108,7 +104,7 @@ static int __init fadump_cma_init(void)
* blocked from production system usage. Hence return 1,
* so that we can continue with fadump.
*/
- return 1;
+ return;
}
/*
@@ -125,10 +121,9 @@ static int __init fadump_cma_init(void)
cma_get_size(fadump_cma),
(unsigned long)cma_get_base(fadump_cma) >> 20,
fw_dump.reserve_dump_area_size);
- return 1;
}
#else
-static int __init fadump_cma_init(void) { return 1; }
+static void __init fadump_cma_init(void) { }
#endif /* CONFIG_CMA */
/*
@@ -638,7 +633,7 @@ int __init fadump_reserve_mem(void)
pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
(size >> 20), base, (memblock_phys_mem_size() >> 20));
- ret = fadump_cma_init();
+ fadump_cma_init();
}
return ret;
--
2.46.0
* [RFC v2 3/4] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
2024-10-11 7:23 [RFC v2 0/4] cma: powerpc fadump fixes Ritesh Harjani (IBM)
2024-10-11 7:23 ` [RFC v2 1/4] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem() Ritesh Harjani (IBM)
2024-10-11 7:23 ` [RFC v2 2/4] fadump: Refactor and prepare fadump_cma_init for late init Ritesh Harjani (IBM)
@ 2024-10-11 7:23 ` Ritesh Harjani (IBM)
2024-10-11 10:51 ` Hari Bathini
2024-10-11 7:23 ` [RFC v2 4/4] fadump: Move fadump_cma_init to setup_arch() after initmem_init() Ritesh Harjani (IBM)
2024-10-11 10:17 ` [RFC v2 0/4] cma: powerpc fadump fixes Michael Ellerman
4 siblings, 1 reply; 10+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 7:23 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige, Ritesh Harjani (IBM)
This patch moves all CMA-related initialization and alignment code
into fadump_cma_init(), which is called at the end. This also means
that [reserve_dump_area_start, boot_memory_size] stays page aligned
during fadump_reserve_mem(); later, fadump_cma_init() extracts the
properly aligned chunk from it and hands that to CMA. This also
fixes an issue in the current code where reserve_dump_area_start is
not aligned when the physical memory has holes and the suitable chunk
starts at an unaligned boundary.
After this, fadump_cma_init() can be called independently later in
setup_arch(), where pageblock_order is non-zero.
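To make the trimming concrete, a small sketch with hypothetical numbers
(assuming CMA_MIN_ALIGNMENT_BYTES == 2MB, i.e. 0x200000; the real value
depends on page size and pageblock_order):

	/* hypothetical values, for illustration only */
	base = 0x0f10000;                     /* reserved area start, only 64K aligned */
	end  = base + size;                   /* with size = 4GB, end = 0x100f10000    */
	base = ALIGN(base, 0x200000);         /* -> 0x1000000   (rounded up)           */
	end  = ALIGN_DOWN(end, 0x200000);     /* -> 0x100e00000 (rounded down)         */
	/* CMA is given [0x1000000, 0x100e00000); the unaligned head and tail
	 * stay within the fadump reservation but are not handed to CMA. */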
Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
arch/powerpc/kernel/fadump.c | 34 ++++++++++++++++++++++------------
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 162327d66982..ffaec625b7a8 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -80,7 +80,7 @@ static struct cma *fadump_cma;
*/
static void __init fadump_cma_init(void)
{
- unsigned long long base, size;
+ unsigned long long base, size, end;
int rc;
if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
@@ -92,8 +92,24 @@ static void __init fadump_cma_init(void)
if (fw_dump.nocma || !fw_dump.boot_memory_size)
return;
+ /*
+ * [base, end) should be reserved during early init in
+ * fadump_reserve_mem(). No need to check this here as
+ * cma_init_reserved_mem() already checks for overlap.
+ * Here we give the aligned chunk of this reserved memory to CMA.
+ */
base = fw_dump.reserve_dump_area_start;
size = fw_dump.boot_memory_size;
+ end = base + size;
+
+ base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);
+ end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES);
+ size = end - base;
+
+ if (end <= base) {
+ pr_warn("%s: Too less memory to give to CMA\n", __func__);
+ return;
+ }
rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
if (rc) {
@@ -116,11 +132,12 @@ static void __init fadump_cma_init(void)
/*
* So we now have successfully initialized cma area for fadump.
*/
- pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx "
+ pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] "
"bytes of memory reserved for firmware-assisted dump\n",
- cma_get_size(fadump_cma),
- (unsigned long)cma_get_base(fadump_cma) >> 20,
- fw_dump.reserve_dump_area_size);
+ cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20,
+ fw_dump.reserve_dump_area_start,
+ fw_dump.boot_memory_size >> 20);
+ return;
}
#else
static void __init fadump_cma_init(void) { }
@@ -553,13 +570,6 @@ int __init fadump_reserve_mem(void)
if (!fw_dump.dump_active) {
fw_dump.boot_memory_size =
PAGE_ALIGN(fadump_calculate_reserve_size());
-#ifdef CONFIG_CMA
- if (!fw_dump.nocma) {
- fw_dump.boot_memory_size =
- ALIGN(fw_dump.boot_memory_size,
- CMA_MIN_ALIGNMENT_BYTES);
- }
-#endif
bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
if (fw_dump.boot_memory_size < bootmem_min) {
--
2.46.0
* Re: [RFC v2 3/4] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
2024-10-11 7:23 ` [RFC v2 3/4] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem Ritesh Harjani (IBM)
@ 2024-10-11 10:51 ` Hari Bathini
0 siblings, 0 replies; 10+ messages in thread
From: Hari Bathini @ 2024-10-11 10:51 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige
On 11/10/24 12:53 pm, Ritesh Harjani (IBM) wrote:
> This patch moves all CMA-related initialization and alignment code
> into fadump_cma_init(), which is called at the end. This also means
> that [reserve_dump_area_start, boot_memory_size] stays page aligned
> during fadump_reserve_mem(); later, fadump_cma_init() extracts the
> properly aligned chunk from it and hands that to CMA. This also
> fixes an issue in the current code where reserve_dump_area_start is
> not aligned when the physical memory has holes and the suitable chunk
> starts at an unaligned boundary.
>
> After this, fadump_cma_init() can be called independently later in
> setup_arch(), where pageblock_order is non-zero.
>
> Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> arch/powerpc/kernel/fadump.c | 34 ++++++++++++++++++++++------------
> 1 file changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index 162327d66982..ffaec625b7a8 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -80,7 +80,7 @@ static struct cma *fadump_cma;
> */
> static void __init fadump_cma_init(void)
> {
> - unsigned long long base, size;
> + unsigned long long base, size, end;
> int rc;
>
> if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
> @@ -92,8 +92,24 @@ static void __init fadump_cma_init(void)
> if (fw_dump.nocma || !fw_dump.boot_memory_size)
> return;
>
> + /*
> + * [base, end) should be reserved during early init in
> + * fadump_reserve_mem(). No need to check this here as
> + * cma_init_reserved_mem() already checks for overlap.
> + * Here we give the aligned chunk of this reserved memory to CMA.
> + */
> base = fw_dump.reserve_dump_area_start;
> size = fw_dump.boot_memory_size;
> + end = base + size;
> +
> + base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);
> + end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES);
> + size = end - base;
> +
> + if (end <= base) {
> + pr_warn("%s: Too less memory to give to CMA\n", __func__);
> + return;
> + }
>
> rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
> if (rc) {
> @@ -116,11 +132,12 @@ static void __init fadump_cma_init(void)
> /*
> * So we now have successfully initialized cma area for fadump.
> */
> - pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx "
> + pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] "
> "bytes of memory reserved for firmware-assisted dump\n",
> - cma_get_size(fadump_cma),
> - (unsigned long)cma_get_base(fadump_cma) >> 20,
> - fw_dump.reserve_dump_area_size);
> + cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20,
> + fw_dump.reserve_dump_area_start,
> + fw_dump.boot_memory_size >> 20);
The changes look good. Thanks for looking into it.
For patches 2, 3 & 4
Acked-by: Hari Bathini <hbathini@linux.ibm.com>
> + return;
> }
> #else
> static void __init fadump_cma_init(void) { }
> @@ -553,13 +570,6 @@ int __init fadump_reserve_mem(void)
> if (!fw_dump.dump_active) {
> fw_dump.boot_memory_size =
> PAGE_ALIGN(fadump_calculate_reserve_size());
> -#ifdef CONFIG_CMA
> - if (!fw_dump.nocma) {
> - fw_dump.boot_memory_size =
> - ALIGN(fw_dump.boot_memory_size,
> - CMA_MIN_ALIGNMENT_BYTES);
> - }
> -#endif
>
> bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
> if (fw_dump.boot_memory_size < bootmem_min) {
* [RFC v2 4/4] fadump: Move fadump_cma_init to setup_arch() after initmem_init()
2024-10-11 7:23 [RFC v2 0/4] cma: powerpc fadump fixes Ritesh Harjani (IBM)
` (2 preceding siblings ...)
2024-10-11 7:23 ` [RFC v2 3/4] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem Ritesh Harjani (IBM)
@ 2024-10-11 7:23 ` Ritesh Harjani (IBM)
2024-10-11 10:17 ` [RFC v2 0/4] cma: powerpc fadump fixes Michael Ellerman
4 siblings, 0 replies; 10+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 7:23 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige, Ritesh Harjani (IBM)
During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since
pageblock_order is still zero; it only gets initialized later, during
initmem_init(), e.g.:
setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()
One call path where this causes issues is:
early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()
This causes the CMA memory alignment check in cma_init_reserved_mem()
to be bypassed. Later, cma_activate_area() can then hit a
VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
area was not pageblock_order aligned.
Fix it by moving the fadump_cma_init() call after initmem_init(),
where other such CMA reservations are also done.
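Reading the splat below with that in mind: pfn 0x10010 has bit 4 set,
so pfn & ((1 << order) - 1) is non-zero for any order above 4. A sketch
of the failing check (the order value here is hypothetical; the exact
order depends on the freeing path):

	unsigned long pfn = 0x10010;
	unsigned int order = 9;	/* e.g. a pageblock-sized chunk, hypothetical */
	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);	/* 0x10010 & 0x1ff == 0x10 -> fires */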
<stack trace>
==============
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA
raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
------------[ cut here ]------------
kernel BUG at mm/page_alloc.c:778!
Call Trace:
__free_one_page+0x57c/0x7b0 (unreliable)
free_pcppages_bulk+0x1a8/0x2c8
free_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand <david@redhat.com>
Reported-by: Sachin P Bappalige <sachinpb@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
arch/powerpc/include/asm/fadump.h | 7 +++++++
arch/powerpc/kernel/fadump.c | 6 +-----
arch/powerpc/kernel/setup-common.c | 6 ++++--
3 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
index ef40c9b6972a..3638f04447f5 100644
--- a/arch/powerpc/include/asm/fadump.h
+++ b/arch/powerpc/include/asm/fadump.h
@@ -34,4 +34,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
int depth, void *data);
extern int fadump_reserve_mem(void);
#endif
+
+#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
+void fadump_cma_init(void);
+#else
+static inline void fadump_cma_init(void) { }
+#endif
+
#endif /* _ASM_POWERPC_FADUMP_H */
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index ffaec625b7a8..c42f89862893 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -78,7 +78,7 @@ static struct cma *fadump_cma;
* But for some reason even if it fails we still have the memory reservation
* with us and we can still continue doing fadump.
*/
-static void __init fadump_cma_init(void)
+void __init fadump_cma_init(void)
{
unsigned long long base, size, end;
int rc;
@@ -139,8 +139,6 @@ static void __init fadump_cma_init(void)
fw_dump.boot_memory_size >> 20);
return;
}
-#else
-static void __init fadump_cma_init(void) { }
#endif /* CONFIG_CMA */
/*
@@ -642,8 +640,6 @@ int __init fadump_reserve_mem(void)
pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
(size >> 20), base, (memblock_phys_mem_size() >> 20));
-
- fadump_cma_init();
}
return ret;
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 943430077375..b6b01502e504 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -997,9 +997,11 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
/*
- * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
- * be called after initmem_init(), so that pageblock_order is initialised.
+ * Reserve large chunks of memory for use by CMA for fadump, KVM and
+ * hugetlb. These must be called after initmem_init(), so that
+ * pageblock_order is initialised.
*/
+ fadump_cma_init();
kvm_cma_reserve();
gigantic_hugetlb_cma_reserve();
--
2.46.0
* Re: [RFC v2 0/4] cma: powerpc fadump fixes
2024-10-11 7:23 [RFC v2 0/4] cma: powerpc fadump fixes Ritesh Harjani (IBM)
` (3 preceding siblings ...)
2024-10-11 7:23 ` [RFC v2 4/4] fadump: Move fadump_cma_init to setup_arch() after initmem_init() Ritesh Harjani (IBM)
@ 2024-10-11 10:17 ` Michael Ellerman
2024-10-11 10:25 ` David Hildenbrand
2024-10-11 11:00 ` Ritesh Harjani
4 siblings, 2 replies; 10+ messages in thread
From: Michael Ellerman @ 2024-10-11 10:17 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Madhavan Srinivasan,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige,
Ritesh Harjani (IBM)
"Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
> Please find the v2 of cma related powerpc fadump fixes.
>
> Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
> cma_init_reserved_mem() before the pageblock_order is initialized.
>
> I guess, it's best if Patch-1 goes via mm tree and since rest of the changes
> are powerpc fadump fixes hence those should go via powerpc tree. Right?
Yes I think that will work.
Because there's no actual dependency on patch 1, correct?
Let's see if the mm folks are happy with the approach, and if so you
should send patch 1 on its own, and patches 2-4 as a separate series.
Then I can take the series (2-4) as fixes, and patch 1 can go via the mm
tree (probably in next, not as a fix).
cheers
> v1 -> v2:
> =========
> 1. Review comments from David to call fadump_cma_init() after the
> pageblock_order is initialized. Also to catch usages if someone tries
> to call cma_init_reserved_mem() before pageblock_order is initialized.
>
> [v1]: https://lore.kernel.org/linuxppc-dev/c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.list@gmail.com/
>
> Ritesh Harjani (IBM) (4):
> cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
> fadump: Refactor and prepare fadump_cma_init for late init
> fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
> fadump: Move fadump_cma_init to setup_arch() after initmem_init()
>
> arch/powerpc/include/asm/fadump.h | 7 ++++
> arch/powerpc/kernel/fadump.c | 55 +++++++++++++++---------------
> arch/powerpc/kernel/setup-common.c | 6 ++--
> mm/cma.c | 9 +++++
> 4 files changed, 48 insertions(+), 29 deletions(-)
>
> --
> 2.46.0
* Re: [RFC v2 0/4] cma: powerpc fadump fixes
2024-10-11 10:17 ` [RFC v2 0/4] cma: powerpc fadump fixes Michael Ellerman
@ 2024-10-11 10:25 ` David Hildenbrand
2024-10-11 11:00 ` Ritesh Harjani
1 sibling, 0 replies; 10+ messages in thread
From: David Hildenbrand @ 2024-10-11 10:25 UTC (permalink / raw)
To: Michael Ellerman, Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan,
Kirill A . Shutemov, Mahesh J Salgaonkar, Madhavan Srinivasan,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
On 11.10.24 12:17, Michael Ellerman wrote:
> "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
>> Please find the v2 of cma related powerpc fadump fixes.
>>
>> Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
>> cma_init_reserved_mem() before the pageblock_order is initialized.
>>
>> I guess, it's best if Patch-1 goes via mm tree and since rest of the changes
>> are powerpc fadump fixes hence those should go via powerpc tree. Right?
>
> Yes I think that will work.
>
> Because there's no actual dependency on patch 1, correct?
>
> Let's see if the mm folks are happy with the approach, and if so you
> should send patch 1 on its own, and patches 2-4 as a separate series.
Makes sense to me.
--
Cheers,
David / dhildenb
* Re: [RFC v2 0/4] cma: powerpc fadump fixes
2024-10-11 10:17 ` [RFC v2 0/4] cma: powerpc fadump fixes Michael Ellerman
2024-10-11 10:25 ` David Hildenbrand
@ 2024-10-11 11:00 ` Ritesh Harjani
1 sibling, 0 replies; 10+ messages in thread
From: Ritesh Harjani @ 2024-10-11 11:00 UTC (permalink / raw)
To: Michael Ellerman, linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Madhavan Srinivasan,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
Michael Ellerman <mpe@ellerman.id.au> writes:
> "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
>> Please find the v2 of cma related powerpc fadump fixes.
>>
>> Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
>> cma_init_reserved_mem() before the pageblock_order is initialized.
>>
>> I guess, it's best if Patch-1 goes via mm tree and since rest of the changes
>> are powerpc fadump fixes hence those should go via powerpc tree. Right?
>
> Yes I think that will work.
>
> Because there's no actual dependency on patch 1, correct?
There is no dependency, yes.
>
> Let's see if the mm folks are happy with the approach, and if so you
> should send patch 1 on its own, and patches 2-4 as a separate series.
>
> Then I can take the series (2-4) as fixes, and patch 1 can go via the mm
> tree (probably in next, not as a fix).
>
Sure. Since David has acked patch-1, let me split this into 2 series
as you mentioned above and re-send both separately, so that they can be
picked up in their respective trees.
Will do it in some time. Thanks!
-ritesh
> cheers
>
>> v1 -> v2:
>> =========
>> 1. Review comments from David to call fadump_cma_init() after the
>> pageblock_order is initialized. Also to catch usages if someone tries
>> to call cma_init_reserved_mem() before pageblock_order is initialized.
>>
>> [v1]: https://lore.kernel.org/linuxppc-dev/c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.list@gmail.com/
>>
>> Ritesh Harjani (IBM) (4):
>> cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
>> fadump: Refactor and prepare fadump_cma_init for late init
>> fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
>> fadump: Move fadump_cma_init to setup_arch() after initmem_init()
>>
>> arch/powerpc/include/asm/fadump.h | 7 ++++
>> arch/powerpc/kernel/fadump.c | 55 +++++++++++++++---------------
>> arch/powerpc/kernel/setup-common.c | 6 ++--
>> mm/cma.c | 9 +++++
>> 4 files changed, 48 insertions(+), 29 deletions(-)
>>
>> --
>> 2.46.0