From: Mike Rapoport <rppt@kernel.org>
To: priyanshukumarpu@gmail.com
Cc: akpm@linux-foundation.org, changyuanl@google.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] tools/testing/memblock: fix stale NUMA reservation tests
Date: Tue, 14 Apr 2026 18:14:05 +0300 [thread overview]
Message-ID: <ad5ZvYLyC--CXPZY@kernel.org> (raw)
In-Reply-To: <20260413091458.774770-1-priyanshukumarpu@gmail.com>
Hi,

On Mon, Apr 13, 2026 at 09:14:58AM +0000, priyanshukumarpu@gmail.com wrote:
> From: Priyanshu Kumar <priyanshukumarpu@gmail.com>
>
> memblock allocations now reserve memory with MEMBLOCK_RSRV_KERN and,
> on NUMA configurations, record the requested node on the reserved
> region. Several memblock simulator NUMA tests still expected merges
> that only worked before those reservation semantics changed, so the
> suite aborted even though the allocator behavior was correct.
>
> Update the NUMA merge expectations in the memblock_alloc_try_nid()
> and memblock_alloc_exact_nid_raw() tests to match the current reserved
> region metadata rules. For cases that should still merge, create the
> pre-existing reservation with matching nid and MEMBLOCK_RSRV_KERN
> metadata. Also strengthen the memblock_alloc_node() coverage by
> checking the newly created reserved region directly instead of
> re-reading the source memory node descriptor.
>
> Finally, drop the stale README/TODO notes that still claimed
> memblock_alloc_node() could not be tested.
>
> The memblock simulator passes again with NUMA enabled after these
> updates.
>
> Signed-off-by: Priyanshu Kumar <priyanshukumarpu@gmail.com>
> ---
> tools/testing/memblock/README | 5 +--
> tools/testing/memblock/TODO | 4 +-
> .../memblock/tests/alloc_exact_nid_api.c | 29 +++++++-----
> tools/testing/memblock/tests/alloc_nid_api.c | 44 +++++++++++++------
> 4 files changed, 53 insertions(+), 29 deletions(-)
>
> diff --git a/tools/testing/memblock/README b/tools/testing/memblock/README
> index 7ca437d81806..b435f48d8a70 100644
> --- a/tools/testing/memblock/README
> +++ b/tools/testing/memblock/README
> @@ -104,10 +104,7 @@ called at the beginning of each test.
> Known issues
> ============
>
> -1. Requesting a specific NUMA node via memblock_alloc_node() does not work as
> - intended. Once the fix is in place, tests for this function can be added.
> -
> -2. Tests for memblock_alloc_low() can't be easily implemented. The function uses
> +1. Tests for memblock_alloc_low() can't be easily implemented. The function uses
> ARCH_LOW_ADDRESS_LIMIT marco, which can't be changed to point at the low
> memory of the memory_block.
>
> diff --git a/tools/testing/memblock/TODO b/tools/testing/memblock/TODO
> index e306c90c535f..c13ad0dae776 100644
> --- a/tools/testing/memblock/TODO
> +++ b/tools/testing/memblock/TODO
> @@ -1,5 +1,5 @@
> TODO
> =====
>
> -1. Add tests for memblock_alloc_node() to check if the correct NUMA node is set
> - for the new region
> +1. Add tests for memblock_alloc_low() once the simulator can model
> + ARCH_LOW_ADDRESS_LIMIT against the low memory in memory_block
> diff --git a/tools/testing/memblock/tests/alloc_exact_nid_api.c b/tools/testing/memblock/tests/alloc_exact_nid_api.c
> index 6e14447da6e1..3f5ab994f63a 100644
> --- a/tools/testing/memblock/tests/alloc_exact_nid_api.c
> +++ b/tools/testing/memblock/tests/alloc_exact_nid_api.c
> @@ -368,7 +368,8 @@ static int alloc_exact_nid_bottom_up_numa_part_reserved_check(void)
> max_addr = memblock_end_of_DRAM();
> total_size = size + r1.size;
>
> - memblock_reserve(r1.base, r1.size);
> + ASSERT_EQ(0, __memblock_reserve(r1.base, r1.size, nid_req,
> + MEMBLOCK_RSRV_KERN));
No need to check the return value here.
> allocated_ptr = memblock_alloc_exact_nid_raw(size, SMP_CACHE_BYTES,
> min_addr, max_addr,
> nid_req);
> @@ -831,14 +832,17 @@ static int alloc_exact_nid_numa_large_region_generic_check(void)
> * | | r2 | new | r1 | |
> * +-------------+----+-----------------------+----+------------------+
> *
> - * Expect to merge all of the regions into one. The region counter and total
> - * size fields get updated.
> + * Expect to allocate the requested node as a separate kernel-reserved region.
> + * The neighboring reservations remain distinct because the new region records
> + * the requested NUMA node and MEMBLOCK_RSRV_KERN flag.
Please don't change the test. Just use MEMBLOCK_RSRV_KERN for the first
reserved region.
The same comment applies to other changes as well.
> */
> static int alloc_exact_nid_numa_reserved_full_merge_generic_check(void)
> {
> int nid_req = 6;
> int nid_next = nid_req + 1;
> - struct memblock_region *new_rgn = &memblock.reserved.regions[0];
> + struct memblock_region *left_rgn = &memblock.reserved.regions[0];
> + struct memblock_region *new_rgn = &memblock.reserved.regions[1];
> + struct memblock_region *right_rgn = &memblock.reserved.regions[2];
--
Sincerely yours,
Mike.