linux-mm.kvack.org archive mirror
* [Patch v3 0/7] memblock: cleanup
@ 2024-05-07  7:58 Wei Yang
  2024-05-07  7:58 ` [Patch v3 1/7] memblock tests: add memblock_reserve_all_locations_check() Wei Yang
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Changes in v3:

* separate case memblock_reserve_all_locations_check()
* add test description for memblock_reserve_many_may_conflict_check()
* drop patch 4

Changes in v2:

* remove the two round reduction patch
* add 129th memory block to memblock.reserved
* add memblock_reserve_many_may_conflict_check()
* two new patch at the end

Wei Yang (7):
  memblock tests: add memblock_reserve_all_locations_check()
  memblock tests: add memblock_reserve_many_may_conflict_check()
  mm/memblock: fix comment for memblock_isolate_range()
  memblock tests: add memblock_overlaps_region_checks
  mm/memblock: return true directly on finding overlap region
  mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap
  mm/memblock: default region's nid may be MAX_NUMNODES

 mm/memblock.c                            |  11 +-
 tools/include/linux/mm.h                 |   1 +
 tools/testing/memblock/tests/basic_api.c | 306 +++++++++++++++++++++++
 tools/testing/memblock/tests/common.c    |   4 +-
 tools/testing/memblock/tests/common.h    |   4 +
 5 files changed, 319 insertions(+), 7 deletions(-)

-- 
2.34.1



^ permalink raw reply	[flat|nested] 9+ messages in thread

* [Patch v3 1/7] memblock tests: add memblock_reserve_all_locations_check()
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
@ 2024-05-07  7:58 ` Wei Yang
  2024-05-07  7:58 ` [Patch v3 2/7] memblock tests: add memblock_reserve_many_may_conflict_check() Wei Yang
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Instead of adding the 129th memory block only at the last position, let's
try all possible positions.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c | 107 +++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index f317fe691fc4..bd3ebbf6b697 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -982,6 +982,112 @@ static int memblock_reserve_many_check(void)
 	return 0;
 }
 
+
+/*
+ * A test that tries to reserve the 129th memory block at all possible
+ * positions. Expect to trigger memblock_double_array() to double
+ * memblock.reserved.max and find a new valid memory range for reserved.regions.
+ *
+ *  0               1               2                 128
+ *  +-------+       +-------+       +-------+         +-------+
+ *  |  32K  |       |  32K  |       |  32K  |   ...   |  32K  |
+ *  +-------+-------+-------+-------+-------+         +-------+
+ *          |<-32K->|       |<-32K->|
+ *
+ */
+/* Keep the gaps so these memory regions will not be merged. */
+#define MEMORY_BASE(idx) (SZ_128K + (MEM_SIZE * 2) * (idx))
+static int memblock_reserve_all_locations_check(void)
+{
+	int i, skip;
+	void *orig_region;
+	struct region r = {
+		.base = SZ_16K,
+		.size = SZ_16K,
+	};
+	phys_addr_t new_reserved_regions_size;
+
+	PREFIX_PUSH();
+
+	/* Reserve the 129th memory block at all possible positions */
+	for (skip = 0; skip < INIT_MEMBLOCK_REGIONS + 1; skip++) {
+		reset_memblock_regions();
+		memblock_allow_resize();
+
+		/* Add a valid memory region used by double_array(). */
+		dummy_physical_memory_init();
+		memblock_add(dummy_physical_memory_base(), MEM_SIZE);
+
+		for (i = 0; i < INIT_MEMBLOCK_REGIONS + 1; i++) {
+			if (i == skip)
+				continue;
+
+			/* Reserve fake memory regions to fill up memblock.reserved. */
+			memblock_reserve(MEMORY_BASE(i), MEM_SIZE);
+
+			if (i < skip) {
+				ASSERT_EQ(memblock.reserved.cnt, i + 1);
+				ASSERT_EQ(memblock.reserved.total_size, (i + 1) * MEM_SIZE);
+			} else {
+				ASSERT_EQ(memblock.reserved.cnt, i);
+				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
+			}
+		}
+
+		orig_region = memblock.reserved.regions;
+
+		/* Reserving the 129th memory region triggers memblock_double_array(). */
+		memblock_reserve(MEMORY_BASE(skip), MEM_SIZE);
+
+		/*
+		 * This is the size of the memory region used by the doubled
+		 * reserved.regions array; it is reserved because it is now in use.
+		 * The size goes into the expected memblock.reserved.total_size.
+		 */
+		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
+						sizeof(struct memblock_region));
+		/*
+		 * memblock_double_array() finds a free memory region to host the
+		 * new reserved.regions array, and that region itself gets
+		 * reserved, so one more region exists in memblock.reserved.
+		 * That extra region's size is new_reserved_regions_size.
+		 */
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		/*
+		 * memblock_double_array() has succeeded. Check that
+		 * memblock_reserve() still works as expected afterwards.
+		 */
+		memblock_reserve(r.base, r.size);
+		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
+		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
+
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size +
+							r.size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		dummy_physical_memory_cleanup();
+
+		/*
+		 * The current reserved.regions occupies a range of memory
+		 * allocated by dummy_physical_memory_init(). Once that memory is
+		 * freed it must not be used, so restore the original regions
+		 * array to keep subsequent tests unaffected by the doubling.
+		 */
+		memblock.reserved.regions = orig_region;
+		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
+	}
+
+	test_pass_pop();
+
+	return 0;
+}
+
 static int memblock_reserve_checks(void)
 {
 	prefix_reset();
@@ -997,6 +1103,7 @@ static int memblock_reserve_checks(void)
 	memblock_reserve_between_check();
 	memblock_reserve_near_max_check();
 	memblock_reserve_many_check();
+	memblock_reserve_all_locations_check();
 
 	prefix_pop();
 
-- 
2.34.1




* [Patch v3 2/7] memblock tests: add memblock_reserve_many_may_conflict_check()
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
  2024-05-07  7:58 ` [Patch v3 1/7] memblock tests: add memblock_reserve_all_locations_check() Wei Yang
@ 2024-05-07  7:58 ` Wei Yang
  2024-05-07  7:58 ` [Patch v3 3/7] mm/memblock: fix comment for memblock_isolate_range() Wei Yang
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
fix overlapping allocation when doubling reserved array").

This is done by also adding the 129th reserved region to memblock.memory. If
memblock_double_array() uses this region as the new array, the test fails.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

---
v3:
  * rename MEM_ALLOC_SIZE to PHYS_MEM_SIZE
  * add test description
---
 tools/testing/memblock/tests/basic_api.c | 151 +++++++++++++++++++++++
 tools/testing/memblock/tests/common.c    |   4 +-
 tools/testing/memblock/tests/common.h    |   1 +
 3 files changed, 154 insertions(+), 2 deletions(-)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index bd3ebbf6b697..fdac82656d15 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -1088,6 +1088,156 @@ static int memblock_reserve_all_locations_check(void)
 	return 0;
 }
 
+/*
+ * A test that tries to reserve the 129th memory block at all possible
+ * positions. Expect to trigger memblock_double_array() to double
+ * memblock.reserved.max, find a new valid memory range for reserved.regions,
+ * and make sure it doesn't conflict with the range we want to reserve.
+ *
+ * For example, we have 128 regions in reserved and now want to reserve
+ * the skipped one. Since reserved is full, memblock_double_array() would find
+ * an available range in memory for the new array. We intentionally put two
+ * ranges in memory, one of which is the exact range of the skipped one. Before
+ * commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation when doubling
+ * reserved array"), the new array would sit in the skipped range, which is a
+ * conflict. The new array is expected to be allocated from memory.regions[0].
+ *
+ *           0                               1
+ * memory    +-------+                       +-------+
+ *           |  32K  |                       |  32K  |
+ *           +-------+ ------+-------+-------+-------+
+ *                   |<-32K->|<-32K->|<-32K->|
+ *
+ *                           0               skipped           127
+ * reserved                  +-------+       .........         +-------+
+ *                           |  32K  |       .  32K  .   ...   |  32K  |
+ *                           +-------+-------+-------+         +-------+
+ *                                   |<-32K->|
+ *                                           ^
+ *                                           |
+ *                                           |
+ *                                           skipped one
+ */
+/* Keep the gaps so these memory regions will not be merged. */
+#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
+static int memblock_reserve_many_may_conflict_check(void)
+{
+	int i, skip;
+	void *orig_region;
+	struct region r = {
+		.base = SZ_16K,
+		.size = SZ_16K,
+	};
+	phys_addr_t new_reserved_regions_size;
+
+	/*
+	 *  0        1          129
+	 *  +---+    +---+      +---+
+	 *  |32K|    |32K|  ..  |32K|
+	 *  +---+    +---+      +---+
+	 *
+	 * Pre-allocate ranges for 129 memory blocks, plus one range at idx 0
+	 * for the doubled memblock.reserved.regions array.
+	 */
+	dummy_physical_memory_init();
+	phys_addr_t memory_base = dummy_physical_memory_base();
+	phys_addr_t offset = PAGE_ALIGN(memory_base);
+
+	PREFIX_PUSH();
+
+	/* Reserve the 129th memory block at all possible positions */
+	for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++) {
+		reset_memblock_regions();
+		memblock_allow_resize();
+
+		reset_memblock_attributes();
+		/* Add a valid memory region used by double_array(). */
+		memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
+		/*
+		 * Add a memory region which will be reserved as 129th memory
+		 * region. This is not expected to be used by double_array().
+		 */
+		memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
+
+		for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
+			if (i == skip)
+				continue;
+
+			/* Reserve fake memory regions to fill up memblock.reserved. */
+			memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
+
+			if (i < skip) {
+				ASSERT_EQ(memblock.reserved.cnt, i);
+				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
+			} else {
+				ASSERT_EQ(memblock.reserved.cnt, i - 1);
+				ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
+			}
+		}
+
+		orig_region = memblock.reserved.regions;
+
+		/* Reserving the 129th memory region triggers memblock_double_array(). */
+		memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
+
+		/*
+		 * This is the size of the memory region used by the doubled
+		 * reserved.regions array; it is reserved because it is now in use.
+		 * The size goes into the expected memblock.reserved.total_size.
+		 */
+		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
+						sizeof(struct memblock_region));
+		/*
+		 * memblock_double_array() finds a free memory region to host the
+		 * new reserved.regions array, and that region itself gets
+		 * reserved, so one more region exists in memblock.reserved.
+		 * That extra region's size is new_reserved_regions_size.
+		 */
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		/*
+		 * The first reserved region is allocated for the doubled array,
+		 * with size new_reserved_regions_size and base
+		 * MEMORY_BASE_OFFSET(0, offset) + SZ_32K - new_reserved_regions_size.
+		 */
+		ASSERT_EQ(memblock.reserved.regions[0].base + memblock.reserved.regions[0].size,
+			  MEMORY_BASE_OFFSET(0, offset) + SZ_32K);
+		ASSERT_EQ(memblock.reserved.regions[0].size, new_reserved_regions_size);
+
+		/*
+		 * memblock_double_array() has succeeded. Check that
+		 * memblock_reserve() still works as expected afterwards.
+		 */
+		memblock_reserve(r.base, r.size);
+		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
+		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
+
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size +
+							r.size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		/*
+		 * The current reserved.regions occupies a range of memory
+		 * allocated by dummy_physical_memory_init(). Once that memory is
+		 * freed it must not be used, so restore the original regions
+		 * array to keep subsequent tests unaffected by the doubling.
+		 */
+		memblock.reserved.regions = orig_region;
+		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
+	}
+
+	dummy_physical_memory_cleanup();
+
+	test_pass_pop();
+
+	return 0;
+}
+
 static int memblock_reserve_checks(void)
 {
 	prefix_reset();
@@ -1104,6 +1254,7 @@ static int memblock_reserve_checks(void)
 	memblock_reserve_near_max_check();
 	memblock_reserve_many_check();
 	memblock_reserve_all_locations_check();
+	memblock_reserve_many_may_conflict_check();
 
 	prefix_pop();
 
diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
index c2c569f12178..3250c8e5124b 100644
--- a/tools/testing/memblock/tests/common.c
+++ b/tools/testing/memblock/tests/common.c
@@ -61,7 +61,7 @@ void reset_memblock_attributes(void)
 
 static inline void fill_memblock(void)
 {
-	memset(memory_block.base, 1, MEM_SIZE);
+	memset(memory_block.base, 1, PHYS_MEM_SIZE);
 }
 
 void setup_memblock(void)
@@ -103,7 +103,7 @@ void setup_numa_memblock(const unsigned int node_fracs[])
 
 void dummy_physical_memory_init(void)
 {
-	memory_block.base = malloc(MEM_SIZE);
+	memory_block.base = malloc(PHYS_MEM_SIZE);
 	assert(memory_block.base);
 	fill_memblock();
 }
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index b5ec59aa62d7..2f26405562b0 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -12,6 +12,7 @@
 #include <../selftests/kselftest.h>
 
 #define MEM_SIZE		SZ_32K
+#define PHYS_MEM_SIZE		SZ_16M
 #define NUMA_NODES		8
 
 #define INIT_MEMBLOCK_REGIONS			128
-- 
2.34.1




* [Patch v3 3/7] mm/memblock: fix comment for memblock_isolate_range()
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
  2024-05-07  7:58 ` [Patch v3 1/7] memblock tests: add memblock_reserve_all_locations_check() Wei Yang
  2024-05-07  7:58 ` [Patch v3 2/7] memblock tests: add memblock_reserve_many_may_conflict_check() Wei Yang
@ 2024-05-07  7:58 ` Wei Yang
  2024-05-07  7:58 ` [Patch v3 4/7] memblock tests: add memblock_overlaps_region_checks Wei Yang
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

The isolated range is [*@start_rgn, *@end_rgn - 1], while the comment
implies that *@end_rgn is the index of the last region inside the range.

Let's correct it.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

---
v3: rppt reword comment
---
 mm/memblock.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 98d25689cf10..7f3cd96d6769 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -777,7 +777,8 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
  * Walk @type and ensure that regions don't cross the boundaries defined by
  * [@base, @base + @size).  Crossing regions are split at the boundaries,
  * which may create at most two more regions.  The index of the first
- * region inside the range is returned in *@start_rgn and end in *@end_rgn.
+ * region inside the range is returned in *@start_rgn and the index of the
+ * first region after the range is returned in *@end_rgn.
  *
  * Return:
  * 0 on success, -errno on failure.
-- 
2.34.1




* [Patch v3 4/7] memblock tests: add memblock_overlaps_region_checks
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
                   ` (2 preceding siblings ...)
  2024-05-07  7:58 ` [Patch v3 3/7] mm/memblock: fix comment for memblock_isolate_range() Wei Yang
@ 2024-05-07  7:58 ` Wei Yang
  2024-05-07  7:58 ` [Patch v3 5/7] mm/memblock: return true directly on finding overlap region Wei Yang
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Add a test case for memblock_overlaps_region().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c | 48 ++++++++++++++++++++++++
 tools/testing/memblock/tests/common.h    |  3 ++
 2 files changed, 51 insertions(+)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index fdac82656d15..67503089e6a0 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -2387,6 +2387,53 @@ static int memblock_trim_memory_checks(void)
 	return 0;
 }
 
+static int memblock_overlaps_region_check(void)
+{
+	struct region r = {
+		.base = SZ_1G,
+		.size = SZ_4M
+	};
+
+	PREFIX_PUSH();
+
+	reset_memblock_regions();
+	memblock_add(r.base, r.size);
+
+	/* Far Away */
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1M, SZ_1M));
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_2G, SZ_1M));
+
+	/* Neighbor */
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_1M, SZ_1M));
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_4M, SZ_1M));
+
+	/* Partial Overlap */
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_1M, SZ_2M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_2M, SZ_2M));
+
+	/* Totally Overlap */
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G, SZ_4M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_2M, SZ_8M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_1M, SZ_1M));
+
+	test_pass_pop();
+
+	return 0;
+}
+
+static int memblock_overlaps_region_checks(void)
+{
+	prefix_reset();
+	prefix_push("memblock_overlaps_region");
+	test_print("Running memblock_overlaps_region tests...\n");
+
+	memblock_overlaps_region_check();
+
+	prefix_pop();
+
+	return 0;
+}
+
 int memblock_basic_checks(void)
 {
 	memblock_initialization_check();
@@ -2396,6 +2443,7 @@ int memblock_basic_checks(void)
 	memblock_free_checks();
 	memblock_bottom_up_checks();
 	memblock_trim_memory_checks();
+	memblock_overlaps_region_checks();
 
 	return 0;
 }
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index 2f26405562b0..e1138e06c903 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -40,6 +40,9 @@ enum test_flags {
 	assert((_expected) == (_seen)); \
 } while (0)
 
+#define ASSERT_TRUE(_seen) ASSERT_EQ(true, _seen)
+#define ASSERT_FALSE(_seen) ASSERT_EQ(false, _seen)
+
 /**
  * ASSERT_NE():
  * Check the condition
-- 
2.34.1




* [Patch v3 5/7] mm/memblock: return true directly on finding overlap region
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
                   ` (3 preceding siblings ...)
  2024-05-07  7:58 ` [Patch v3 4/7] memblock tests: add memblock_overlaps_region_checks Wei Yang
@ 2024-05-07  7:58 ` Wei Yang
  2024-05-07  7:58 ` [Patch v3 6/7] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap Wei Yang
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

There is no need to break out of the loop and then re-check i against
type->cnt; return true directly when an overlapping region is found.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 7f3cd96d6769..da9a6c862a69 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -194,8 +194,8 @@ bool __init_memblock memblock_overlaps_region(struct memblock_type *type,
 	for (i = 0; i < type->cnt; i++)
 		if (memblock_addrs_overlap(base, size, type->regions[i].base,
 					   type->regions[i].size))
-			break;
-	return i < type->cnt;
+			return true;
+	return false;
 }
 
 /**
-- 
2.34.1




* [Patch v3 6/7] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
                   ` (4 preceding siblings ...)
  2024-05-07  7:58 ` [Patch v3 5/7] mm/memblock: return true directly on finding overlap region Wei Yang
@ 2024-05-07  7:58 ` Wei Yang
  2024-05-07  7:58 ` [Patch v3 7/7] mm/memblock: default region's nid may be MAX_NUMNODES Wei Yang
  2024-05-08  5:35 ` [Patch v3 0/7] memblock: cleanup Mike Rapoport
  7 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Use the PAGE_ALIGN_DOWN() macro to compute pgend instead of open-coding
the mask operation.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c            | 2 +-
 tools/include/linux/mm.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index da9a6c862a69..33a8b6f7b626 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2039,7 +2039,7 @@ static void __init free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 	 * downwards.
 	 */
 	pg = PAGE_ALIGN(__pa(start_pg));
-	pgend = __pa(end_pg) & PAGE_MASK;
+	pgend = PAGE_ALIGN_DOWN(__pa(end_pg));
 
 	/*
 	 * If there are free pages between these, free the section of the
diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index 7d73da098047..caf68f5084b3 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -15,6 +15,7 @@
 #define ALIGN_DOWN(x, a)		__ALIGN_KERNEL((x) - ((a) - 1), (a))
 
 #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
+#define PAGE_ALIGN_DOWN(addr) ALIGN_DOWN(addr, PAGE_SIZE)
 
 #define __va(x) ((void *)((unsigned long)(x)))
 #define __pa(x) ((unsigned long)(x))
-- 
2.34.1




* [Patch v3 7/7] mm/memblock: default region's nid may be MAX_NUMNODES
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
                   ` (5 preceding siblings ...)
  2024-05-07  7:58 ` [Patch v3 6/7] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap Wei Yang
@ 2024-05-07  7:58 ` Wei Yang
  2024-05-08  5:35 ` [Patch v3 0/7] memblock: cleanup Mike Rapoport
  7 siblings, 0 replies; 9+ messages in thread
From: Wei Yang @ 2024-05-07  7:58 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

On x86, the call flow looks like this:

numa_init()
    memblock_set_node(..., MAX_NUMNODES)
    numa_register_memblks()
        memblock_validate_numa_coverage()

If there is a hole, the nid for such a region stays MAX_NUMNODES, so
memblock_validate_numa_coverage() fails to report it.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 33a8b6f7b626..e085de63688a 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -751,7 +751,7 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
 
 	/* calculate lose page */
 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
-		if (nid == NUMA_NO_NODE)
+		if (nid == MAX_NUMNODES || nid == NUMA_NO_NODE)
 			nr_pages += end_pfn - start_pfn;
 	}
 
-- 
2.34.1




* Re: [Patch v3 0/7] memblock: cleanup
  2024-05-07  7:58 [Patch v3 0/7] memblock: cleanup Wei Yang
                   ` (6 preceding siblings ...)
  2024-05-07  7:58 ` [Patch v3 7/7] mm/memblock: default region's nid may be MAX_NUMNODES Wei Yang
@ 2024-05-08  5:35 ` Mike Rapoport
  7 siblings, 0 replies; 9+ messages in thread
From: Mike Rapoport @ 2024-05-08  5:35 UTC (permalink / raw)
  To: akpm, Wei Yang; +Cc: Mike Rapoport, linux-mm

From: Mike Rapoport (IBM) <rppt@kernel.org>

On Tue, 07 May 2024 07:58:26 +0000, Wei Yang wrote:
> Changes in v3:
> 
> * separate case memblock_reserve_all_locations_check()
> * add test description for memblock_reserve_many_may_conflict_check()
> * drop patch 4
> 
> Changes in v2:
> 
> [...]

Applied to for-6.11 branch of memblock.git tree, thanks!

[1/7] memblock tests: add memblock_reserve_all_locations_check()
      commit: a5ccbf9e6be63678c9324761c20c6a1afa3580b0
[2/7] memblock tests: add memblock_reserve_many_may_conflict_check()
      commit: d70ec404376ddd280401bf728b85be854ab37a2f
[3/7] mm/memblock: fix comment for memblock_isolate_range()
      commit: a148689e15e7215e502742d61d7eaa56016759d2
[4/7] memblock tests: add memblock_overlaps_region_checks
      commit: f23de2662dd0937b2c122b8db223166d5339e084
[5/7] mm/memblock: return true directly on finding overlap region
      commit: 19680b0a91222923efdedc934b21440d6207b071
[6/7] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap
      commit: c0c7cc4efa8ae089b59358c1433c0a7c8a40defe
[7/7] mm/memblock: default region's nid may be MAX_NUMNODES
      commit: 9b725e3569d8965884ac984bc05976861dee9e34

tree: https://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
branch: for-6.11

--
Sincerely yours,
Mike.



