linux-mm.kvack.org archive mirror
* [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
@ 2025-05-16 23:23 Juan Yescas
  2025-05-17 18:39 ` kernel test robot
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Juan Yescas @ 2025-05-16 23:23 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Juan Yescas, Zi Yan, linux-mm,
	linux-kernel
  Cc: tjmercier, isaacmanjarres, kaleshsingh, Minchan Kim

Problem: On large page size configurations (16KiB, 64KiB), the CMA
alignment requirement (CMA_MIN_ALIGNMENT_BYTES) increases considerably,
and this causes the CMA reservations to be larger than necessary.
This means that the system will have fewer available MIGRATE_UNMOVABLE and
MIGRATE_RECLAIMABLE page blocks, since MIGRATE_CMA can't fall back to them.

The CMA_MIN_ALIGNMENT_BYTES increases because it depends on
MAX_PAGE_ORDER which depends on ARCH_FORCE_MAX_ORDER. The value of
ARCH_FORCE_MAX_ORDER increases on 16k and 64k kernels.

For example, in ARM, the CMA alignment requirement when:

- CONFIG_ARCH_FORCE_MAX_ORDER default value is used
- CONFIG_TRANSPARENT_HUGEPAGE is set:

PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
-----------------------------------------------------------------------
   4KiB   |      10        |      10         |  4KiB * (2 ^ 10)  =  4MiB
  16KiB   |      11        |      11         | 16KiB * (2 ^ 11) =  32MiB
  64KiB   |      13        |      13         | 64KiB * (2 ^ 13) = 512MiB

There are some extreme cases for the CMA alignment requirement when:

- CONFIG_ARCH_FORCE_MAX_ORDER maximum value is set
- CONFIG_TRANSPARENT_HUGEPAGE is NOT set
- CONFIG_HUGETLB_PAGE is NOT set

PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order |  CMA_MIN_ALIGNMENT_BYTES
------------------------------------------------------------------------
   4KiB   |      15        |      15         |  4KiB * (2 ^ 15) = 128MiB
  16KiB   |      13        |      13         | 16KiB * (2 ^ 13) = 128MiB
  64KiB   |      13        |      13         | 64KiB * (2 ^ 13) = 512MiB

This affects the CMA reservations for drivers. If a driver needs 4MiB of
CMA memory on a 4KiB kernel, the minimal reservation on a 16KiB kernel
has to be 32MiB due to the alignment requirements:

4KiB kernel:

reserved-memory {
    ...
    cma_test_reserve: cma_test_reserve {
        compatible = "shared-dma-pool";
        size = <0x0 0x400000>; /* 4 MiB */
        ...
    };
};

16KiB kernel:

reserved-memory {
    ...
    cma_test_reserve: cma_test_reserve {
        compatible = "shared-dma-pool";
        size = <0x0 0x2000000>; /* 32 MiB */
        ...
    };
};

Solution: Add a new config, CONFIG_PAGE_BLOCK_ORDER, that
allows setting the page block order in all the architectures.
The maximum page block order is given by
ARCH_FORCE_MAX_ORDER.

By default, CONFIG_PAGE_BLOCK_ORDER has the same
value as ARCH_FORCE_MAX_ORDER. This makes sure that
current kernel configurations won't be affected by this
change. It is an opt-in change.

This patch allows large page size kernels (16KiB, 64KiB) to have
the same CMA alignment requirements as 4KiB kernels by setting a
lower pageblock_order.

Tests:

- Verified that HugeTLB pages work when pageblock_order is 1, 7, 10
on 4KiB and 16KiB kernels.

- Verified that Transparent Huge Pages work when pageblock_order
is 1, 7, 10 on 4KiB and 16KiB kernels.

- Verified that dma-buf heap allocations work when pageblock_order
is 1, 7, 10 on 4KiB and 16KiB kernels.

Benchmarks:

The benchmarks compare 16KiB kernels with pageblock_order 10 and 7.
pageblock_order 7 was chosen because it makes the minimum CMA
alignment requirement the same as that in 4KiB kernels (2MiB).

- Perform 100K dma-buf heap (/dev/dma_heap/system) allocations of
SZ_8M, SZ_4M, SZ_2M, SZ_1M, SZ_64, SZ_8, SZ_4. Use simpleperf
(https://developer.android.com/ndk/guides/simpleperf) to measure
the # of instructions and page-faults on 16KiB kernels.
The benchmark was executed 10 times. The averages are below:

           # instructions         |     #page-faults
    order 10     |  order 7       | order 10 | order 7
--------------------------------------------------------
 13,891,765,770	 | 11,425,777,314 |    220   |   217
 14,456,293,487	 | 12,660,819,302 |    224   |   219
 13,924,261,018	 | 13,243,970,736 |    217   |   221
 13,910,886,504	 | 13,845,519,630 |    217   |   221
 14,388,071,190	 | 13,498,583,098 |    223   |   224
 13,656,442,167	 | 12,915,831,681 |    216   |   218
 13,300,268,343	 | 12,930,484,776 |    222   |   218
 13,625,470,223	 | 14,234,092,777 |    219   |   218
 13,508,964,965	 | 13,432,689,094 |    225   |   219
 13,368,950,667	 | 13,683,587,37  |    219   |   225
-------------------------------------------------------------------
 13,803,137,433  | 13,131,974,268 |    220   |   220    Averages

There were 4.86% fewer #instructions when the order was 7, in comparison
with order 10:

     13,131,974,268 - 13,803,137,433 = -671,163,165 (-4.86%)

The average number of page faults for order 7 and order 10 was the same.

These results didn't show any significant regression when the
pageblock_order was set to 7 on 16KiB kernels.

- Run speedometer 3.1 (https://browserbench.org/Speedometer3.1/) 5 times
  on the 16KiB kernels with pageblock_order 7 and 10.

order 10 | order 7  | order 7 - order 10 | (order 7 - order 10) %
-------------------------------------------------------------------
  15.8	 |  16.4    |         0.6        |     3.80%
  16.4	 |  16.2    |        -0.2        |    -1.22%
  16.6	 |  16.3    |        -0.3        |    -1.81%
  16.8	 |  16.3    |        -0.5        |    -2.98%
  16.6	 |  16.8    |         0.2        |     1.20%
-------------------------------------------------------------------
  16.44     16.4            -0.04	          -0.24%   Averages

The results didn't show any significant regression when the
pageblock_order was set to 7 on 16KiB kernels.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: David Hildenbrand <david@redhat.com>
CC: Mike Rapoport <rppt@kernel.org>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Juan Yescas <jyescas@google.com>
Acked-by: Zi Yan <ziy@nvidia.com>
---

Changes in v5:
  - Remove the ranges for CONFIG_PAGE_BLOCK_ORDER. The
    ranges with config definitions don't work in Kconfig,
    for example (range 1 MY_CONFIG).
  - Add PAGE_BLOCK_ORDER_MANUAL config for the
    page block order number. The default value was not
    defined.
  - Fix typos reported by Andrew.
  - Test default configs in powerpc. 

Changes in v4:
  - Set PAGE_BLOCK_ORDER in include/linux/mmzone.h to
    validate that MAX_PAGE_ORDER >= PAGE_BLOCK_ORDER at
    compile time.
  - This change fixes the warning in:
    https://lore.kernel.org/oe-kbuild-all/202505091548.FuKO4b4v-lkp@intel.com/

Changes in v3:
  - Rename ARCH_FORCE_PAGE_BLOCK_ORDER to PAGE_BLOCK_ORDER
    as per Matthew's suggestion.
  - Update comments in pageblock-flags.h for pageblock_order
    value when THP or HugeTLB are not used.

Changes in v2:
  - Add Zi's Acked-by tag.
  - Move ARCH_FORCE_PAGE_BLOCK_ORDER config to mm/Kconfig as
    per Zi's and Matthew's suggestion so it is available to
    all the architectures.
  - Set ARCH_FORCE_PAGE_BLOCK_ORDER to 10 by default when
    ARCH_FORCE_MAX_ORDER is not available.


 include/linux/mmzone.h          | 16 ++++++++++++++++
 include/linux/pageblock-flags.h |  8 ++++----
 mm/Kconfig                      | 30 ++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6ccec1bf2896..6fdb8f7f74d6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -37,6 +37,22 @@
 
 #define NR_PAGE_ORDERS (MAX_PAGE_ORDER + 1)
 
+/* Defines the order for the number of pages that have a migrate type. */
+#ifndef CONFIG_PAGE_BLOCK_ORDER_MANUAL
+#define PAGE_BLOCK_ORDER MAX_PAGE_ORDER
+#else
+#define PAGE_BLOCK_ORDER CONFIG_PAGE_BLOCK_ORDER_MANUAL
+#endif /* CONFIG_PAGE_BLOCK_ORDER_MANUAL */
+
+/*
+ * The MAX_PAGE_ORDER, which defines the max order of pages to be allocated
+ * by the buddy allocator, has to be larger or equal to the PAGE_BLOCK_ORDER,
+ * which defines the order for the number of pages that can have a migrate type
+ */
+#if (PAGE_BLOCK_ORDER > MAX_PAGE_ORDER)
+#error MAX_PAGE_ORDER must be >= PAGE_BLOCK_ORDER
+#endif
+
 /*
  * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed
  * costly to service.  That is between allocation orders which should
diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index fc6b9c87cb0a..e73a4292ef02 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -41,18 +41,18 @@ extern unsigned int pageblock_order;
  * Huge pages are a constant size, but don't exceed the maximum allocation
  * granularity.
  */
-#define pageblock_order		MIN_T(unsigned int, HUGETLB_PAGE_ORDER, MAX_PAGE_ORDER)
+#define pageblock_order		MIN_T(unsigned int, HUGETLB_PAGE_ORDER, PAGE_BLOCK_ORDER)
 
 #endif /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */
 
 #elif defined(CONFIG_TRANSPARENT_HUGEPAGE)
 
-#define pageblock_order		MIN_T(unsigned int, HPAGE_PMD_ORDER, MAX_PAGE_ORDER)
+#define pageblock_order		MIN_T(unsigned int, HPAGE_PMD_ORDER, PAGE_BLOCK_ORDER)
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
-#define pageblock_order		MAX_PAGE_ORDER
+/* If huge pages are not used, group by PAGE_BLOCK_ORDER */
+#define pageblock_order		PAGE_BLOCK_ORDER
 
 #endif /* CONFIG_HUGETLB_PAGE */
 
diff --git a/mm/Kconfig b/mm/Kconfig
index e113f713b493..bd8012b30b39 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -989,6 +989,36 @@ config CMA_AREAS
 
 	  If unsure, leave the default value "8" in UMA and "20" in NUMA.
 
+config PAGE_BLOCK_ORDER
+	bool "Allow setting a custom page block order"
+	default n
+	help
+	  This config allows overriding the default page block order when the
+	  page block order is required to be smaller than ARCH_FORCE_MAX_ORDER or
+	  MAX_PAGE_ORDER.
+
+	  If unsure, do not enable it.
+
+#
+# When PAGE_BLOCK_ORDER is not enabled or ARCH_FORCE_MAX_ORDER is not defined,
+# the default page block order is MAX_PAGE_ORDER (10) as per
+# include/linux/mmzone.h.
+#
+config PAGE_BLOCK_ORDER_MANUAL
+	int "Page Block Order"
+	depends on PAGE_BLOCK_ORDER
+	help
+	  The page block order refers to the power of two number of pages that
+	  are physically contiguous and can have a migrate type associated to
+	  them. The maximum size of the page block order is limited by
+	  ARCH_FORCE_MAX_ORDER.
+
+	  Reducing pageblock order can negatively impact THP generation
+	  success rate. If your workloads uses THP heavily, please use this
+	  option with caution.
+
+	  Don't change if unsure.
+
 config MEM_SOFT_DIRTY
 	bool "Track memory changes"
 	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
-- 
2.49.0.1101.gccaa498523-goog



^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
  2025-05-16 23:23 [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order Juan Yescas
@ 2025-05-17 18:39 ` kernel test robot
  2025-05-18  5:23 ` kernel test robot
  2025-05-18 11:05 ` kernel test robot
  2 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2025-05-17 18:39 UTC (permalink / raw)
  To: Juan Yescas, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, linux-kernel
  Cc: oe-kbuild-all, Linux Memory Management List, tjmercier,
	isaacmanjarres, kaleshsingh, Minchan Kim

Hi Juan,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Juan-Yescas/mm-Add-CONFIG_PAGE_BLOCK_ORDER-to-select-page-block-order/20250517-072434
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250516232341.659513-1-jyescas%40google.com
patch subject: [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
config: loongarch-allyesconfig (https://download.01.org/0day-ci/archive/20250518/202505180240.0VkUp5gq-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250518/202505180240.0VkUp5gq-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505180240.0VkUp5gq-lkp@intel.com/

All warnings (new ones prefixed by >>):

   mm/page_alloc.c: In function 'try_to_claim_block':
>> mm/page_alloc.c:2160:44: warning: left shift count >= width of type [-Wshift-count-overflow]
    2160 |         if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
         |                                            ^~


vim +2160 mm/page_alloc.c

1c30844d2dfe27 Mel Gorman       2018-12-28  2095  
4eb7dce6200711 Joonsoo Kim      2015-04-14  2096  /*
e47f1f56dd82cc Brendan Jackman  2025-02-28  2097   * This function implements actual block claiming behaviour. If order is large
e47f1f56dd82cc Brendan Jackman  2025-02-28  2098   * enough, we can claim the whole pageblock for the requested migratetype. If
e47f1f56dd82cc Brendan Jackman  2025-02-28  2099   * not, we check the pageblock for constituent pages; if at least half of the
e47f1f56dd82cc Brendan Jackman  2025-02-28  2100   * pages are free or compatible, we can still claim the whole block, so pages
e47f1f56dd82cc Brendan Jackman  2025-02-28  2101   * freed in the future will be put on the correct free list.
4eb7dce6200711 Joonsoo Kim      2015-04-14  2102   */
c0cd6f557b9090 Johannes Weiner  2024-03-20  2103  static struct page *
e47f1f56dd82cc Brendan Jackman  2025-02-28  2104  try_to_claim_block(struct zone *zone, struct page *page,
c0cd6f557b9090 Johannes Weiner  2024-03-20  2105  		   int current_order, int order, int start_type,
020396a581dc69 Johannes Weiner  2025-02-24  2106  		   int block_type, unsigned int alloc_flags)
fef903efcf0cb9 Srivatsa S. Bhat 2013-09-11  2107  {
02aa0cdd72483c Vlastimil Babka  2017-05-08  2108  	int free_pages, movable_pages, alike_pages;
e1f42a577f6364 Vlastimil Babka  2024-04-25  2109  	unsigned long start_pfn;
3bc48f96cf11ce Vlastimil Babka  2017-05-08  2110  
fef903efcf0cb9 Srivatsa S. Bhat 2013-09-11  2111  	/* Take ownership for orders >= pageblock_order */
fef903efcf0cb9 Srivatsa S. Bhat 2013-09-11  2112  	if (current_order >= pageblock_order) {
94deaf69dcd334 Huan Yang        2024-08-26  2113  		unsigned int nr_added;
94deaf69dcd334 Huan Yang        2024-08-26  2114  
e0932b6c1f942f Johannes Weiner  2024-03-20  2115  		del_page_from_free_list(page, zone, current_order, block_type);
fef903efcf0cb9 Srivatsa S. Bhat 2013-09-11  2116  		change_pageblock_range(page, current_order, start_type);
94deaf69dcd334 Huan Yang        2024-08-26  2117  		nr_added = expand(zone, page, order, current_order, start_type);
94deaf69dcd334 Huan Yang        2024-08-26  2118  		account_freepages(zone, nr_added, start_type);
c0cd6f557b9090 Johannes Weiner  2024-03-20  2119  		return page;
fef903efcf0cb9 Srivatsa S. Bhat 2013-09-11  2120  	}
fef903efcf0cb9 Srivatsa S. Bhat 2013-09-11  2121  
1c30844d2dfe27 Mel Gorman       2018-12-28  2122  	/*
1c30844d2dfe27 Mel Gorman       2018-12-28  2123  	 * Boost watermarks to increase reclaim pressure to reduce the
1c30844d2dfe27 Mel Gorman       2018-12-28  2124  	 * likelihood of future fallbacks. Wake kswapd now as the node
1c30844d2dfe27 Mel Gorman       2018-12-28  2125  	 * may be balanced overall and kswapd will not wake naturally.
1c30844d2dfe27 Mel Gorman       2018-12-28  2126  	 */
597c892038e080 Johannes Weiner  2020-12-14  2127  	if (boost_watermark(zone) && (alloc_flags & ALLOC_KSWAPD))
73444bc4d8f92e Mel Gorman       2019-01-08  2128  		set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
1c30844d2dfe27 Mel Gorman       2018-12-28  2129  
ebddd111fcd13f Miaohe Lin       2023-08-01  2130  	/* moving whole block can fail due to zone boundary conditions */
e1f42a577f6364 Vlastimil Babka  2024-04-25  2131  	if (!prep_move_freepages_block(zone, page, &start_pfn, &free_pages,
e1f42a577f6364 Vlastimil Babka  2024-04-25  2132  				       &movable_pages))
c2f6ea38fc1b64 Johannes Weiner  2025-02-24  2133  		return NULL;
ebddd111fcd13f Miaohe Lin       2023-08-01  2134  
02aa0cdd72483c Vlastimil Babka  2017-05-08  2135  	/*
02aa0cdd72483c Vlastimil Babka  2017-05-08  2136  	 * Determine how many pages are compatible with our allocation.
02aa0cdd72483c Vlastimil Babka  2017-05-08  2137  	 * For movable allocation, it's the number of movable pages which
02aa0cdd72483c Vlastimil Babka  2017-05-08  2138  	 * we just obtained. For other types it's a bit more tricky.
02aa0cdd72483c Vlastimil Babka  2017-05-08  2139  	 */
02aa0cdd72483c Vlastimil Babka  2017-05-08  2140  	if (start_type == MIGRATE_MOVABLE) {
02aa0cdd72483c Vlastimil Babka  2017-05-08  2141  		alike_pages = movable_pages;
02aa0cdd72483c Vlastimil Babka  2017-05-08  2142  	} else {
02aa0cdd72483c Vlastimil Babka  2017-05-08  2143  		/*
02aa0cdd72483c Vlastimil Babka  2017-05-08  2144  		 * If we are falling back a RECLAIMABLE or UNMOVABLE allocation
02aa0cdd72483c Vlastimil Babka  2017-05-08  2145  		 * to MOVABLE pageblock, consider all non-movable pages as
02aa0cdd72483c Vlastimil Babka  2017-05-08  2146  		 * compatible. If it's UNMOVABLE falling back to RECLAIMABLE or
02aa0cdd72483c Vlastimil Babka  2017-05-08  2147  		 * vice versa, be conservative since we can't distinguish the
02aa0cdd72483c Vlastimil Babka  2017-05-08  2148  		 * exact migratetype of non-movable pages.
02aa0cdd72483c Vlastimil Babka  2017-05-08  2149  		 */
c0cd6f557b9090 Johannes Weiner  2024-03-20  2150  		if (block_type == MIGRATE_MOVABLE)
02aa0cdd72483c Vlastimil Babka  2017-05-08  2151  			alike_pages = pageblock_nr_pages
02aa0cdd72483c Vlastimil Babka  2017-05-08  2152  						- (free_pages + movable_pages);
02aa0cdd72483c Vlastimil Babka  2017-05-08  2153  		else
02aa0cdd72483c Vlastimil Babka  2017-05-08  2154  			alike_pages = 0;
02aa0cdd72483c Vlastimil Babka  2017-05-08  2155  	}
02aa0cdd72483c Vlastimil Babka  2017-05-08  2156  	/*
02aa0cdd72483c Vlastimil Babka  2017-05-08  2157  	 * If a sufficient number of pages in the block are either free or of
ebddd111fcd13f Miaohe Lin       2023-08-01  2158  	 * compatible migratability as our allocation, claim the whole block.
02aa0cdd72483c Vlastimil Babka  2017-05-08  2159  	 */
02aa0cdd72483c Vlastimil Babka  2017-05-08 @2160  	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
c0cd6f557b9090 Johannes Weiner  2024-03-20  2161  			page_group_by_mobility_disabled) {
e1f42a577f6364 Vlastimil Babka  2024-04-25  2162  		__move_freepages_block(zone, start_pfn, block_type, start_type);
c0cd6f557b9090 Johannes Weiner  2024-03-20  2163  		return __rmqueue_smallest(zone, order, start_type);
c0cd6f557b9090 Johannes Weiner  2024-03-20  2164  	}
3bc48f96cf11ce Vlastimil Babka  2017-05-08  2165  
c2f6ea38fc1b64 Johannes Weiner  2025-02-24  2166  	return NULL;
0aaa29a56e4fb0 Mel Gorman       2015-11-06  2167  }
0aaa29a56e4fb0 Mel Gorman       2015-11-06  2168  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



* Re: [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
  2025-05-16 23:23 [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order Juan Yescas
  2025-05-17 18:39 ` kernel test robot
@ 2025-05-18  5:23 ` kernel test robot
  2025-05-18 11:05 ` kernel test robot
  2 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2025-05-18  5:23 UTC (permalink / raw)
  To: Juan Yescas, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, linux-kernel
  Cc: oe-kbuild-all, Linux Memory Management List, tjmercier,
	isaacmanjarres, kaleshsingh, Minchan Kim

Hi Juan,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Juan-Yescas/mm-Add-CONFIG_PAGE_BLOCK_ORDER-to-select-page-block-order/20250517-072434
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250516232341.659513-1-jyescas%40google.com
patch subject: [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
config: i386-randconfig-r073-20250518 (https://download.01.org/0day-ci/archive/20250518/202505181321.IBrAyg7D-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505181321.IBrAyg7D-lkp@intel.com/

smatch warnings:
mm/compaction.c:849 skip_isolation_on_order() warn: always true condition '(order >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0)))) => (s32min-s32max >= 0)'
mm/page_alloc.c:730 __del_page_from_free_list() warn: always true condition '(order >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0)))) => (0-u32max >= 0)'
mm/page_alloc.c:679 __add_to_free_list() warn: always true condition '(order >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0)))) => (0-u32max >= 0)'
mm/page_alloc.c:704 move_to_free_list() warn: always true condition '(order >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0)))) => (0-u32max >= 0)'
mm/page_alloc.c:2036 should_try_claim_block() warn: always true condition '(order >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0)))) => (0-u32max >= 0)'
mm/page_alloc.c:2043 should_try_claim_block() warn: always true condition '(order >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0))) / 2) => (0-u32max >= 0)'
mm/page_alloc.c:2112 try_to_claim_block() warn: always true condition '(current_order >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0)))) => (s32min-s32max >= 0)'
mm/page_alloc.c:2192 __rmqueue_claim() warn: unsigned 'order' is never less than zero.
mm/page_alloc.c:3200 reserve_highatomic_pageblock() warn: unsigned 'order' is never less than zero.
mm/page_alloc.c:3273 unreserve_highatomic_pageblock() warn: unsigned 'order' is never less than zero.

vim +849 mm/compaction.c

748446bb6b5a93 Mel Gorman 2010-05-24  825  
ee6f62fd34f0bb Zi Yan     2024-02-20  826  /**
ee6f62fd34f0bb Zi Yan     2024-02-20  827   * skip_isolation_on_order() - determine when to skip folio isolation based on
ee6f62fd34f0bb Zi Yan     2024-02-20  828   *			       folio order and compaction target order
ee6f62fd34f0bb Zi Yan     2024-02-20  829   * @order:		to-be-isolated folio order
ee6f62fd34f0bb Zi Yan     2024-02-20  830   * @target_order:	compaction target order
ee6f62fd34f0bb Zi Yan     2024-02-20  831   *
ee6f62fd34f0bb Zi Yan     2024-02-20  832   * This avoids unnecessary folio isolations during compaction.
ee6f62fd34f0bb Zi Yan     2024-02-20  833   */
ee6f62fd34f0bb Zi Yan     2024-02-20  834  static bool skip_isolation_on_order(int order, int target_order)
ee6f62fd34f0bb Zi Yan     2024-02-20  835  {
ee6f62fd34f0bb Zi Yan     2024-02-20  836  	/*
ee6f62fd34f0bb Zi Yan     2024-02-20  837  	 * Unless we are performing global compaction (i.e.,
ee6f62fd34f0bb Zi Yan     2024-02-20  838  	 * is_via_compact_memory), skip any folios that are larger than the
ee6f62fd34f0bb Zi Yan     2024-02-20  839  	 * target order: we wouldn't be here if we'd have a free folio with
ee6f62fd34f0bb Zi Yan     2024-02-20  840  	 * the desired target_order, so migrating this folio would likely fail
ee6f62fd34f0bb Zi Yan     2024-02-20  841  	 * later.
ee6f62fd34f0bb Zi Yan     2024-02-20  842  	 */
ee6f62fd34f0bb Zi Yan     2024-02-20  843  	if (!is_via_compact_memory(target_order) && order >= target_order)
ee6f62fd34f0bb Zi Yan     2024-02-20  844  		return true;
ee6f62fd34f0bb Zi Yan     2024-02-20  845  	/*
ee6f62fd34f0bb Zi Yan     2024-02-20  846  	 * We limit memory compaction to pageblocks and won't try
ee6f62fd34f0bb Zi Yan     2024-02-20  847  	 * creating free blocks of memory that are larger than that.
ee6f62fd34f0bb Zi Yan     2024-02-20  848  	 */
ee6f62fd34f0bb Zi Yan     2024-02-20 @849  	return order >= pageblock_order;
ee6f62fd34f0bb Zi Yan     2024-02-20  850  }
ee6f62fd34f0bb Zi Yan     2024-02-20  851  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



* Re: [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
  2025-05-16 23:23 [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order Juan Yescas
  2025-05-17 18:39 ` kernel test robot
  2025-05-18  5:23 ` kernel test robot
@ 2025-05-18 11:05 ` kernel test robot
  2 siblings, 0 replies; 4+ messages in thread
From: kernel test robot @ 2025-05-18 11:05 UTC (permalink / raw)
  To: Juan Yescas, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, linux-kernel
  Cc: oe-kbuild-all, Linux Memory Management List, tjmercier,
	isaacmanjarres, kaleshsingh, Minchan Kim

Hi Juan,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Juan-Yescas/mm-Add-CONFIG_PAGE_BLOCK_ORDER-to-select-page-block-order/20250517-072434
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250516232341.659513-1-jyescas%40google.com
patch subject: [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
config: i386-randconfig-r072-20250518 (https://download.01.org/0day-ci/archive/20250518/202505181825.FdKgAQ16-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505181825.FdKgAQ16-lkp@intel.com/

smatch warnings:
mm/compaction.c:302 pageblock_skip_persistent() warn: always true condition '(compound_order(page) >= (((((22 - 12))) < ((0))) ?(((22 - 12))):((0)))) => (0-u32max >= 0)'
mm/page_alloc.c:618 compaction_capture() warn: unsigned 'order' is never less than zero.

vim +302 mm/compaction.c

9721fd82351d47a Baolin Wang     2023-06-14  289  
21dc7e023611fbc David Rientjes  2017-11-17  290  /*
2271b016bf368d1 Hui Su          2020-12-14  291   * Compound pages of >= pageblock_order should consistently be skipped until
b527cfe5bc23208 Vlastimil Babka 2017-11-17  292   * released. It is always pointless to compact pages of such order (if they are
b527cfe5bc23208 Vlastimil Babka 2017-11-17  293   * migratable), and the pageblocks they occupy cannot contain any free pages.
21dc7e023611fbc David Rientjes  2017-11-17  294   */
b527cfe5bc23208 Vlastimil Babka 2017-11-17  295  static bool pageblock_skip_persistent(struct page *page)
21dc7e023611fbc David Rientjes  2017-11-17  296  {
b527cfe5bc23208 Vlastimil Babka 2017-11-17  297  	if (!PageCompound(page))
21dc7e023611fbc David Rientjes  2017-11-17  298  		return false;
b527cfe5bc23208 Vlastimil Babka 2017-11-17  299  
b527cfe5bc23208 Vlastimil Babka 2017-11-17  300  	page = compound_head(page);
b527cfe5bc23208 Vlastimil Babka 2017-11-17  301  
b527cfe5bc23208 Vlastimil Babka 2017-11-17 @302  	if (compound_order(page) >= pageblock_order)
21dc7e023611fbc David Rientjes  2017-11-17  303  		return true;
b527cfe5bc23208 Vlastimil Babka 2017-11-17  304  
b527cfe5bc23208 Vlastimil Babka 2017-11-17  305  	return false;
21dc7e023611fbc David Rientjes  2017-11-17  306  }
21dc7e023611fbc David Rientjes  2017-11-17  307  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


