From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 16 May 2025 16:23:20 -0700
Message-ID: <20250516232341.659513-1-jyescas@google.com>
Subject: [PATCH v5] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block order
From: Juan Yescas <jyescas@google.com>
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R.
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Juan Yescas , Zi Yan , linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: tjmercier@google.com, isaacmanjarres@google.com, kaleshsingh@google.com, Minchan Kim Content-Type: text/plain; charset="UTF-8" X-Rspamd-Queue-Id: 1C26E20010 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: q8xgatcxso1uc867bhumescz5iughmyg X-HE-Tag: 1747437825-537218 X-HE-Meta: U2FsdGVkX1+bkNoiRHvLlc+lOz6c6w2EmFNLJeJQ0Su4vP0mQcvX16SDcIDwOE3mLwNoSNHBTYexi2LJZQX5IBzVyWxulE8OAjzRqS/+OJBYWchPJvyEUV3JUBTCbzW2V/+nVCfvzvUuO6ndKjuZ2ableN0uEBMdEtw+wnuDMw8r0jiZCoXl5ldfnadzPMhBfvB82a/uLnrNKFMn2dXwZgGlYO8t0WRmCpvWq4aNuRcKOajiM5783tm31H4o0q4CGy8W75kqw6SvDe+PwJi8tegTMqFWQbl8+0Vj6Bg+vYw5O8ZH0MUAjoPVg9scJCu5A0VVsoYk8TPfBhExu+pUmHqaXN/3DbUynQNiDOm1ahF5IwnHZncQMcBX9lrZ5cIYL7ZYMPV7SJjSha5i5VXAR3qB3rZw31MHr1TCxF7LAhJfsBybZcPUvj5q4Fn28FZvzCatCC7v4lNl3KvTY87K4iiBWC1aoeUPUmQWJDLl6JlzWcVDrfkFISGjvlHG6H4f5Zy2IBtS6ZifNG9qdWCZu54jxMsHGyt8NUlTo4JNUhknl+elVsU0H66tCWiGvaRa2MN6jCGeAAE8qbFlNAlhxu4J3ekLk3nwmolpZs94WZFX4oxLYJUpDD+EJi8dhb6eYnp4HJr5mrTgvSIqNs69C0UwoIlQPNXuHNH75iSB/Onh8zDKLUSc+42dxGPYBshKLemNMrOP24RUB1QbKhDjAerPHbQezx3TKf7IoVtZT7TQ5JdxbDsgaEpDTEZiNiQBntiBBON8m+LNb8Z++2uI16W/VTQWRH8fhTb3si/6DIBLMZi+Poo0R4YlJF8KRe+1xq2eU82zg8JNMpZtgq/lsfDZ7JZt0NKv8o0TDvJAF9yxhhXNNeGDWhpeoeSvkYc0StfpIj1E6OfGr+cz5jQUj3pwtcO5/IDxVn906tt/DsKUlpvw3YBJLMuuuFK4Lwg+aTSm1n3gPmeyhR/pZAY DkcDZvHX GEhOSL2wLopNztIV3vZVLXID5Nf+AQ6jWeuK2f9xSRrYxG5C0Im810esbdhC1PpRkMvp3mrtqyGtyj312HbORJ83hwdb9+uCrPQBID7t0SRBEzYVG7t3LuPrVXq0ihLuJ1b9yxr66x+f162zDMeKlir8n4yVAA4H2c5BaQOrBZtLqQ0mDiR6LtriZFb+amUoXqWEuT8N56s/u24YstD2TbPOwELPqw042wTMZdu40y9mfmbQGeFI/zLQa3vJcQ36ev7y91Sc4nPqhhNqohP3zeD+0dCS5NTdzQmFp3tT+oEVGJwjebXcBpepZFVzWK14SGiJAMGG+X//sLmv87U6ggPbLVHcxrTMnNxzdHPlt5ZJUYlMgUVDBMPPWcTBgD7wpzdQxXK6kaLca3zgLTL+YHxR1Haz9ui0nmbtUrwfCO/GHYlyXWCHbbqhTVikDIrVHixeidNCeVdfFFFP5IWOjys0Czs9NSwoWHwzR7MR9puhzl1W/VQKO+o06NnFzAlTSvrRVDh8EvLLMvf4rbJ/Cmpz/iujM7EC4ODa6RBMcIKcW5ONMWrOKPC0qB/jCH7RRyEsA5aE5VnUFpOXhBpsJl5VhThIW59oVwC/T4HxahJp8IhC/R+GpjS/Ykd70Ydn2FQkXLeTGYUy7r/+JjhuInfR4yFDRkCDAVDPxXtqud35FWeW8l0iFHQlYK19SlbxTSUhFmkF52dejcTwad6uHCoTf8m58H1Q2u5Ywp/mIlg6TOEgSVd43g5i+WWvD6CkISjtMYJWx5ha+FWM7fJwyBGyVYiglzRz1/kiwx3ouU/v0H/bKfxnepVRBR9PWKi6sfFdAoMJJkv5PKLjjbKinq0+itN3nD94P5PY3 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Problem: On large page size configurations (16KiB, 64KiB), the CMA alignment requirement (CMA_MIN_ALIGNMENT_BYTES) increases considerably, and this causes the CMA reservations to be larger than necessary. This means that system will have less available MIGRATE_UNMOVABLE and MIGRATE_RECLAIMABLE page blocks since MIGRATE_CMA can't fallback to them. The CMA_MIN_ALIGNMENT_BYTES increases because it depends on MAX_PAGE_ORDER which depends on ARCH_FORCE_MAX_ORDER. The value of ARCH_FORCE_MAX_ORDER increases on 16k and 64k kernels. 
For example, on ARM, the CMA alignment requirement when:

- CONFIG_ARCH_FORCE_MAX_ORDER default value is used
- CONFIG_TRANSPARENT_HUGEPAGE is set:

PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
-----------------------------------------------------------------------
   4KiB   |       10       |       10        |  4KiB * (2 ^ 10) =   4MiB
  16KiB   |       11       |       11        | 16KiB * (2 ^ 11) =  32MiB
  64KiB   |       13       |       13        | 64KiB * (2 ^ 13) = 512MiB

There are some extreme cases for the CMA alignment requirement when:

- CONFIG_ARCH_FORCE_MAX_ORDER maximum value is set
- CONFIG_TRANSPARENT_HUGEPAGE is NOT set
- CONFIG_HUGETLB_PAGE is NOT set

PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
------------------------------------------------------------------------
   4KiB   |       15       |       15        |  4KiB * (2 ^ 15) = 128MiB
  16KiB   |       13       |       13        | 16KiB * (2 ^ 13) = 128MiB
  64KiB   |       13       |       13        | 64KiB * (2 ^ 13) = 512MiB

This affects the CMA reservations for the drivers. If a driver on a 4KiB
kernel needs 4MiB of CMA memory, then on a 16KiB kernel the minimal
reservation has to be 32MiB due to the alignment requirements:

	reserved-memory {
		...
		cma_test_reserve: cma_test_reserve {
			compatible = "shared-dma-pool";
			size = <0x0 0x400000>; /* 4 MiB */
			...
		};
	};

	reserved-memory {
		...
		cma_test_reserve: cma_test_reserve {
			compatible = "shared-dma-pool";
			size = <0x0 0x2000000>; /* 32 MiB */
			...
		};
	};

Solution: Add a new config CONFIG_PAGE_BLOCK_ORDER that allows setting the
page block order on all architectures. The maximum page block order is given
by ARCH_FORCE_MAX_ORDER. By default, the page block order keeps the same
value as MAX_PAGE_ORDER, which makes sure that current kernel configurations
are not affected by this change. It is an opt-in change.

This patch allows large page size kernels (16KiB, 64KiB) to have the same CMA
alignment requirements as 4KiB kernels by setting a lower pageblock_order.
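
As an illustration (not part of this patch; the values are hypothetical and
platform dependent, only the option names come from the patch below), a 16KiB
kernel that wants the same 2MiB CMA alignment as a 4KiB kernel could set:

	CONFIG_PAGE_BLOCK_ORDER=y
	CONFIG_PAGE_BLOCK_ORDER_MANUAL=7

	# Resulting pageblock size: 16KiB * (2 ^ 7) = 2MiB, so
	# CMA_MIN_ALIGNMENT_BYTES drops from 32MiB to 2MiB.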
Tests:

- Verified that HugeTLB pages work when pageblock_order is 1, 7, 10 on 4KiB
  and 16KiB kernels.
- Verified that Transparent Huge Pages work when pageblock_order is 1, 7, 10
  on 4KiB and 16KiB kernels.
- Verified that dma-buf heap allocations work when pageblock_order is 1, 7,
  10 on 4KiB and 16KiB kernels.

Benchmarks:

The benchmarks compare 16KiB kernels with pageblock_order 10 and 7. The
reason for pageblock_order 7 is that this value makes the minimum CMA
alignment requirement the same as on 4KiB kernels (2MiB).

- Perform 100K dma-buf heap (/dev/dma_heap/system) allocations of SZ_8M,
  SZ_4M, SZ_2M, SZ_1M, SZ_64, SZ_8, SZ_4. Use simpleperf
  (https://developer.android.com/ndk/guides/simpleperf) to measure the number
  of instructions and page faults on 16KiB kernels. The benchmark was
  executed 10 times. The averages are below:

          # instructions          |    # page-faults
      order 10    |    order 7    | order 10 | order 7
  --------------------------------------------------------
  13,891,765,770  | 11,425,777,314 |   220   |   217
  14,456,293,487  | 12,660,819,302 |   224   |   219
  13,924,261,018  | 13,243,970,736 |   217   |   221
  13,910,886,504  | 13,845,519,630 |   217   |   221
  14,388,071,190  | 13,498,583,098 |   223   |   224
  13,656,442,167  | 12,915,831,681 |   216   |   218
  13,300,268,343  | 12,930,484,776 |   222   |   218
  13,625,470,223  | 14,234,092,777 |   219   |   218
  13,508,964,965  | 13,432,689,094 |   225   |   219
  13,368,950,667  | 13,683,587,37  |   219   |   225
  --------------------------------------------------------
  13,803,137,433  | 13,131,974,268 |   220   |   220    Averages

  There were about 4.86% fewer instructions when the order was 7, compared
  with order 10:

  13,803,137,433 - 13,131,974,268 = 671,163,165 (-4.86%)

  The number of page faults with order 7 and order 10 was the same.

  These results didn't show any significant regression when the
  pageblock_order is set to 7 on 16KiB kernels.

- Run Speedometer 3.1 (https://browserbench.org/Speedometer3.1/) 5 times on
  the 16KiB kernels with pageblock_order 7 and 10:

  order 10 | order 7 | order 7 - order 10 | (order 7 - order 10) %
  -------------------------------------------------------------------
    15.8   |  16.4   |        0.6         |        3.80%
    16.4   |  16.2   |       -0.2         |       -1.22%
    16.6   |  16.3   |       -0.3         |       -1.81%
    16.8   |  16.3   |       -0.5         |       -2.98%
    16.6   |  16.8   |        0.2         |        1.20%
  -------------------------------------------------------------------
   16.44   |  16.4   |      -0.04         |       -0.24%    Averages

  The results didn't show any significant regression when the
  pageblock_order is set to 7 on 16KiB kernels.

Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: Liam R. Howlett
Cc: Lorenzo Stoakes
Cc: David Hildenbrand
Cc: Mike Rapoport
Cc: Zi Yan
Cc: Suren Baghdasaryan
Cc: Minchan Kim
Signed-off-by: Juan Yescas
Acked-by: Zi Yan
---
Changes in v5:
- Remove the ranges for CONFIG_PAGE_BLOCK_ORDER. Ranges that reference other
  config symbols (for example "range 1 MY_CONFIG") don't work in Kconfig.
- Add the PAGE_BLOCK_ORDER_MANUAL config for the page block order number.
  No default value is defined.
- Fix typos reported by Andrew.
- Test default configs on powerpc.

Changes in v4:
- Set PAGE_BLOCK_ORDER in include/linux/mmzone.h to validate that
  MAX_PAGE_ORDER >= PAGE_BLOCK_ORDER at compile time.
- This change fixes the warning in:
  https://lore.kernel.org/oe-kbuild-all/202505091548.FuKO4b4v-lkp@intel.com/

Changes in v3:
- Rename ARCH_FORCE_PAGE_BLOCK_ORDER to PAGE_BLOCK_ORDER as per Matthew's
  suggestion.
- Update comments in pageblock-flags.h for the pageblock_order value when
  THP or HugeTLB are not used.

Changes in v2:
- Add Zi's Acked-by tag.
- Move the ARCH_FORCE_PAGE_BLOCK_ORDER config to mm/Kconfig as per Zi's and
  Matthew's suggestion so it is available to all architectures.
- Set ARCH_FORCE_PAGE_BLOCK_ORDER to 10 by default when ARCH_FORCE_MAX_ORDER
  is not available.

 include/linux/mmzone.h          | 16 ++++++++++++++++
 include/linux/pageblock-flags.h |  8 ++++----
 mm/Kconfig                      | 30 ++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6ccec1bf2896..6fdb8f7f74d6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -37,6 +37,22 @@
 
 #define NR_PAGE_ORDERS (MAX_PAGE_ORDER + 1)
 
+/* Defines the order for the number of pages that have a migrate type. */
+#ifndef CONFIG_PAGE_BLOCK_ORDER_MANUAL
+#define PAGE_BLOCK_ORDER MAX_PAGE_ORDER
+#else
+#define PAGE_BLOCK_ORDER CONFIG_PAGE_BLOCK_ORDER_MANUAL
+#endif /* CONFIG_PAGE_BLOCK_ORDER_MANUAL */
+
+/*
+ * The MAX_PAGE_ORDER, which defines the max order of pages to be allocated
+ * by the buddy allocator, has to be larger or equal to the PAGE_BLOCK_ORDER,
+ * which defines the order for the number of pages that can have a migrate type
+ */
+#if (PAGE_BLOCK_ORDER > MAX_PAGE_ORDER)
+#error MAX_PAGE_ORDER must be >= PAGE_BLOCK_ORDER
+#endif
+
 /*
  * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed
  * costly to service. That is between allocation orders which should

diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index fc6b9c87cb0a..e73a4292ef02 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -41,18 +41,18 @@ extern unsigned int pageblock_order;
  * Huge pages are a constant size, but don't exceed the maximum allocation
  * granularity.
  */
-#define pageblock_order	MIN_T(unsigned int, HUGETLB_PAGE_ORDER, MAX_PAGE_ORDER)
+#define pageblock_order	MIN_T(unsigned int, HUGETLB_PAGE_ORDER, PAGE_BLOCK_ORDER)
 
 #endif /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */
 
 #elif defined(CONFIG_TRANSPARENT_HUGEPAGE)
 
-#define pageblock_order	MIN_T(unsigned int, HPAGE_PMD_ORDER, MAX_PAGE_ORDER)
+#define pageblock_order	MIN_T(unsigned int, HPAGE_PMD_ORDER, PAGE_BLOCK_ORDER)
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
-#define pageblock_order	MAX_PAGE_ORDER
+/* If huge pages are not used, group by PAGE_BLOCK_ORDER */
+#define pageblock_order	PAGE_BLOCK_ORDER
 
 #endif /* CONFIG_HUGETLB_PAGE */
 
diff --git a/mm/Kconfig b/mm/Kconfig
index e113f713b493..bd8012b30b39 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -989,6 +989,36 @@ config CMA_AREAS
 
 	  If unsure, leave the default value "8" in UMA and "20" in NUMA.
 
+config PAGE_BLOCK_ORDER
+	bool "Allow setting a custom page block order"
+	default n
+	help
+	  This config allows overriding the default page block order when the
+	  page block order is required to be smaller than ARCH_FORCE_MAX_ORDER
+	  or MAX_PAGE_ORDER.
+
+	  If unsure, do not enable it.
+
+#
+# When PAGE_BLOCK_ORDER is not enabled or ARCH_FORCE_MAX_ORDER is not defined,
+# the default page block order is MAX_PAGE_ORDER (10) as per
+# include/linux/mmzone.h.
+#
+config PAGE_BLOCK_ORDER_MANUAL
+	int "Page Block Order"
+	depends on PAGE_BLOCK_ORDER
+	help
+	  The page block order refers to the power of two number of pages that
+	  are physically contiguous and can have a migrate type associated to
+	  them. The maximum size of the page block order is limited by
+	  ARCH_FORCE_MAX_ORDER.
+
+	  Reducing pageblock order can negatively impact THP generation
+	  success rate. If your workloads use THP heavily, please use this
+	  option with caution.
+
+	  Don't change if unsure.
+
 config MEM_SOFT_DIRTY
 	bool "Track memory changes"
 	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
-- 
2.49.0.1101.gccaa498523-goog