* [PATCH v3 1/4] mm: defer THP insertion to khugepaged
2025-04-14 22:24 [PATCH v3 0/4] mm: introduce THP deferred setting Nico Pache
@ 2025-04-14 22:24 ` Nico Pache
2025-04-14 22:24 ` [PATCH v3 2/4] mm: document (m)THP defer usage Nico Pache
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Nico Pache @ 2025-04-14 22:24 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-kselftest
Cc: akpm, corbet, shuah, david, baohua, baolin.wang, ryan.roberts,
willy, peterx, ioworker0, ziy, wangkefeng.wang, dev.jain, mhocko,
rientjes, hannes, zokeefe, surenb, jglisse, cl, jack,
dave.hansen, will, tiwai, catalin.marinas, anshuman.khandual,
raquini, aarcange, kirill.shutemov, yang, thomas.hellstrom,
vishal.moola, sunnanyong, usamaarif642, mathieu.desnoyers,
mhiramat, rostedt
Setting transparent_hugepage/enabled=always allows applications to benefit
from THPs without having to madvise. However, the page fault (PF) handler
takes very few factors into account when deciding whether or not to actually
use a THP, which can lead to a lot of wasted memory. khugepaged only operates
on memory that was either allocated with enabled=always or marked with
MADV_HUGEPAGE.

Introduce the ability to set enabled=defer, which prevents THPs from being
allocated by the page fault handler unless madvise is set, leaving it up to
khugepaged to decide which allocations will collapse into a THP. This should
allow applications to benefit from THPs while curbing some of the memory
waste.
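
For illustration, a rough userspace sketch of the intended usage (it relies
only on the "defer" sysfs value and the MADV_HUGEPAGE carve-out introduced by
this patch; the program itself is just an example with most error handling
trimmed):

/*
 * Illustrative only: exercise the proposed "defer" mode from userspace.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define THP_ENABLED	"/sys/kernel/mm/transparent_hugepage/enabled"

int main(void)
{
	size_t len = 64UL << 20;	/* 64M anonymous mappings */
	void *hot, *cold;
	int fd;

	/* Global policy: no THPs at fault time, khugepaged decides later. */
	fd = open(THP_ENABLED, O_WRONLY);
	if (fd < 0 || write(fd, "defer", strlen("defer")) < 0)
		perror("setting 'defer' (needs this series and root)");
	if (fd >= 0)
		close(fd);

	hot = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	cold = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (hot == MAP_FAILED || cold == MAP_FAILED)
		return 1;

	/* The madvised region can still get THPs in the fault path... */
	madvise(hot, len, MADV_HUGEPAGE);
	memset(hot, 1, len);

	/* ...while this one is left to khugepaged to collapse (or not). */
	memset(cold, 1, len);

	pause();	/* keep the mappings alive for inspection */
	return 0;
}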
Co-developed-by: Rafael Aquini <raquini@redhat.com>
Signed-off-by: Rafael Aquini <raquini@redhat.com>
Signed-off-by: Nico Pache <npache@redhat.com>
---
include/linux/huge_mm.h | 15 +++++++++++++--
mm/huge_memory.c | 31 +++++++++++++++++++++++++++----
2 files changed, 40 insertions(+), 6 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 95be9d266eff..b154bd11bd93 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -48,6 +48,7 @@ enum transparent_hugepage_flag {
TRANSPARENT_HUGEPAGE_UNSUPPORTED,
TRANSPARENT_HUGEPAGE_FLAG,
TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+ TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG,
TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG,
@@ -186,6 +187,7 @@ static inline bool hugepage_global_enabled(void)
{
return transparent_hugepage_flags &
((1<<TRANSPARENT_HUGEPAGE_FLAG) |
+ (1<<TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG) |
(1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG));
}
@@ -195,6 +197,12 @@ static inline bool hugepage_global_always(void)
(1<<TRANSPARENT_HUGEPAGE_FLAG);
}
+static inline bool hugepage_global_defer(void)
+{
+ return transparent_hugepage_flags &
+ (1<<TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG);
+}
+
static inline int highest_order(unsigned long orders)
{
return fls_long(orders) - 1;
@@ -291,13 +299,16 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
unsigned long tva_flags,
unsigned long orders)
{
+ if ((tva_flags & TVA_IN_PF) && hugepage_global_defer() &&
+ !(vm_flags & VM_HUGEPAGE))
+ return 0;
+
/* Optimization to check if required orders are enabled early. */
if ((tva_flags & TVA_ENFORCE_SYSFS) && vma_is_anonymous(vma)) {
unsigned long mask = READ_ONCE(huge_anon_orders_always);
-
if (vm_flags & VM_HUGEPAGE)
mask |= READ_ONCE(huge_anon_orders_madvise);
- if (hugepage_global_always() ||
+ if (hugepage_global_always() || hugepage_global_defer() ||
((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled()))
mask |= READ_ONCE(huge_anon_orders_inherit);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 35515bff392b..4eee0001e0fa 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -297,12 +297,15 @@ static ssize_t enabled_show(struct kobject *kobj,
const char *output;
if (test_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags))
- output = "[always] madvise never";
+ output = "[always] madvise defer never";
else if (test_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
&transparent_hugepage_flags))
- output = "always [madvise] never";
+ output = "always [madvise] defer never";
+ else if (test_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG,
+ &transparent_hugepage_flags))
+ output = "always madvise [defer] never";
else
- output = "always madvise [never]";
+ output = "always madvise defer [never]";
return sysfs_emit(buf, "%s\n", output);
}
@@ -315,13 +318,20 @@ static ssize_t enabled_store(struct kobject *kobj,
if (sysfs_streq(buf, "always")) {
clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG, &transparent_hugepage_flags);
set_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
+ } else if (sysfs_streq(buf, "defer")) {
+ clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
+ set_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG, &transparent_hugepage_flags);
} else if (sysfs_streq(buf, "madvise")) {
clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG, &transparent_hugepage_flags);
set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
} else if (sysfs_streq(buf, "never")) {
clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG, &transparent_hugepage_flags);
} else
ret = -EINVAL;
@@ -954,18 +964,31 @@ static int __init setup_transparent_hugepage(char *str)
&transparent_hugepage_flags);
clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
&transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG,
+ &transparent_hugepage_flags);
ret = 1;
+ } else if (!strcmp(str, "defer")) {
+ clear_bit(TRANSPARENT_HUGEPAGE_FLAG,
+ &transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ set_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG,
+ &transparent_hugepage_flags);
} else if (!strcmp(str, "madvise")) {
clear_bit(TRANSPARENT_HUGEPAGE_FLAG,
&transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG,
+ &transparent_hugepage_flags);
set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
- &transparent_hugepage_flags);
+ &transparent_hugepage_flags);
ret = 1;
} else if (!strcmp(str, "never")) {
clear_bit(TRANSPARENT_HUGEPAGE_FLAG,
&transparent_hugepage_flags);
clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
&transparent_hugepage_flags);
+ clear_bit(TRANSPARENT_HUGEPAGE_DEFER_PF_INST_FLAG,
+ &transparent_hugepage_flags);
ret = 1;
}
out:
--
2.48.1
* [PATCH v3 2/4] mm: document (m)THP defer usage
2025-04-14 22:24 [PATCH v3 0/4] mm: introduce THP deferred setting Nico Pache
2025-04-14 22:24 ` [PATCH v3 1/4] mm: defer THP insertion to khugepaged Nico Pache
@ 2025-04-14 22:24 ` Nico Pache
2025-04-14 22:33 ` Randy Dunlap
2025-04-14 22:24 ` [PATCH v3 3/4] khugepaged: add defer option to mTHP options Nico Pache
` (2 subsequent siblings)
4 siblings, 1 reply; 8+ messages in thread
From: Nico Pache @ 2025-04-14 22:24 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-kselftest
Cc: akpm, corbet, shuah, david, baohua, baolin.wang, ryan.roberts,
willy, peterx, ioworker0, ziy, wangkefeng.wang, dev.jain, mhocko,
rientjes, hannes, zokeefe, surenb, jglisse, cl, jack,
dave.hansen, will, tiwai, catalin.marinas, anshuman.khandual,
raquini, aarcange, kirill.shutemov, yang, thomas.hellstrom,
vishal.moola, sunnanyong, usamaarif642, mathieu.desnoyers,
mhiramat, rostedt
The new defer option for (m)THPs allows for a more conservative
approach to (m)THPs. Document its usage in the transhuge admin-guide.
Signed-off-by: Nico Pache <npache@redhat.com>
---
Documentation/admin-guide/mm/transhuge.rst | 31 ++++++++++++++++------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index f0d4e78cedaa..d3f072bdd932 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -88,8 +88,9 @@ In certain cases when hugepages are enabled system wide, application
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it, in that case a 2M page might
be allocated instead of a 4k page for no good. This is why it's
-possible to disable hugepages system-wide and to only have them inside
-MADV_HUGEPAGE madvise regions.
+possible to disable hugepages system-wide, only have them inside
+MADV_HUGEPAGE madvise regions, or defer them away from the page fault
+handler to khugepaged.
Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
@@ -99,6 +100,15 @@ Applications that gets a lot of benefit from hugepages and that don't
risk to lose memory by using hugepages, should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
+Applications that would like to benefit from THPs but would still like a
+more memory conservative approach can choose 'defer'. This avoids
+inserting THPs at the page fault handler unless they are MADV_HUGEPAGE.
+Khugepaged will then scan the mappings for potential collapses into (m)THP
+pages. Admins using this the 'defer' setting should consider
+tweaking khugepaged/max_ptes_none. The current default of 511 may
+aggressively collapse your PTEs into PMDs. Lower this value to conserve
+more memory (ie. max_ptes_none=64).
+
.. _thp_sysfs:
sysfs
@@ -109,11 +119,14 @@ Global THP controls
Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
-regions (to avoid the risk of consuming more memory resources) or enabled
-system wide. This can be achieved per-supported-THP-size with one of::
+regions (to avoid the risk of consuming more memory resources), defered to
+khugepaged, or enabled system wide.
+
+This can be achieved per-supported-THP-size with one of::
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+ echo defer >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
where <size> is the hugepage size being addressed, the available sizes
@@ -136,6 +149,7 @@ The top-level setting (for use with "inherit") can be set by issuing
one of the following commands::
echo always >/sys/kernel/mm/transparent_hugepage/enabled
+ echo defer >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
@@ -281,7 +295,8 @@ of small pages into one large page::
A higher value leads to use additional memory for programs.
A lower value leads to gain less thp performance. Value of
max_ptes_none can waste cpu time very little, you can
-ignore it.
+ignore it. Consider lowering this value when using
+``transparent_hugepage=defer``
``max_ptes_swap`` specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page::
@@ -306,14 +321,14 @@ Boot parameters
You can change the sysfs boot time default for the top-level "enabled"
control by passing the parameter ``transparent_hugepage=always`` or
-``transparent_hugepage=madvise`` or ``transparent_hugepage=never`` to the
-kernel command line.
+``transparent_hugepage=madvise`` or ``transparent_hugepage=defer`` or
+``transparent_hugepage=never`` to the kernel command line.
Alternatively, each supported anonymous THP size can be controlled by
passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
where ``<size>`` is the THP size (must be a power of 2 of PAGE_SIZE and
supported anonymous THP) and ``<state>`` is one of ``always``, ``madvise``,
-``never`` or ``inherit``.
+``defer``, ``never`` or ``inherit``.
For example, the following will set 16K, 32K, 64K THP to ``always``,
set 128K, 512K to ``inherit``, set 256K to ``madvise`` and 1M, 2M
--
2.48.1
* Re: [PATCH v3 2/4] mm: document (m)THP defer usage
2025-04-14 22:24 ` [PATCH v3 2/4] mm: document (m)THP defer usage Nico Pache
@ 2025-04-14 22:33 ` Randy Dunlap
0 siblings, 0 replies; 8+ messages in thread
From: Randy Dunlap @ 2025-04-14 22:33 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-kselftest
Cc: akpm, corbet, shuah, david, baohua, baolin.wang, ryan.roberts,
willy, peterx, ioworker0, ziy, wangkefeng.wang, dev.jain, mhocko,
rientjes, hannes, zokeefe, surenb, jglisse, cl, jack,
dave.hansen, will, tiwai, catalin.marinas, anshuman.khandual,
raquini, aarcange, kirill.shutemov, yang, thomas.hellstrom,
vishal.moola, sunnanyong, usamaarif642, mathieu.desnoyers,
mhiramat, rostedt
On 4/14/25 3:24 PM, Nico Pache wrote:
> The new defer option for (m)THPs allows for a more conservative
> approach to (m)THPs. Document its usage in the transhuge admin-guide.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> Documentation/admin-guide/mm/transhuge.rst | 31 ++++++++++++++++------
> 1 file changed, 23 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index f0d4e78cedaa..d3f072bdd932 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -88,8 +88,9 @@ In certain cases when hugepages are enabled system wide, application
> may end up allocating more memory resources. An application may mmap a
> large region but only touch 1 byte of it, in that case a 2M page might
> be allocated instead of a 4k page for no good. This is why it's
> -possible to disable hugepages system-wide and to only have them inside
> -MADV_HUGEPAGE madvise regions.
> +possible to disable hugepages system-wide, only have them inside
> +MADV_HUGEPAGE madvise regions, or defer them away from the page fault
> +handler to khugepaged.
>
> Embedded systems should enable hugepages only inside madvise regions
> to eliminate any risk of wasting any precious byte of memory and to
> @@ -99,6 +100,15 @@ Applications that gets a lot of benefit from hugepages and that don't
> risk to lose memory by using hugepages, should use
> madvise(MADV_HUGEPAGE) on their critical mmapped regions.
>
> +Applications that would like to benefit from THPs but would still like a
> +more memory conservative approach can choose 'defer'. This avoids
> +inserting THPs at the page fault handler unless they are MADV_HUGEPAGE.
> +Khugepaged will then scan the mappings for potential collapses into (m)THP
> +pages. Admins using this the 'defer' setting should consider
> +tweaking khugepaged/max_ptes_none. The current default of 511 may
> +aggressively collapse your PTEs into PMDs. Lower this value to conserve
> +more memory (ie. max_ptes_none=64).
i.e.,
> +
> .. _thp_sysfs:
>
> sysfs
> @@ -109,11 +119,14 @@ Global THP controls
>
> Transparent Hugepage Support for anonymous memory can be entirely disabled
> (mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
> -regions (to avoid the risk of consuming more memory resources) or enabled
> -system wide. This can be achieved per-supported-THP-size with one of::
> +regions (to avoid the risk of consuming more memory resources), defered to
deferred
> +khugepaged, or enabled system wide.
> +
> +This can be achieved per-supported-THP-size with one of::
>
> echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
> echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
> + echo defer >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
> echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
>
> where <size> is the hugepage size being addressed, the available sizes
> @@ -136,6 +149,7 @@ The top-level setting (for use with "inherit") can be set by issuing
> one of the following commands::
>
> echo always >/sys/kernel/mm/transparent_hugepage/enabled
> + echo defer >/sys/kernel/mm/transparent_hugepage/enabled
> echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
> echo never >/sys/kernel/mm/transparent_hugepage/enabled
>
> @@ -281,7 +295,8 @@ of small pages into one large page::
> A higher value leads to use additional memory for programs.
> A lower value leads to gain less thp performance. Value of
> max_ptes_none can waste cpu time very little, you can
> -ignore it.
> +ignore it. Consider lowering this value when using
> +``transparent_hugepage=defer``
>
> ``max_ptes_swap`` specifies how many pages can be brought in from
> swap when collapsing a group of pages into a transparent huge page::
> @@ -306,14 +321,14 @@ Boot parameters
>
> You can change the sysfs boot time default for the top-level "enabled"
> control by passing the parameter ``transparent_hugepage=always`` or
> -``transparent_hugepage=madvise`` or ``transparent_hugepage=never`` to the
> -kernel command line.
> +``transparent_hugepage=madvise`` or ``transparent_hugepage=defer`` or
> +``transparent_hugepage=never`` to the kernel command line.
>
> Alternatively, each supported anonymous THP size can be controlled by
> passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
> where ``<size>`` is the THP size (must be a power of 2 of PAGE_SIZE and
> supported anonymous THP) and ``<state>`` is one of ``always``, ``madvise``,
> -``never`` or ``inherit``.
> +``defer``, ``never`` or ``inherit``.
>
> For example, the following will set 16K, 32K, 64K THP to ``always``,
> set 128K, 512K to ``inherit``, set 256K to ``madvise`` and 1M, 2M
--
~Randy
* [PATCH v3 3/4] khugepaged: add defer option to mTHP options
2025-04-14 22:24 [PATCH v3 0/4] mm: introduce THP deferred setting Nico Pache
2025-04-14 22:24 ` [PATCH v3 1/4] mm: defer THP insertion to khugepaged Nico Pache
2025-04-14 22:24 ` [PATCH v3 2/4] mm: document (m)THP defer usage Nico Pache
@ 2025-04-14 22:24 ` Nico Pache
2025-04-14 22:24 ` [PATCH v3 4/4] selftests: mm: add defer to thp setting parser Nico Pache
2025-04-15 0:37 ` [PATCH v3 0/4] mm: introduce THP deferred setting Andrew Morton
4 siblings, 0 replies; 8+ messages in thread
From: Nico Pache @ 2025-04-14 22:24 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-kselftest
Cc: akpm, corbet, shuah, david, baohua, baolin.wang, ryan.roberts,
willy, peterx, ioworker0, ziy, wangkefeng.wang, dev.jain, mhocko,
rientjes, hannes, zokeefe, surenb, jglisse, cl, jack,
dave.hansen, will, tiwai, catalin.marinas, anshuman.khandual,
raquini, aarcange, kirill.shutemov, yang, thomas.hellstrom,
vishal.moola, sunnanyong, usamaarif642, mathieu.desnoyers,
mhiramat, rostedt
Now that we have a global 'defer' setting to keep THPs out of the page fault
path, let's add a defer setting to the mTHP options as well. This allows
khugepaged to operate at that order while avoiding it at PF time.
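
The TVA_IN_KHUGEPAGE encoding added in huge_mm.h below deliberately includes
the TVA_ENFORCE_SYSFS bit, so khugepaged callers keep obeying the sysfs policy
while also picking up the defer orders. A standalone sketch of that flag
logic, mirroring the defines from the hunk:

/* Mirror of the flag values from the huge_mm.h hunk below; standalone. */
#include <assert.h>

#define TVA_SMAPS		(1 << 0)
#define TVA_IN_PF		(1 << 1)
#define TVA_ENFORCE_SYSFS	(1 << 2)
#define TVA_IN_KHUGEPAGE	((1 << 2) | (1 << 3))

int main(void)
{
	unsigned long khugepaged_flags = TVA_IN_KHUGEPAGE;
	unsigned long fault_flags = TVA_IN_PF | TVA_ENFORCE_SYSFS;

	/* khugepaged: sysfs rules still enforced, defer orders included */
	assert(khugepaged_flags & TVA_ENFORCE_SYSFS);
	assert((khugepaged_flags & TVA_IN_KHUGEPAGE) == TVA_IN_KHUGEPAGE);

	/* fault path: sysfs rules enforced, defer orders NOT included */
	assert(fault_flags & TVA_ENFORCE_SYSFS);
	assert((fault_flags & TVA_IN_KHUGEPAGE) != TVA_IN_KHUGEPAGE);

	return 0;
}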
Signed-off-by: Nico Pache <npache@redhat.com>
---
include/linux/huge_mm.h | 5 +++++
mm/huge_memory.c | 38 +++++++++++++++++++++++++++++++++-----
mm/khugepaged.c | 10 +++++-----
3 files changed, 43 insertions(+), 10 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b154bd11bd93..fe6e8dd68100 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -96,6 +96,7 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
#define TVA_SMAPS (1 << 0) /* Will be used for procfs */
#define TVA_IN_PF (1 << 1) /* Page fault handler */
#define TVA_ENFORCE_SYSFS (1 << 2) /* Obey sysfs configuration */
+#define TVA_IN_KHUGEPAGE ((1 << 2) | (1 << 3)) /* Khugepaged defer support */
#define thp_vma_allowable_order(vma, vm_flags, tva_flags, order) \
(!!thp_vma_allowable_orders(vma, vm_flags, tva_flags, BIT(order)))
@@ -182,6 +183,7 @@ extern unsigned long transparent_hugepage_flags;
extern unsigned long huge_anon_orders_always;
extern unsigned long huge_anon_orders_madvise;
extern unsigned long huge_anon_orders_inherit;
+extern unsigned long huge_anon_orders_defer;
static inline bool hugepage_global_enabled(void)
{
@@ -306,6 +308,9 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
/* Optimization to check if required orders are enabled early. */
if ((tva_flags & TVA_ENFORCE_SYSFS) && vma_is_anonymous(vma)) {
unsigned long mask = READ_ONCE(huge_anon_orders_always);
+
+ if ((tva_flags & TVA_IN_KHUGEPAGE) == TVA_IN_KHUGEPAGE)
+ mask |= READ_ONCE(huge_anon_orders_defer);
if (vm_flags & VM_HUGEPAGE)
mask |= READ_ONCE(huge_anon_orders_madvise);
if (hugepage_global_always() || hugepage_global_defer() ||
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4eee0001e0fa..30ea898d4ce6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -81,6 +81,7 @@ unsigned long huge_zero_pfn __read_mostly = ~0UL;
unsigned long huge_anon_orders_always __read_mostly;
unsigned long huge_anon_orders_madvise __read_mostly;
unsigned long huge_anon_orders_inherit __read_mostly;
+unsigned long huge_anon_orders_defer __read_mostly;
static bool anon_orders_configured __initdata;
static inline bool file_thp_enabled(struct vm_area_struct *vma)
@@ -505,13 +506,15 @@ static ssize_t anon_enabled_show(struct kobject *kobj,
const char *output;
if (test_bit(order, &huge_anon_orders_always))
- output = "[always] inherit madvise never";
+ output = "[always] inherit madvise defer never";
else if (test_bit(order, &huge_anon_orders_inherit))
- output = "always [inherit] madvise never";
+ output = "always [inherit] madvise defer never";
else if (test_bit(order, &huge_anon_orders_madvise))
- output = "always inherit [madvise] never";
+ output = "always inherit [madvise] defer never";
+ else if (test_bit(order, &huge_anon_orders_defer))
+ output = "always inherit madvise [defer] never";
else
- output = "always inherit madvise [never]";
+ output = "always inherit madvise defer [never]";
return sysfs_emit(buf, "%s\n", output);
}
@@ -527,25 +530,36 @@ static ssize_t anon_enabled_store(struct kobject *kobj,
spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_inherit);
clear_bit(order, &huge_anon_orders_madvise);
+ clear_bit(order, &huge_anon_orders_defer);
set_bit(order, &huge_anon_orders_always);
spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "inherit")) {
spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_always);
clear_bit(order, &huge_anon_orders_madvise);
+ clear_bit(order, &huge_anon_orders_defer);
set_bit(order, &huge_anon_orders_inherit);
spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "madvise")) {
spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_always);
clear_bit(order, &huge_anon_orders_inherit);
+ clear_bit(order, &huge_anon_orders_defer);
set_bit(order, &huge_anon_orders_madvise);
spin_unlock(&huge_anon_orders_lock);
+ } else if (sysfs_streq(buf, "defer")) {
+ spin_lock(&huge_anon_orders_lock);
+ clear_bit(order, &huge_anon_orders_always);
+ clear_bit(order, &huge_anon_orders_inherit);
+ clear_bit(order, &huge_anon_orders_madvise);
+ set_bit(order, &huge_anon_orders_defer);
+ spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "never")) {
spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_always);
clear_bit(order, &huge_anon_orders_inherit);
clear_bit(order, &huge_anon_orders_madvise);
+ clear_bit(order, &huge_anon_orders_defer);
spin_unlock(&huge_anon_orders_lock);
} else
ret = -EINVAL;
@@ -1002,7 +1016,7 @@ static char str_dup[PAGE_SIZE] __initdata;
static int __init setup_thp_anon(char *str)
{
char *token, *range, *policy, *subtoken;
- unsigned long always, inherit, madvise;
+ unsigned long always, inherit, madvise, defer;
char *start_size, *end_size;
int start, end, nr;
char *p;
@@ -1014,6 +1028,8 @@ static int __init setup_thp_anon(char *str)
always = huge_anon_orders_always;
madvise = huge_anon_orders_madvise;
inherit = huge_anon_orders_inherit;
+ defer = huge_anon_orders_defer;
+
p = str_dup;
while ((token = strsep(&p, ";")) != NULL) {
range = strsep(&token, ":");
@@ -1053,18 +1069,28 @@ static int __init setup_thp_anon(char *str)
bitmap_set(&always, start, nr);
bitmap_clear(&inherit, start, nr);
bitmap_clear(&madvise, start, nr);
+ bitmap_clear(&defer, start, nr);
} else if (!strcmp(policy, "madvise")) {
bitmap_set(&madvise, start, nr);
bitmap_clear(&inherit, start, nr);
bitmap_clear(&always, start, nr);
+ bitmap_clear(&defer, start, nr);
} else if (!strcmp(policy, "inherit")) {
bitmap_set(&inherit, start, nr);
bitmap_clear(&madvise, start, nr);
bitmap_clear(&always, start, nr);
+ bitmap_clear(&defer, start, nr);
+ } else if (!strcmp(policy, "defer")) {
+ bitmap_set(&defer, start, nr);
+ bitmap_clear(&madvise, start, nr);
+ bitmap_clear(&always, start, nr);
+ bitmap_clear(&inherit, start, nr);
} else if (!strcmp(policy, "never")) {
bitmap_clear(&inherit, start, nr);
bitmap_clear(&madvise, start, nr);
bitmap_clear(&always, start, nr);
+ bitmap_clear(&defer, start, nr);
+
} else {
pr_err("invalid policy %s in thp_anon boot parameter\n", policy);
goto err;
@@ -1075,6 +1101,8 @@ static int __init setup_thp_anon(char *str)
huge_anon_orders_always = always;
huge_anon_orders_madvise = madvise;
huge_anon_orders_inherit = inherit;
+ huge_anon_orders_defer = defer;
+
anon_orders_configured = true;
return 1;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d35abae76c93..1123a2225d6f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -491,7 +491,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
{
if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
hugepage_pmd_enabled()) {
- if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
+ if (thp_vma_allowable_order(vma, vm_flags, TVA_IN_KHUGEPAGE,
PMD_ORDER))
__khugepaged_enter(vma->vm_mm);
}
@@ -955,7 +955,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
struct collapse_control *cc, int order)
{
struct vm_area_struct *vma;
- unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
+ unsigned long tva_flags = cc->is_khugepaged ? TVA_IN_KHUGEPAGE : 0;
if (unlikely(khugepaged_test_exit_or_disable(mm)))
return SCAN_ANY_PROCESS;
@@ -1430,7 +1430,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
bool writable = false;
int chunk_none_count = 0;
int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER);
- unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
+ unsigned long tva_flags = cc->is_khugepaged ? TVA_IN_KHUGEPAGE : 0;
VM_BUG_ON(address & ~HPAGE_PMD_MASK);
result = find_pmd_or_thp_or_none(mm, address, &pmd);
@@ -2550,7 +2550,7 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
{
int result = SCAN_FAIL;
struct mm_struct *mm = vma->vm_mm;
- unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
+ unsigned long tva_flags = cc->is_khugepaged ? TVA_IN_KHUGEPAGE : 0;
if (thp_vma_allowable_order(vma, vma->vm_flags,
tva_flags, PMD_ORDER)) {
@@ -2635,7 +2635,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
break;
}
if (!thp_vma_allowable_order(vma, vma->vm_flags,
- TVA_ENFORCE_SYSFS, PMD_ORDER)) {
+ TVA_IN_KHUGEPAGE, PMD_ORDER)) {
skip:
progress++;
continue;
--
2.48.1
* [PATCH v3 4/4] selftests: mm: add defer to thp setting parser
2025-04-14 22:24 [PATCH v3 0/4] mm: introduce THP deferred setting Nico Pache
` (2 preceding siblings ...)
2025-04-14 22:24 ` [PATCH v3 3/4] khugepaged: add defer option to mTHP options Nico Pache
@ 2025-04-14 22:24 ` Nico Pache
2025-04-15 0:37 ` [PATCH v3 0/4] mm: introduce THP deferred setting Andrew Morton
4 siblings, 0 replies; 8+ messages in thread
From: Nico Pache @ 2025-04-14 22:24 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-kselftest
Cc: akpm, corbet, shuah, david, baohua, baolin.wang, ryan.roberts,
willy, peterx, ioworker0, ziy, wangkefeng.wang, dev.jain, mhocko,
rientjes, hannes, zokeefe, surenb, jglisse, cl, jack,
dave.hansen, will, tiwai, catalin.marinas, anshuman.khandual,
raquini, aarcange, kirill.shutemov, yang, thomas.hellstrom,
vishal.moola, sunnanyong, usamaarif642, mathieu.desnoyers,
mhiramat, rostedt
Add the defer setting to the selftests library for reading THP settings.
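
The only invariant the parser relies on is that the enum in thp_settings.h and
the string table in thp_settings.c stay in the same order, which is why
"defer" is appended in the same position in both hunks below. A tiny
standalone mock of that mapping (trimmed to the entries visible in the hunks;
the helper is hypothetical, not the selftest API):

/* Standalone mock of the enum<->string mapping touched by this patch. */
#include <stdio.h>

enum thp_enabled {
	THP_ALWAYS,
	THP_INHERIT,
	THP_MADVISE,
	THP_DEFER,	/* new entry, appended as in thp_settings.h */
};

static const char * const thp_enabled_strings[] = {
	"always",
	"inherit",
	"madvise",
	"defer",	/* must line up with THP_DEFER above */
	NULL
};

int main(void)
{
	/* prints "defer": the enum value indexes straight into the table */
	printf("%s\n", thp_enabled_strings[THP_DEFER]);
	return 0;
}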
Signed-off-by: Nico Pache <npache@redhat.com>
---
tools/testing/selftests/mm/thp_settings.c | 1 +
tools/testing/selftests/mm/thp_settings.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/tools/testing/selftests/mm/thp_settings.c b/tools/testing/selftests/mm/thp_settings.c
index ad872af1c81a..b2f9f62b302a 100644
--- a/tools/testing/selftests/mm/thp_settings.c
+++ b/tools/testing/selftests/mm/thp_settings.c
@@ -20,6 +20,7 @@ static const char * const thp_enabled_strings[] = {
"always",
"inherit",
"madvise",
+ "defer",
NULL
};
diff --git a/tools/testing/selftests/mm/thp_settings.h b/tools/testing/selftests/mm/thp_settings.h
index fc131d23d593..0d52e6d4f754 100644
--- a/tools/testing/selftests/mm/thp_settings.h
+++ b/tools/testing/selftests/mm/thp_settings.h
@@ -11,6 +11,7 @@ enum thp_enabled {
THP_ALWAYS,
THP_INHERIT,
THP_MADVISE,
+ THP_DEFER,
};
enum thp_defrag {
--
2.48.1
* Re: [PATCH v3 0/4] mm: introduce THP deferred setting
2025-04-14 22:24 [PATCH v3 0/4] mm: introduce THP deferred setting Nico Pache
` (3 preceding siblings ...)
2025-04-14 22:24 ` [PATCH v3 4/4] selftests: mm: add defer to thp setting parser Nico Pache
@ 2025-04-15 0:37 ` Andrew Morton
2025-04-15 0:50 ` Andrew Morton
4 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2025-04-15 0:37 UTC (permalink / raw)
To: Nico Pache
Cc: linux-mm, linux-doc, linux-kernel, linux-kselftest, corbet,
shuah, david, baohua, baolin.wang, ryan.roberts, willy, peterx,
ioworker0, ziy, wangkefeng.wang, dev.jain, mhocko, rientjes,
hannes, zokeefe, surenb, jglisse, cl, jack, dave.hansen, will,
tiwai, catalin.marinas, anshuman.khandual, raquini, aarcange,
kirill.shutemov, yang, thomas.hellstrom, vishal.moola,
sunnanyong, usamaarif642, mathieu.desnoyers, mhiramat, rostedt
This will need updating to latest kernels, please.
btw, hpage_collapse_scan_pmd() has a trace_mm_khugepaged_scan_pmd() in
it, which seems wrong...
* Re: [PATCH v3 0/4] mm: introduce THP deferred setting
2025-04-15 0:37 ` [PATCH v3 0/4] mm: introduce THP deferred setting Andrew Morton
@ 2025-04-15 0:50 ` Andrew Morton
0 siblings, 0 replies; 8+ messages in thread
From: Andrew Morton @ 2025-04-15 0:50 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-kselftest,
corbet, shuah, david, baohua, baolin.wang, ryan.roberts, willy,
peterx, ioworker0, ziy, wangkefeng.wang, dev.jain, mhocko,
rientjes, hannes, zokeefe, surenb, jglisse, cl, jack,
dave.hansen, will, tiwai, catalin.marinas, anshuman.khandual,
raquini, aarcange, kirill.shutemov, yang, thomas.hellstrom,
vishal.moola, sunnanyong, usamaarif642, mathieu.desnoyers,
mhiramat, rostedt
On Mon, 14 Apr 2025 17:37:35 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:
> This will need updating to latest kernels, please.
There I go, applying patchsets in reverse time order again.
"[PATCH v3 07/12] khugepaged: add mTHP support" has one reject against
current mainline, and two against current mm.git.