* [PATCH v2 0/2] kfence: allow change objects number
@ 2025-12-18 6:39 yuan linyu
2025-12-18 6:39 ` [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS yuan linyu
2025-12-18 6:39 ` [PATCH v2 2/2] kfence: allow change number of object by early parameter yuan linyu
0 siblings, 2 replies; 14+ messages in thread
From: yuan linyu @ 2025-12-18 6:39 UTC (permalink / raw)
To: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton,
Huacai Chen, WANG Xuerui, kasan-dev, linux-mm, loongarch
Cc: linux-kernel, yuan linyu
patch01 uses the common KFENCE_POOL_SIZE macro for LoongArch
patch02 allows changing the number of objects
v1: https://lore.kernel.org/lkml/20251218015849.1414609-1-yuanlinyu@honor.com/
v2: dropped patch02 of v1
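A hypothetical usage example: booting with

  kfence.num_objects=2000

selects 2000 objects instead of CONFIG_KFENCE_NUM_OBJECTS (patch02 clamps
the value to [1, 65535]).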
yuan linyu (2):
LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
kfence: allow change number of object by early parameter
arch/loongarch/include/asm/pgtable.h | 3 +-
include/linux/kfence.h | 5 +-
mm/kfence/core.c | 122 +++++++++++++++++++--------
mm/kfence/kfence.h | 4 +-
mm/kfence/kfence_test.c | 2 +-
5 files changed, 98 insertions(+), 38 deletions(-)
--
2.25.1
* [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
2025-12-18 6:39 [PATCH v2 0/2] kfence: allow change objects number yuan linyu
@ 2025-12-18 6:39 ` yuan linyu
2025-12-19 2:13 ` Huacai Chen
2025-12-20 14:34 ` kernel test robot
2025-12-18 6:39 ` [PATCH v2 2/2] kfence: allow change number of object by early parameter yuan linyu
1 sibling, 2 replies; 14+ messages in thread
From: yuan linyu @ 2025-12-18 6:39 UTC (permalink / raw)
To: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton,
Huacai Chen, WANG Xuerui, kasan-dev, linux-mm, loongarch
Cc: linux-kernel, yuan linyu
Use the common KFENCE macro KFENCE_POOL_SIZE in the KFENCE_AREA_SIZE
definition. The value is unchanged, since
((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 + 2) * PAGE_SIZE is equal to
KFENCE_POOL_SIZE + 2 * PAGE_SIZE.
Signed-off-by: yuan linyu <yuanlinyu@honor.com>
---
arch/loongarch/include/asm/pgtable.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index f41a648a3d9e..e9966c9f844f 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -10,6 +10,7 @@
#define _ASM_PGTABLE_H
#include <linux/compiler.h>
+#include <linux/kfence.h>
#include <asm/addrspace.h>
#include <asm/asm.h>
#include <asm/page.h>
@@ -96,7 +97,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define MODULES_END (MODULES_VADDR + SZ_256M)
#ifdef CONFIG_KFENCE
-#define KFENCE_AREA_SIZE (((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 + 2) * PAGE_SIZE)
+#define KFENCE_AREA_SIZE (KFENCE_POOL_SIZE + (2 * PAGE_SIZE))
#else
#define KFENCE_AREA_SIZE 0
#endif
--
2.25.1
* [PATCH v2 2/2] kfence: allow change number of object by early parameter
2025-12-18 6:39 [PATCH v2 0/2] kfence: allow change objects number yuan linyu
2025-12-18 6:39 ` [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS yuan linyu
@ 2025-12-18 6:39 ` yuan linyu
2025-12-18 8:56 ` Marco Elver
2025-12-20 14:59 ` kernel test robot
1 sibling, 2 replies; 14+ messages in thread
From: yuan linyu @ 2025-12-18 6:39 UTC (permalink / raw)
To: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton,
Huacai Chen, WANG Xuerui, kasan-dev, linux-mm, loongarch
Cc: linux-kernel, yuan linyu
Currently, changing the KFENCE pool size is not easy: it requires
recompiling the kernel.

Add an early boot parameter, kfence.num_objects, to allow changing the
number of KFENCE objects and growing the total pool for a higher chance
of catching failures.
Signed-off-by: yuan linyu <yuanlinyu@honor.com>
---
include/linux/kfence.h | 5 +-
mm/kfence/core.c | 122 +++++++++++++++++++++++++++++-----------
mm/kfence/kfence.h | 4 +-
mm/kfence/kfence_test.c | 2 +-
4 files changed, 96 insertions(+), 37 deletions(-)
diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 0ad1ddbb8b99..920bcd5649fa 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -24,7 +24,10 @@ extern unsigned long kfence_sample_interval;
* address to metadata indices; effectively, the very first page serves as an
* extended guard page, but otherwise has no special purpose.
*/
-#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
+extern unsigned int __kfence_pool_size;
+#define KFENCE_POOL_SIZE (__kfence_pool_size)
+extern unsigned int __kfence_num_objects;
+#define KFENCE_NUM_OBJECTS (__kfence_num_objects)
extern char *__kfence_pool;
DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 577a1699c553..5d5cea59c7b6 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -132,6 +132,31 @@ struct kfence_metadata *kfence_metadata __read_mostly;
*/
static struct kfence_metadata *kfence_metadata_init __read_mostly;
+/* allow change number of objects from cmdline */
+#define KFENCE_MIN_NUM_OBJECTS 1
+#define KFENCE_MAX_NUM_OBJECTS 65535
+unsigned int __kfence_num_objects __read_mostly = CONFIG_KFENCE_NUM_OBJECTS;
+EXPORT_SYMBOL(__kfence_num_objects); /* Export for test modules. */
+static unsigned int __kfence_pool_pages __read_mostly = (CONFIG_KFENCE_NUM_OBJECTS + 1) * 2;
+unsigned int __kfence_pool_size __read_mostly = (CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE;
+EXPORT_SYMBOL(__kfence_pool_size); /* Export for lkdtm module. */
+
+static int __init early_parse_kfence_num_objects(char *buf)
+{
+ unsigned int num;
+ int ret = kstrtouint(buf, 10, &num);
+
+ if (ret < 0)
+ return ret;
+
+ __kfence_num_objects = clamp(num, KFENCE_MIN_NUM_OBJECTS, KFENCE_MAX_NUM_OBJECTS);
+ __kfence_pool_pages = (__kfence_num_objects + 1) * 2;
+ __kfence_pool_size = __kfence_pool_pages * PAGE_SIZE;
+
+ return 0;
+}
+early_param("kfence.num_objects", early_parse_kfence_num_objects);
+
/* Freelist with available objects. */
static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
@@ -155,12 +180,13 @@ atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
*
* P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)) ^ HNUM
*/
+static unsigned int kfence_alloc_covered_order __read_mostly;
+static unsigned int kfence_alloc_covered_mask __read_mostly;
+static atomic_t *alloc_covered __read_mostly;
#define ALLOC_COVERED_HNUM 2
-#define ALLOC_COVERED_ORDER (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2)
-#define ALLOC_COVERED_SIZE (1 << ALLOC_COVERED_ORDER)
-#define ALLOC_COVERED_HNEXT(h) hash_32(h, ALLOC_COVERED_ORDER)
-#define ALLOC_COVERED_MASK (ALLOC_COVERED_SIZE - 1)
-static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
+#define ALLOC_COVERED_HNEXT(h) hash_32(h, kfence_alloc_covered_order)
+#define ALLOC_COVERED_MASK (kfence_alloc_covered_mask)
+#define KFENCE_COVERED_SIZE (sizeof(atomic_t) * (1 << kfence_alloc_covered_order))
/* Stack depth used to determine uniqueness of an allocation. */
#define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8)
@@ -200,7 +226,7 @@ static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
static inline bool should_skip_covered(void)
{
- unsigned long thresh = (CONFIG_KFENCE_NUM_OBJECTS * kfence_skip_covered_thresh) / 100;
+ unsigned long thresh = (__kfence_num_objects * kfence_skip_covered_thresh) / 100;
return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh;
}
@@ -262,7 +288,7 @@ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *m
/* Only call with a pointer into kfence_metadata. */
if (KFENCE_WARN_ON(meta < kfence_metadata ||
- meta >= kfence_metadata + CONFIG_KFENCE_NUM_OBJECTS))
+ meta >= kfence_metadata + __kfence_num_objects))
return 0;
/*
@@ -612,7 +638,7 @@ static unsigned long kfence_init_pool(void)
* fast-path in SLUB, and therefore need to ensure kfree() correctly
* enters __slab_free() slow-path.
*/
- for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
+ for (i = 0; i < __kfence_pool_pages; i++) {
struct page *page;
if (!i || (i % 2))
@@ -640,7 +666,7 @@ static unsigned long kfence_init_pool(void)
addr += PAGE_SIZE;
}
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ for (i = 0; i < __kfence_num_objects; i++) {
struct kfence_metadata *meta = &kfence_metadata_init[i];
/* Initialize metadata. */
@@ -666,7 +692,7 @@ static unsigned long kfence_init_pool(void)
return 0;
reset_slab:
- for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
+ for (i = 0; i < __kfence_pool_pages; i++) {
struct page *page;
if (!i || (i % 2))
@@ -710,7 +736,7 @@ static bool __init kfence_init_pool_early(void)
* fails for the first page, and therefore expect addr==__kfence_pool in
* most failure cases.
*/
- memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
+ memblock_free_late(__pa(addr), __kfence_pool_size - (addr - (unsigned long)__kfence_pool));
__kfence_pool = NULL;
memblock_free_late(__pa(kfence_metadata_init), KFENCE_METADATA_SIZE);
@@ -740,7 +766,7 @@ DEFINE_SHOW_ATTRIBUTE(stats);
*/
static void *start_object(struct seq_file *seq, loff_t *pos)
{
- if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+ if (*pos < __kfence_num_objects)
return (void *)((long)*pos + 1);
return NULL;
}
@@ -752,7 +778,7 @@ static void stop_object(struct seq_file *seq, void *v)
static void *next_object(struct seq_file *seq, void *v, loff_t *pos)
{
++*pos;
- if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+ if (*pos < __kfence_num_objects)
return (void *)((long)*pos + 1);
return NULL;
}
@@ -799,7 +825,7 @@ static void kfence_check_all_canary(void)
{
int i;
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ for (i = 0; i < __kfence_num_objects; i++) {
struct kfence_metadata *meta = &kfence_metadata[i];
if (kfence_obj_allocated(meta))
@@ -894,7 +920,7 @@ void __init kfence_alloc_pool_and_metadata(void)
* re-allocate the memory pool.
*/
if (!__kfence_pool)
- __kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
+ __kfence_pool = memblock_alloc(__kfence_pool_size, PAGE_SIZE);
if (!__kfence_pool) {
pr_err("failed to allocate pool\n");
@@ -903,11 +929,23 @@ void __init kfence_alloc_pool_and_metadata(void)
/* The memory allocated by memblock has been zeroed out. */
kfence_metadata_init = memblock_alloc(KFENCE_METADATA_SIZE, PAGE_SIZE);
- if (!kfence_metadata_init) {
- pr_err("failed to allocate metadata\n");
- memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
- __kfence_pool = NULL;
- }
+ if (!kfence_metadata_init)
+ goto fail_pool;
+
+ kfence_alloc_covered_order = ilog2(__kfence_num_objects) + 2;
+ kfence_alloc_covered_mask = (1 << kfence_alloc_covered_order) - 1;
+ alloc_covered = memblock_alloc(KFENCE_COVERED_SIZE, PAGE_SIZE);
+ if (alloc_covered)
+ return;
+
+ pr_err("failed to allocate covered\n");
+ memblock_free(kfence_metadata_init, KFENCE_METADATA_SIZE);
+ kfence_metadata_init = NULL;
+
+fail_pool:
+ pr_err("failed to allocate metadata\n");
+ memblock_free(__kfence_pool, __kfence_pool_size);
+ __kfence_pool = NULL;
}
static void kfence_init_enable(void)
@@ -930,9 +968,9 @@ static void kfence_init_enable(void)
WRITE_ONCE(kfence_enabled, true);
queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
- pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
- CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
- (void *)(__kfence_pool + KFENCE_POOL_SIZE));
+ pr_info("initialized - using %u bytes for %d objects at 0x%p-0x%p\n", __kfence_pool_size,
+ __kfence_num_objects, (void *)__kfence_pool,
+ (void *)(__kfence_pool + __kfence_pool_size));
}
void __init kfence_init(void)
@@ -953,41 +991,53 @@ void __init kfence_init(void)
static int kfence_init_late(void)
{
- const unsigned long nr_pages_pool = KFENCE_POOL_SIZE / PAGE_SIZE;
- const unsigned long nr_pages_meta = KFENCE_METADATA_SIZE / PAGE_SIZE;
+ unsigned long nr_pages_meta = KFENCE_METADATA_SIZE / PAGE_SIZE;
unsigned long addr = (unsigned long)__kfence_pool;
- unsigned long free_size = KFENCE_POOL_SIZE;
+ unsigned long free_size = __kfence_pool_size;
+ unsigned long nr_pages_covered, covered_size;
int err = -ENOMEM;
+ kfence_alloc_covered_order = ilog2(__kfence_num_objects) + 2;
+ kfence_alloc_covered_mask = (1 << kfence_alloc_covered_order) - 1;
+ covered_size = PAGE_ALIGN(KFENCE_COVERED_SIZE);
+ nr_pages_covered = (covered_size / PAGE_SIZE);
#ifdef CONFIG_CONTIG_ALLOC
struct page *pages;
- pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node,
+ pages = alloc_contig_pages(__kfence_pool_pages, GFP_KERNEL, first_online_node,
NULL);
if (!pages)
return -ENOMEM;
__kfence_pool = page_to_virt(pages);
+ pages = alloc_contig_pages(nr_pages_covered, GFP_KERNEL, first_online_node,
+ NULL);
+ if (!pages)
+ goto free_pool;
+ alloc_covered = page_to_virt(pages);
pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,
NULL);
if (pages)
kfence_metadata_init = page_to_virt(pages);
#else
- if (nr_pages_pool > MAX_ORDER_NR_PAGES ||
+ if (__kfence_pool_pages > MAX_ORDER_NR_PAGES ||
nr_pages_meta > MAX_ORDER_NR_PAGES) {
pr_warn("KFENCE_NUM_OBJECTS too large for buddy allocator\n");
return -EINVAL;
}
- __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);
+ __kfence_pool = alloc_pages_exact(__kfence_pool_size, GFP_KERNEL);
if (!__kfence_pool)
return -ENOMEM;
+ alloc_covered = alloc_pages_exact(covered_size, GFP_KERNEL);
+ if (!alloc_covered)
+ goto free_pool;
kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);
#endif
if (!kfence_metadata_init)
- goto free_pool;
+ goto free_cover;
memzero_explicit(kfence_metadata_init, KFENCE_METADATA_SIZE);
addr = kfence_init_pool();
@@ -998,22 +1048,28 @@ static int kfence_init_late(void)
}
pr_err("%s failed\n", __func__);
- free_size = KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool);
+ free_size = __kfence_pool_size - (addr - (unsigned long)__kfence_pool);
err = -EBUSY;
#ifdef CONFIG_CONTIG_ALLOC
free_contig_range(page_to_pfn(virt_to_page((void *)kfence_metadata_init)),
nr_pages_meta);
+free_cover:
+ free_contig_range(page_to_pfn(virt_to_page((void *)alloc_covered)),
+ nr_pages_covered);
free_pool:
free_contig_range(page_to_pfn(virt_to_page((void *)addr)),
free_size / PAGE_SIZE);
#else
free_pages_exact((void *)kfence_metadata_init, KFENCE_METADATA_SIZE);
+free_cover:
+ free_pages_exact((void *)alloc_covered, covered_size);
free_pool:
free_pages_exact((void *)addr, free_size);
#endif
kfence_metadata_init = NULL;
+ alloc_covered = NULL;
__kfence_pool = NULL;
return err;
}
@@ -1039,7 +1095,7 @@ void kfence_shutdown_cache(struct kmem_cache *s)
if (!smp_load_acquire(&kfence_metadata))
return;
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ for (i = 0; i < __kfence_num_objects; i++) {
bool in_use;
meta = &kfence_metadata[i];
@@ -1077,7 +1133,7 @@ void kfence_shutdown_cache(struct kmem_cache *s)
}
}
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ for (i = 0; i < __kfence_num_objects; i++) {
meta = &kfence_metadata[i];
/* See above. */
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index dfba5ea06b01..dc3abb27c632 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -104,7 +104,7 @@ struct kfence_metadata {
};
#define KFENCE_METADATA_SIZE PAGE_ALIGN(sizeof(struct kfence_metadata) * \
- CONFIG_KFENCE_NUM_OBJECTS)
+ __kfence_num_objects)
extern struct kfence_metadata *kfence_metadata;
@@ -123,7 +123,7 @@ static inline struct kfence_metadata *addr_to_metadata(unsigned long addr)
* error.
*/
index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1;
- if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS)
+ if (index < 0 || index >= __kfence_num_objects)
return NULL;
return &kfence_metadata[index];
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 00034e37bc9f..00a51aa4bad9 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -641,7 +641,7 @@ static void test_gfpzero(struct kunit *test)
break;
test_free(buf2);
- if (kthread_should_stop() || (i == CONFIG_KFENCE_NUM_OBJECTS)) {
+ if (kthread_should_stop() || (i == __kfence_num_objects)) {
kunit_warn(test, "giving up ... cannot get same object back\n");
return;
}
--
2.25.1
* Re: [PATCH v2 2/2] kfence: allow change number of object by early parameter
2025-12-18 6:39 ` [PATCH v2 2/2] kfence: allow change number of object by early parameter yuan linyu
@ 2025-12-18 8:56 ` Marco Elver
2025-12-18 10:18 ` yuanlinyu
2025-12-20 14:59 ` kernel test robot
1 sibling, 1 reply; 14+ messages in thread
From: Marco Elver @ 2025-12-18 8:56 UTC (permalink / raw)
To: yuan linyu
Cc: Alexander Potapenko, Dmitry Vyukov, Andrew Morton, Huacai Chen,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel
On Thu, Dec 18, 2025 at 02:39PM +0800, yuan linyu wrote:
> Currently, changing the KFENCE pool size is not easy: it requires
> recompiling the kernel.
>
> Add an early boot parameter, kfence.num_objects, to allow changing the
> number of KFENCE objects and growing the total pool for a higher chance
> of catching failures.
>
> Signed-off-by: yuan linyu <yuanlinyu@honor.com>
> ---
> include/linux/kfence.h | 5 +-
> mm/kfence/core.c | 122 +++++++++++++++++++++++++++++-----------
> mm/kfence/kfence.h | 4 +-
> mm/kfence/kfence_test.c | 2 +-
> 4 files changed, 96 insertions(+), 37 deletions(-)
>
> diff --git a/include/linux/kfence.h b/include/linux/kfence.h
> index 0ad1ddbb8b99..920bcd5649fa 100644
> --- a/include/linux/kfence.h
> +++ b/include/linux/kfence.h
> @@ -24,7 +24,10 @@ extern unsigned long kfence_sample_interval;
> * address to metadata indices; effectively, the very first page serves as an
> * extended guard page, but otherwise has no special purpose.
> */
> -#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
> +extern unsigned int __kfence_pool_size;
> +#define KFENCE_POOL_SIZE (__kfence_pool_size)
> +extern unsigned int __kfence_num_objects;
> +#define KFENCE_NUM_OBJECTS (__kfence_num_objects)
> extern char *__kfence_pool;
>
You have ignored the comment below in this file:
/**
* is_kfence_address() - check if an address belongs to KFENCE pool
* @addr: address to check
*
[...]
* Note: This function may be used in fast-paths, and is performance critical.
* Future changes should take this into account; for instance, we want to avoid
>> * introducing another load and therefore need to keep KFENCE_POOL_SIZE a
>> * constant (until immediate patching support is added to the kernel).
*/
static __always_inline bool is_kfence_address(const void *addr)
{
/*
* The __kfence_pool != NULL check is required to deal with the case
* where __kfence_pool == NULL && addr < KFENCE_POOL_SIZE. Keep it in
* the slow-path after the range-check!
*/
return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE && __kfence_pool);
}
While I think the change itself would be useful to have eventually, a
better design might be needed. It's unclear to me what the perf impact
is these days (a lot has changed since that comment was written). Could
you run some benchmarks to analyze if the fast path is affected by the
additional load (please do this for whichever arch you care about, but
also arm64 and x86)?
If performance is affected, all this could be guarded behind another
Kconfig option, but it's not great either.
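For a first pass, something along these lines could serve as a crude
fast-path benchmark (a completely untested sketch, not in-tree; the module
and function names are made up). The kfree() path performs an
is_kfence_address() check on every call, so timing a tight kmalloc()/kfree()
loop with and without this patch should show whether the extra load matters:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/timekeeping.h>

static int __init kfence_bench_init(void)
{
	u64 t0, t1;
	void *p;
	int i;

	/* Single-threaded, no warm-up; only meant to compare before/after. */
	t0 = ktime_get_ns();
	for (i = 0; i < 1000000; i++) {
		p = kmalloc(64, GFP_KERNEL);
		kfree(p);
	}
	t1 = ktime_get_ns();
	pr_info("kfence-bench: 1M kmalloc/kfree pairs: %llu ns\n", t1 - t0);
	return 0;
}
module_init(kfence_bench_init);

MODULE_LICENSE("GPL");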
> DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 577a1699c553..5d5cea59c7b6 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -132,6 +132,31 @@ struct kfence_metadata *kfence_metadata __read_mostly;
> */
> static struct kfence_metadata *kfence_metadata_init __read_mostly;
>
> +/* allow change number of objects from cmdline */
> +#define KFENCE_MIN_NUM_OBJECTS 1
> +#define KFENCE_MAX_NUM_OBJECTS 65535
> +unsigned int __kfence_num_objects __read_mostly = CONFIG_KFENCE_NUM_OBJECTS;
> +EXPORT_SYMBOL(__kfence_num_objects); /* Export for test modules. */
> +static unsigned int __kfence_pool_pages __read_mostly = (CONFIG_KFENCE_NUM_OBJECTS + 1) * 2;
> +unsigned int __kfence_pool_size __read_mostly = (CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE;
> +EXPORT_SYMBOL(__kfence_pool_size); /* Export for lkdtm module. */
> +
> +static int __init early_parse_kfence_num_objects(char *buf)
> +{
> + unsigned int num;
> + int ret = kstrtouint(buf, 10, &num);
> +
> + if (ret < 0)
> + return ret;
> +
> + __kfence_num_objects = clamp(num, KFENCE_MIN_NUM_OBJECTS, KFENCE_MAX_NUM_OBJECTS);
> + __kfence_pool_pages = (__kfence_num_objects + 1) * 2;
> + __kfence_pool_size = __kfence_pool_pages * PAGE_SIZE;
> +
> + return 0;
> +}
> +early_param("kfence.num_objects", early_parse_kfence_num_objects);
> +
> /* Freelist with available objects. */
> static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
> static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
> @@ -155,12 +180,13 @@ atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
> *
> * P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)) ^ HNUM
> */
> +static unsigned int kfence_alloc_covered_order __read_mostly;
> +static unsigned int kfence_alloc_covered_mask __read_mostly;
> +static atomic_t *alloc_covered __read_mostly;
> #define ALLOC_COVERED_HNUM 2
> -#define ALLOC_COVERED_ORDER (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2)
> -#define ALLOC_COVERED_SIZE (1 << ALLOC_COVERED_ORDER)
> -#define ALLOC_COVERED_HNEXT(h) hash_32(h, ALLOC_COVERED_ORDER)
> -#define ALLOC_COVERED_MASK (ALLOC_COVERED_SIZE - 1)
> -static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
> +#define ALLOC_COVERED_HNEXT(h) hash_32(h, kfence_alloc_covered_order)
> +#define ALLOC_COVERED_MASK (kfence_alloc_covered_mask)
> +#define KFENCE_COVERED_SIZE (sizeof(atomic_t) * (1 << kfence_alloc_covered_order))
>
> /* Stack depth used to determine uniqueness of an allocation. */
> #define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8)
> @@ -200,7 +226,7 @@ static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
>
> static inline bool should_skip_covered(void)
> {
> - unsigned long thresh = (CONFIG_KFENCE_NUM_OBJECTS * kfence_skip_covered_thresh) / 100;
> + unsigned long thresh = (__kfence_num_objects * kfence_skip_covered_thresh) / 100;
>
> return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh;
> }
> @@ -262,7 +288,7 @@ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *m
>
> /* Only call with a pointer into kfence_metadata. */
> if (KFENCE_WARN_ON(meta < kfence_metadata ||
> - meta >= kfence_metadata + CONFIG_KFENCE_NUM_OBJECTS))
> + meta >= kfence_metadata + __kfence_num_objects))
> return 0;
>
> /*
> @@ -612,7 +638,7 @@ static unsigned long kfence_init_pool(void)
> * fast-path in SLUB, and therefore need to ensure kfree() correctly
> * enters __slab_free() slow-path.
> */
> - for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> + for (i = 0; i < __kfence_pool_pages; i++) {
> struct page *page;
>
> if (!i || (i % 2))
> @@ -640,7 +666,7 @@ static unsigned long kfence_init_pool(void)
> addr += PAGE_SIZE;
> }
>
> - for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> + for (i = 0; i < __kfence_num_objects; i++) {
> struct kfence_metadata *meta = &kfence_metadata_init[i];
>
> /* Initialize metadata. */
> @@ -666,7 +692,7 @@ static unsigned long kfence_init_pool(void)
> return 0;
>
> reset_slab:
> - for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> + for (i = 0; i < __kfence_pool_pages; i++) {
> struct page *page;
>
> if (!i || (i % 2))
> @@ -710,7 +736,7 @@ static bool __init kfence_init_pool_early(void)
> * fails for the first page, and therefore expect addr==__kfence_pool in
> * most failure cases.
> */
> - memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
> + memblock_free_late(__pa(addr), __kfence_pool_size - (addr - (unsigned long)__kfence_pool));
> __kfence_pool = NULL;
>
> memblock_free_late(__pa(kfence_metadata_init), KFENCE_METADATA_SIZE);
> @@ -740,7 +766,7 @@ DEFINE_SHOW_ATTRIBUTE(stats);
> */
> static void *start_object(struct seq_file *seq, loff_t *pos)
> {
> - if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
> + if (*pos < __kfence_num_objects)
> return (void *)((long)*pos + 1);
> return NULL;
> }
> @@ -752,7 +778,7 @@ static void stop_object(struct seq_file *seq, void *v)
> static void *next_object(struct seq_file *seq, void *v, loff_t *pos)
> {
> ++*pos;
> - if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
> + if (*pos < __kfence_num_objects)
> return (void *)((long)*pos + 1);
> return NULL;
> }
> @@ -799,7 +825,7 @@ static void kfence_check_all_canary(void)
> {
> int i;
>
> - for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> + for (i = 0; i < __kfence_num_objects; i++) {
> struct kfence_metadata *meta = &kfence_metadata[i];
>
> if (kfence_obj_allocated(meta))
> @@ -894,7 +920,7 @@ void __init kfence_alloc_pool_and_metadata(void)
> * re-allocate the memory pool.
> */
> if (!__kfence_pool)
> - __kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
> + __kfence_pool = memblock_alloc(__kfence_pool_size, PAGE_SIZE);
>
> if (!__kfence_pool) {
> pr_err("failed to allocate pool\n");
> @@ -903,11 +929,23 @@ void __init kfence_alloc_pool_and_metadata(void)
>
> /* The memory allocated by memblock has been zeroed out. */
> kfence_metadata_init = memblock_alloc(KFENCE_METADATA_SIZE, PAGE_SIZE);
> - if (!kfence_metadata_init) {
> - pr_err("failed to allocate metadata\n");
> - memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
> - __kfence_pool = NULL;
> - }
> + if (!kfence_metadata_init)
> + goto fail_pool;
> +
> + kfence_alloc_covered_order = ilog2(__kfence_num_objects) + 2;
> + kfence_alloc_covered_mask = (1 << kfence_alloc_covered_order) - 1;
> + alloc_covered = memblock_alloc(KFENCE_COVERED_SIZE, PAGE_SIZE);
> + if (alloc_covered)
> + return;
> +
> + pr_err("failed to allocate covered\n");
> + memblock_free(kfence_metadata_init, KFENCE_METADATA_SIZE);
> + kfence_metadata_init = NULL;
> +
> +fail_pool:
> + pr_err("failed to allocate metadata\n");
> + memblock_free(__kfence_pool, __kfence_pool_size);
> + __kfence_pool = NULL;
> }
>
> static void kfence_init_enable(void)
> @@ -930,9 +968,9 @@ static void kfence_init_enable(void)
> WRITE_ONCE(kfence_enabled, true);
> queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
>
> - pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
> - CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
> - (void *)(__kfence_pool + KFENCE_POOL_SIZE));
> + pr_info("initialized - using %u bytes for %d objects at 0x%p-0x%p\n", __kfence_pool_size,
> + __kfence_num_objects, (void *)__kfence_pool,
> + (void *)(__kfence_pool + __kfence_pool_size));
> }
>
> void __init kfence_init(void)
> @@ -953,41 +991,53 @@ void __init kfence_init(void)
>
> static int kfence_init_late(void)
> {
> - const unsigned long nr_pages_pool = KFENCE_POOL_SIZE / PAGE_SIZE;
> - const unsigned long nr_pages_meta = KFENCE_METADATA_SIZE / PAGE_SIZE;
> + unsigned long nr_pages_meta = KFENCE_METADATA_SIZE / PAGE_SIZE;
> unsigned long addr = (unsigned long)__kfence_pool;
> - unsigned long free_size = KFENCE_POOL_SIZE;
> + unsigned long free_size = __kfence_pool_size;
> + unsigned long nr_pages_covered, covered_size;
> int err = -ENOMEM;
>
> + kfence_alloc_covered_order = ilog2(__kfence_num_objects) + 2;
> + kfence_alloc_covered_mask = (1 << kfence_alloc_covered_order) - 1;
> + covered_size = PAGE_ALIGN(KFENCE_COVERED_SIZE);
> + nr_pages_covered = (covered_size / PAGE_SIZE);
> #ifdef CONFIG_CONTIG_ALLOC
> struct page *pages;
>
> - pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node,
> + pages = alloc_contig_pages(__kfence_pool_pages, GFP_KERNEL, first_online_node,
> NULL);
> if (!pages)
> return -ENOMEM;
>
> __kfence_pool = page_to_virt(pages);
> + pages = alloc_contig_pages(nr_pages_covered, GFP_KERNEL, first_online_node,
> + NULL);
> + if (!pages)
> + goto free_pool;
> + alloc_covered = page_to_virt(pages);
> pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,
> NULL);
> if (pages)
> kfence_metadata_init = page_to_virt(pages);
> #else
> - if (nr_pages_pool > MAX_ORDER_NR_PAGES ||
> + if (__kfence_pool_pages > MAX_ORDER_NR_PAGES ||
> nr_pages_meta > MAX_ORDER_NR_PAGES) {
> pr_warn("KFENCE_NUM_OBJECTS too large for buddy allocator\n");
> return -EINVAL;
> }
>
> - __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);
> + __kfence_pool = alloc_pages_exact(__kfence_pool_size, GFP_KERNEL);
> if (!__kfence_pool)
> return -ENOMEM;
>
> + alloc_covered = alloc_pages_exact(covered_size, GFP_KERNEL);
> + if (!alloc_covered)
> + goto free_pool;
> kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);
> #endif
>
> if (!kfence_metadata_init)
> - goto free_pool;
> + goto free_cover;
>
> memzero_explicit(kfence_metadata_init, KFENCE_METADATA_SIZE);
> addr = kfence_init_pool();
> @@ -998,22 +1048,28 @@ static int kfence_init_late(void)
> }
>
> pr_err("%s failed\n", __func__);
> - free_size = KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool);
> + free_size = __kfence_pool_size - (addr - (unsigned long)__kfence_pool);
> err = -EBUSY;
>
> #ifdef CONFIG_CONTIG_ALLOC
> free_contig_range(page_to_pfn(virt_to_page((void *)kfence_metadata_init)),
> nr_pages_meta);
> +free_cover:
> + free_contig_range(page_to_pfn(virt_to_page((void *)alloc_covered)),
> + nr_pages_covered);
> free_pool:
> free_contig_range(page_to_pfn(virt_to_page((void *)addr)),
> free_size / PAGE_SIZE);
> #else
> free_pages_exact((void *)kfence_metadata_init, KFENCE_METADATA_SIZE);
> +free_cover:
> + free_pages_exact((void *)alloc_covered, covered_size);
> free_pool:
> free_pages_exact((void *)addr, free_size);
> #endif
>
> kfence_metadata_init = NULL;
> + alloc_covered = NULL;
> __kfence_pool = NULL;
> return err;
> }
> @@ -1039,7 +1095,7 @@ void kfence_shutdown_cache(struct kmem_cache *s)
> if (!smp_load_acquire(&kfence_metadata))
> return;
>
> - for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> + for (i = 0; i < __kfence_num_objects; i++) {
> bool in_use;
>
> meta = &kfence_metadata[i];
> @@ -1077,7 +1133,7 @@ void kfence_shutdown_cache(struct kmem_cache *s)
> }
> }
>
> - for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> + for (i = 0; i < __kfence_num_objects; i++) {
> meta = &kfence_metadata[i];
>
> /* See above. */
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index dfba5ea06b01..dc3abb27c632 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -104,7 +104,7 @@ struct kfence_metadata {
> };
>
> #define KFENCE_METADATA_SIZE PAGE_ALIGN(sizeof(struct kfence_metadata) * \
> - CONFIG_KFENCE_NUM_OBJECTS)
> + __kfence_num_objects)
>
> extern struct kfence_metadata *kfence_metadata;
>
> @@ -123,7 +123,7 @@ static inline struct kfence_metadata *addr_to_metadata(unsigned long addr)
> * error.
> */
> index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1;
> - if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS)
> + if (index < 0 || index >= __kfence_num_objects)
> return NULL;
>
> return &kfence_metadata[index];
> diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
> index 00034e37bc9f..00a51aa4bad9 100644
> --- a/mm/kfence/kfence_test.c
> +++ b/mm/kfence/kfence_test.c
> @@ -641,7 +641,7 @@ static void test_gfpzero(struct kunit *test)
> break;
> test_free(buf2);
>
> - if (kthread_should_stop() || (i == CONFIG_KFENCE_NUM_OBJECTS)) {
> + if (kthread_should_stop() || (i == __kfence_num_objects)) {
> kunit_warn(test, "giving up ... cannot get same object back\n");
> return;
> }
> --
> 2.25.1
* RE: [PATCH v2 2/2] kfence: allow change number of object by early parameter
2025-12-18 8:56 ` Marco Elver
@ 2025-12-18 10:18 ` yuanlinyu
2025-12-18 10:23 ` Marco Elver
0 siblings, 1 reply; 14+ messages in thread
From: yuanlinyu @ 2025-12-18 10:18 UTC (permalink / raw)
To: Marco Elver
Cc: Alexander Potapenko, Dmitry Vyukov, Andrew Morton, Huacai Chen,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel
> From: Marco Elver <elver@google.com>
> Sent: Thursday, December 18, 2025 4:57 PM
> To: yuanlinyu <yuanlinyu@honor.com>
> Cc: Alexander Potapenko <glider@google.com>; Dmitry Vyukov
> <dvyukov@google.com>; Andrew Morton <akpm@linux-foundation.org>;
> Huacai Chen <chenhuacai@kernel.org>; WANG Xuerui <kernel@xen0n.name>;
> kasan-dev@googlegroups.com; linux-mm@kvack.org; loongarch@lists.linux.dev;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 2/2] kfence: allow change number of object by early
> parameter
>
> On Thu, Dec 18, 2025 at 02:39PM +0800, yuan linyu wrote:
> > Currently, changing the KFENCE pool size is not easy: it requires
> > recompiling the kernel.
> >
> > Add an early boot parameter, kfence.num_objects, to allow changing the
> > number of KFENCE objects and growing the total pool for a higher chance
> > of catching failures.
> >
> > Signed-off-by: yuan linyu <yuanlinyu@honor.com>
> > ---
> > include/linux/kfence.h | 5 +-
> > mm/kfence/core.c | 122
> +++++++++++++++++++++++++++++-----------
> > mm/kfence/kfence.h | 4 +-
> > mm/kfence/kfence_test.c | 2 +-
> > 4 files changed, 96 insertions(+), 37 deletions(-)
> >
> > diff --git a/include/linux/kfence.h b/include/linux/kfence.h
> > index 0ad1ddbb8b99..920bcd5649fa 100644
> > --- a/include/linux/kfence.h
> > +++ b/include/linux/kfence.h
> > @@ -24,7 +24,10 @@ extern unsigned long kfence_sample_interval;
> > * address to metadata indices; effectively, the very first page serves as an
> > * extended guard page, but otherwise has no special purpose.
> > */
> > -#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
> > +extern unsigned int __kfence_pool_size;
> > +#define KFENCE_POOL_SIZE (__kfence_pool_size)
> > +extern unsigned int __kfence_num_objects;
> > +#define KFENCE_NUM_OBJECTS (__kfence_num_objects)
> > extern char *__kfence_pool;
> >
>
> You have ignored the comment below in this file:
>
> /**
> * is_kfence_address() - check if an address belongs to KFENCE pool
> * @addr: address to check
> *
> [...]
> * Note: This function may be used in fast-paths, and is performance critical.
> * Future changes should take this into account; for instance, we want to avoid
> >> * introducing another load and therefore need to keep KFENCE_POOL_SIZE a
> >> * constant (until immediate patching support is added to the kernel).
> */
> static __always_inline bool is_kfence_address(const void *addr)
> {
> /*
> * The __kfence_pool != NULL check is required to deal with the case
> * where __kfence_pool == NULL && addr < KFENCE_POOL_SIZE. Keep it in
> * the slow-path after the range-check!
> */
> return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE && __kfence_pool);
> }
Do you mean it is performance critical because it accesses global data?
It already accesses the __kfence_pool global.
Is adding one more global acceptable here?
Other places access global data as well, don't they?
I don't know whether Linux distributions such as Ubuntu enable KFENCE.
I only know it is enabled by default on Android devices.
>
> While I think the change itself would be useful to have eventually, a
> better design might be needed. It's unclear to me what the perf impact
Could you share your idea for a better design?
> is these days (a lot has changed since that comment was written). Could
> you run some benchmarks to analyze if the fast path is affected by the
> additional load (please do this for whichever arch you care about, but
> also arm64 and x86)?
>
> If performance is affected, all this could be guarded behind another
> Kconfig option, but it's not great either.
What kind of option?
There is already a Kconfig option that defines the number of objects; this
patch just provides a boot parameter so the user can change the same setting.
>
> > --
> > 2.25.1
* Re: [PATCH v2 2/2] kfence: allow change number of object by early parameter
2025-12-18 10:18 ` yuanlinyu
@ 2025-12-18 10:23 ` Marco Elver
2025-12-19 4:36 ` yuanlinyu
2025-12-29 4:01 ` yuanlinyu
0 siblings, 2 replies; 14+ messages in thread
From: Marco Elver @ 2025-12-18 10:23 UTC (permalink / raw)
To: yuanlinyu
Cc: Alexander Potapenko, Dmitry Vyukov, Andrew Morton, Huacai Chen,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel
On Thu, 18 Dec 2025 at 11:18, yuanlinyu <yuanlinyu@honor.com> wrote:
>
> > From: Marco Elver <elver@google.com>
> > Sent: Thursday, December 18, 2025 4:57 PM
> > To: yuanlinyu <yuanlinyu@honor.com>
> > Cc: Alexander Potapenko <glider@google.com>; Dmitry Vyukov
> > <dvyukov@google.com>; Andrew Morton <akpm@linux-foundation.org>;
> > Huacai Chen <chenhuacai@kernel.org>; WANG Xuerui <kernel@xen0n.name>;
> > kasan-dev@googlegroups.com; linux-mm@kvack.org; loongarch@lists.linux.dev;
> > linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v2 2/2] kfence: allow change number of object by early
> > parameter
> >
> > On Thu, Dec 18, 2025 at 02:39PM +0800, yuan linyu wrote:
> > > Currently, changing the KFENCE pool size is not easy: it requires
> > > recompiling the kernel.
> > >
> > > Add an early boot parameter, kfence.num_objects, to allow changing the
> > > number of KFENCE objects and growing the total pool for a higher chance
> > > of catching failures.
> > >
> > > Signed-off-by: yuan linyu <yuanlinyu@honor.com>
> > > ---
> > > include/linux/kfence.h | 5 +-
> > > mm/kfence/core.c | 122
> > +++++++++++++++++++++++++++++-----------
> > > mm/kfence/kfence.h | 4 +-
> > > mm/kfence/kfence_test.c | 2 +-
> > > 4 files changed, 96 insertions(+), 37 deletions(-)
> > >
> > > diff --git a/include/linux/kfence.h b/include/linux/kfence.h
> > > index 0ad1ddbb8b99..920bcd5649fa 100644
> > > --- a/include/linux/kfence.h
> > > +++ b/include/linux/kfence.h
> > > @@ -24,7 +24,10 @@ extern unsigned long kfence_sample_interval;
> > > * address to metadata indices; effectively, the very first page serves as an
> > > * extended guard page, but otherwise has no special purpose.
> > > */
> > > -#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
> > > +extern unsigned int __kfence_pool_size;
> > > +#define KFENCE_POOL_SIZE (__kfence_pool_size)
> > > +extern unsigned int __kfence_num_objects;
> > > +#define KFENCE_NUM_OBJECTS (__kfence_num_objects)
> > > extern char *__kfence_pool;
> > >
> >
> > You have ignored the comment below in this file:
> >
> > /**
> > * is_kfence_address() - check if an address belongs to KFENCE pool
> > * @addr: address to check
> > *
> > [...]
> > * Note: This function may be used in fast-paths, and is performance critical.
> > * Future changes should take this into account; for instance, we want to avoid
> > >> * introducing another load and therefore need to keep KFENCE_POOL_SIZE a
> > >> * constant (until immediate patching support is added to the kernel).
> > */
> > static __always_inline bool is_kfence_address(const void *addr)
> > {
> > /*
> > * The __kfence_pool != NULL check is required to deal with the case
> > * where __kfence_pool == NULL && addr < KFENCE_POOL_SIZE. Keep it in
> > * the slow-path after the range-check!
> > */
> > return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE && __kfence_pool);
> > }
>
> Do you mean it is performance critical because it accesses global data?
> It already accesses the __kfence_pool global.
> Is adding one more global acceptable here?
>
> Other places access global data as well, don't they?
is_kfence_address() is used in the slub fast path, and another load is
one more instruction in the fast path. We have avoided this thus far
for this reason.
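For illustration, here is a standalone userspace sketch (invented names;
pool_base and pool_size stand in for __kfence_pool and __kfence_pool_size)
that shows where the extra load comes from:

#include <stdbool.h>

/* Compile-time size, as with today's KFENCE_POOL_SIZE. */
#define POOL_CONST ((255 + 1) * 2 * 4096UL)

extern char *pool_base;
extern unsigned int pool_size;

bool check_const(const void *addr)
{
	/* One memory load: pool_base. The size is an immediate operand. */
	return (unsigned long)((const char *)addr - pool_base) < POOL_CONST && pool_base;
}

bool check_variable(const void *addr)
{
	/* Two memory loads: pool_base and pool_size. */
	return (unsigned long)((const char *)addr - pool_base) < pool_size && pool_base;
}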
> I don't know whether Linux distributions such as Ubuntu enable KFENCE.
> I only know it is enabled by default on Android devices.
This is irrelevant.
> > While I think the change itself would be useful to have eventually, a
> > better design might be needed. It's unclear to me what the perf impact
>
> Could you share your idea for a better design?
Hot-patchable constants, similar to static branches/jump labels. This
had been discussed in the past (can't find the link now), but it's not
trivial to implement unfortunately.
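For reference, the existing jump-label API patches a branch rather than
loading a variable; a rough sketch of that pattern (slow_path() is a
placeholder, and this is only the analogy, not the hot-patchable-constant
mechanism itself):

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(example_key);

extern void slow_path(void);

void example(void)
{
	/* Compiled as a NOP until static_branch_enable(&example_key)
	 * rewrites the instruction at runtime; no load, no test. */
	if (static_branch_unlikely(&example_key))
		slow_path();
}

A hot-patchable constant would do the same for the pool size: patch the
final value into the compare instruction at boot, keeping
is_kfence_address() free of an extra load.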
> > is these days (a lot has changed since that comment was written). Could
> > you run some benchmarks to analyze if the fast path is affected by the
> > additional load (please do this for whichever arch you care about, but
> > also arm64 and x86)?
> >
> > If performance is affected, all this could be guarded behind another
> > Kconfig option, but it's not great either.
>
> What kind of option?
> There is already a Kconfig option that defines the number of objects; this
> patch just provides a boot parameter so the user can change the same setting.
An option that would enable/disable the command-line changeable number
of objects, i.e. one version that avoids the load in the fast path and
one version that enables all the bits that you added here. But I'd
rather avoid this if possible.
As such, please do benchmark and analyze the generated code in the
allocator fast path (you should see a load to the new global you
added). llvm-mca [1] might help you with analysis.
[1] https://llvm.org/docs/CommandGuide/llvm-mca.html
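For example (the invocation is illustrative; check.c would contain the
constant-size and variable-size range-check variants side by side):

  clang -O2 -S -o - check.c | llvm-mca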
* Re: [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
2025-12-18 6:39 ` [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS yuan linyu
@ 2025-12-19 2:13 ` Huacai Chen
2025-12-20 5:43 ` Enze Li
2025-12-20 14:34 ` kernel test robot
1 sibling, 1 reply; 14+ messages in thread
From: Huacai Chen @ 2025-12-19 2:13 UTC (permalink / raw)
To: yuan linyu, Enze Li
Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel
Hi, Enze,
On Thu, Dec 18, 2025 at 2:39 PM yuan linyu <yuanlinyu@honor.com> wrote:
>
> Use the common KFENCE macro KFENCE_POOL_SIZE in the KFENCE_AREA_SIZE
> definition. The value is unchanged, since
> ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 + 2) * PAGE_SIZE is equal to
> KFENCE_POOL_SIZE + 2 * PAGE_SIZE.
>
> Signed-off-by: yuan linyu <yuanlinyu@honor.com>
> ---
> arch/loongarch/include/asm/pgtable.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
> index f41a648a3d9e..e9966c9f844f 100644
> --- a/arch/loongarch/include/asm/pgtable.h
> +++ b/arch/loongarch/include/asm/pgtable.h
> @@ -10,6 +10,7 @@
> #define _ASM_PGTABLE_H
>
> #include <linux/compiler.h>
> +#include <linux/kfence.h>
> #include <asm/addrspace.h>
> #include <asm/asm.h>
> #include <asm/page.h>
> @@ -96,7 +97,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
> #define MODULES_END (MODULES_VADDR + SZ_256M)
>
> #ifdef CONFIG_KFENCE
> -#define KFENCE_AREA_SIZE (((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 + 2) * PAGE_SIZE)
> +#define KFENCE_AREA_SIZE (KFENCE_POOL_SIZE + (2 * PAGE_SIZE))
Can you remember why you didn't use KFENCE_POOL_SIZE in the first place?
Huacai
> #else
> #define KFENCE_AREA_SIZE 0
> #endif
> --
> 2.25.1
>
>
* RE: [PATCH v2 2/2] kfence: allow change number of object by early parameter
2025-12-18 10:23 ` Marco Elver
@ 2025-12-19 4:36 ` yuanlinyu
2025-12-29 4:01 ` yuanlinyu
1 sibling, 0 replies; 14+ messages in thread
From: yuanlinyu @ 2025-12-19 4:36 UTC (permalink / raw)
To: Marco Elver
Cc: Alexander Potapenko, Dmitry Vyukov, Andrew Morton, Huacai Chen,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel
> From: Marco Elver <elver@google.com>
> Sent: Thursday, December 18, 2025 6:24 PM
> To: yuanlinyu <yuanlinyu@honor.com>
> Cc: Alexander Potapenko <glider@google.com>; Dmitry Vyukov
> <dvyukov@google.com>; Andrew Morton <akpm@linux-foundation.org>;
> Huacai Chen <chenhuacai@kernel.org>; WANG Xuerui <kernel@xen0n.name>;
> kasan-dev@googlegroups.com; linux-mm@kvack.org; loongarch@lists.linux.dev;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 2/2] kfence: allow change number of object by early
> parameter
>
> On Thu, 18 Dec 2025 at 11:18, yuanlinyu <yuanlinyu@honor.com> wrote:
> >
> > > From: Marco Elver <elver@google.com>
> > Do you mean performance critical by access global data ?
> > It already access __kfence_pool global data.
> > Add one more global data acceptable here ?
> >
> > Other place may access global data indeed ?
>
> is_kfence_address() is used in the slub fast path, and another load is
> one more instruction in the fast path. We have avoided this thus far
> for this reason.
>
> > I don't know if all linux release like ubuntu enable kfence or not.
> > I only know it turn on default on android device.
>
> This is irrelevant.
>
> > > While I think the change itself would be useful to have eventually, a
> > > better design might be needed. It's unclear to me what the perf impact
> >
> > Could you share the better design idea ?
>
> Hot-patchable constants, similar to static branches/jump labels. This
> had been discussed in the past (can't find the link now), but it's not
> trivial to implement unfortunately.
Is it possible to add a tag to KFENCE addresses and check only the address itself?
>
> An option that would enable/disable the command-line changeable number
> of objects, i.e one version that avoids the load in the fast path and
> one version that enables all the bits that you added here. But I'd
> rather avoid this if possible.
Yes, that should be avoided; the whole purpose is to change this without recompiling the kernel.
>
> As such, please do benchmark and analyze the generated code in the
> allocator fast path (you should see a load to the new global you
> added). llvm-mca [1] might help you with analysis.
>
> [1] https://llvm.org/docs/CommandGuide/llvm-mca.html
Thanks, I will look into it.
* Re: [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
2025-12-19 2:13 ` Huacai Chen
@ 2025-12-20 5:43 ` Enze Li
2025-12-22 9:16 ` yuanlinyu
0 siblings, 1 reply; 14+ messages in thread
From: Enze Li @ 2025-12-20 5:43 UTC (permalink / raw)
To: Huacai Chen, yuan linyu
Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel,
enze.li
On 2025/12/19 10:13, Huacai Chen wrote:
> Hi, Enze,
>
> On Thu, Dec 18, 2025 at 2:39 PM yuan linyu <yuanlinyu@honor.com> wrote:
>>
>> Use the common KFENCE macro KFENCE_POOL_SIZE in the KFENCE_AREA_SIZE
>> definition. The value is unchanged, since
>> ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 + 2) * PAGE_SIZE is equal to
>> KFENCE_POOL_SIZE + 2 * PAGE_SIZE.
>>
>> Signed-off-by: yuan linyu <yuanlinyu@honor.com>
>> ---
>> arch/loongarch/include/asm/pgtable.h | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
>> index f41a648a3d9e..e9966c9f844f 100644
>> --- a/arch/loongarch/include/asm/pgtable.h
>> +++ b/arch/loongarch/include/asm/pgtable.h
>> @@ -10,6 +10,7 @@
>> #define _ASM_PGTABLE_H
>>
>> #include <linux/compiler.h>
>> +#include <linux/kfence.h>
>> #include <asm/addrspace.h>
>> #include <asm/asm.h>
>> #include <asm/page.h>
>> @@ -96,7 +97,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>> #define MODULES_END (MODULES_VADDR + SZ_256M)
>>
>> #ifdef CONFIG_KFENCE
>> -#define KFENCE_AREA_SIZE (((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 + 2) * PAGE_SIZE)
>> +#define KFENCE_AREA_SIZE (KFENCE_POOL_SIZE + (2 * PAGE_SIZE))
> Can you remember why you didn't use KFENCE_POOL_SIZE at the first place?
I don't recall the exact reason off the top of my head, but I believe it
was due to complex dependency issues with the header files where
KFENCE_POOL_SIZE is defined. To avoid those complications, we likely
opted to use KFENCE_NUM_OBJECTS directly.
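For what it's worth, the include chain in the traces below is roughly:

  linux/mm.h
    -> linux/pgtable.h
      -> asm/pgtable.h
        -> linux/kfence.h   (the include added by this patch)
          -> linux/mm.h     (the header guard stops the recursion here)

so the declarations in linux/kfence.h are parsed before struct kmem_cache
and the remaining page-table types are visible.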
I checked out the code at commit
(6ad3df56bb199134800933df2afcd7df3b03ef33 "LoongArch: Add KFENCE (Kernel
Electric-Fence) support") and encountered the following errors when
compiling with this patch applied.
8<------------------------------------------------------
CC arch/loongarch/kernel/asm-offsets.s
In file included from ./arch/loongarch/include/asm/pgtable.h:13,
from ./include/linux/pgtable.h:6,
from ./include/linux/mm.h:29,
from arch/loongarch/kernel/asm-offsets.c:9:
./include/linux/kfence.h:93:35: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
93 | void kfence_shutdown_cache(struct kmem_cache *s);
| ^~~~~~~~~~
./include/linux/kfence.h:99:29: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
99 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
| ^~~~~~~~~~
./include/linux/kfence.h:117:50: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
117 | static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
| ^~~~~~~~~~
./include/linux/kfence.h: In function 'kfence_alloc':
./include/linux/kfence.h:128:31: error: passing argument 1 of '__kfence_alloc' from incompatible pointer type [-Wincompatible-pointer-types]
128 | return __kfence_alloc(s, size, flags);
| ^
| |
| struct kmem_cache *
./include/linux/kfence.h:99:41: note: expected 'struct kmem_cache *' but argument is of type 'struct kmem_cache *'
99 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
| ~~~~~~~~~~~~~~~~~~~^
------------------------------------------------------>8
Similarly, after applying this patch to the latest code
(dd9b004b7ff3289fb7bae35130c0a5c0537266af "Merge tag 'trace-v6.19-rc1'")
from the master branch of the Linux repository and enabling KFENCE, I
encountered the following compilation errors.
8<------------------------------------------------------
CC arch/loongarch/kernel/asm-offsets.s
In file included from ./arch/loongarch/include/asm/pgtable.h:13,
from ./include/linux/pgtable.h:6,
from ./include/linux/mm.h:31,
from arch/loongarch/kernel/asm-offsets.c:11:
./include/linux/kfence.h:97:35: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
97 | void kfence_shutdown_cache(struct kmem_cache *s);
| ^~~~~~~~~~
./include/linux/kfence.h:103:29: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
103 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
| ^~~~~~~~~~
./include/linux/kfence.h:121:50: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
121 | static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
| ^~~~~~~~~~
./include/linux/kfence.h: In function 'kfence_alloc':
./include/linux/kfence.h:132:31: error: passing argument 1 of '__kfence_alloc' from incompatible pointer type [-Wincompatible-pointer-types]
132 | return __kfence_alloc(s, size, flags);
| ^
| |
| struct kmem_cache *
./include/linux/kfence.h:103:41: note: expected 'struct kmem_cache *' but argument is of type 'struct kmem_cache *'
103 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
| ~~~~~~~~~~~~~~~~~~~^
------------------------------------------------------>8
So, this patch currently runs into compilation issues. linyu probably
didn't have KFENCE enabled when compiling locally, which is why this
error was missed. You can enable it as follows:
Kernel hacking  --->
  Memory Debugging  --->
    [*] KFENCE: low-overhead sampling-based memory safety
Thanks,
Enze
<...>
* Re: [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
2025-12-18 6:39 ` [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS yuan linyu
2025-12-19 2:13 ` Huacai Chen
@ 2025-12-20 14:34 ` kernel test robot
1 sibling, 0 replies; 14+ messages in thread
From: kernel test robot @ 2025-12-20 14:34 UTC (permalink / raw)
To: yuan linyu, Alexander Potapenko, Marco Elver, Dmitry Vyukov,
Andrew Morton, Huacai Chen, WANG Xuerui, kasan-dev, loongarch
Cc: oe-kbuild-all, Linux Memory Management List, linux-kernel, yuan linyu
Hi yuan,
kernel test robot noticed the following build errors:
[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on drm-misc/drm-misc-next linus/master v6.19-rc1 next-20251219]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/yuan-linyu/LoongArch-kfence-avoid-use-CONFIG_KFENCE_NUM_OBJECTS/20251218-144322
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20251218063916.1433615-2-yuanlinyu%40honor.com
patch subject: [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
config: loongarch-randconfig-002-20251220 (https://download.01.org/0day-ci/archive/20251220/202512202213.B6MRZ7tt-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 15.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251220/202512202213.B6MRZ7tt-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512202213.B6MRZ7tt-lkp@intel.com/
All error/warnings (new ones prefixed by >>):
In file included from arch/loongarch/include/asm/pgtable.h:13,
from include/linux/pgtable.h:6,
from include/linux/mm.h:31,
from arch/loongarch/kernel/asm-offsets.c:11:
>> include/linux/kfence.h:231:49: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
231 | static inline void kfence_shutdown_cache(struct kmem_cache *s) { }
| ^~~~~~~~~~
include/linux/kfence.h:232:41: warning: 'struct kmem_cache' declared inside parameter list will not be visible outside of this definition or declaration
232 | static inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags) { return NULL; }
| ^~~~~~~~~~
>> include/linux/kfence.h:245:86: warning: 'struct slab' declared inside parameter list will not be visible outside of this definition or declaration
245 | static inline bool __kfence_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
| ^~~~
In file included from include/linux/pgtable.h:17,
from include/linux/mm.h:31,
from include/linux/kfence.h:12,
from arch/loongarch/include/asm/pgtable.h:13,
from arch/loongarch/include/asm/uaccess.h:17,
from include/linux/uaccess.h:13,
from include/linux/sched/task.h:13,
from include/linux/sched/signal.h:9,
from kernel/sched/sched.h:17,
from kernel/sched/rq-offsets.c:5:
include/asm-generic/pgtable_uffd.h:27:40: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
27 | static __always_inline int pmd_uffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:37:24: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
37 | static __always_inline pmd_t pmd_mkuffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:37:44: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
37 | static __always_inline pmd_t pmd_mkuffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:47:24: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
47 | static __always_inline pmd_t pmd_clear_uffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:47:48: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
47 | static __always_inline pmd_t pmd_clear_uffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:67:15: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
67 | static inline pmd_t pmd_swp_mkuffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:67:39: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
67 | static inline pmd_t pmd_swp_mkuffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:72:35: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
72 | static inline int pmd_swp_uffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:77:15: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
77 | static inline pmd_t pmd_swp_clear_uffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
include/asm-generic/pgtable_uffd.h:77:43: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
77 | static inline pmd_t pmd_swp_clear_uffd_wp(pmd_t pmd)
| ^~~~~
| pgd_t
In file included from include/linux/pgtable.h:18:
include/linux/page_table_check.h:121:69: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
121 | static inline void page_table_check_pmd_clear(struct mm_struct *mm, pmd_t pmd)
| ^~~~~
| pgd_t
include/linux/page_table_check.h:125:69: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
125 | static inline void page_table_check_pud_clear(struct mm_struct *mm, pud_t pud)
| ^~~~~
| pgd_t
include/linux/page_table_check.h:135:17: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
135 | pmd_t *pmdp, pmd_t pmd, unsigned int nr)
| ^~~~~
| pgd_t
include/linux/page_table_check.h:135:30: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
135 | pmd_t *pmdp, pmd_t pmd, unsigned int nr)
| ^~~~~
| pgd_t
include/linux/page_table_check.h:140:17: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
140 | pud_t *pudp, pud_t pud, unsigned int nr)
| ^~~~~
| pgd_t
include/linux/page_table_check.h:140:30: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
140 | pud_t *pudp, pud_t pud, unsigned int nr)
| ^~~~~
| pgd_t
include/linux/page_table_check.h:146:53: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
146 | pmd_t pmd)
| ^~~~~
| pgd_t
>> include/linux/pgtable.h:22:2: error: #error CONFIG_PGTABLE_LEVELS is not consistent with __PAGETABLE_{P4D,PUD,PMD}_FOLDED
22 | #error CONFIG_PGTABLE_LEVELS is not consistent with __PAGETABLE_{P4D,PUD,PMD}_FOLDED
| ^~~~~
include/linux/pgtable.h: In function 'pte_index':
>> include/linux/pgtable.h:69:43: error: 'PTRS_PER_PTE' undeclared (first use in this function)
69 | return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
| ^~~~~~~~~~~~
include/linux/pgtable.h:69:43: note: each undeclared identifier is reported only once for each function it appears in
include/linux/pgtable.h: In function 'pmd_index':
>> include/linux/pgtable.h:75:28: error: 'PMD_SHIFT' undeclared (first use in this function); did you mean 'NMI_SHIFT'?
75 | return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
| ^~~~~~~~~
| NMI_SHIFT
>> include/linux/pgtable.h:75:42: error: 'PTRS_PER_PMD' undeclared (first use in this function)
75 | return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
| ^~~~~~~~~~~~
include/linux/pgtable.h: In function 'pud_index':
>> include/linux/pgtable.h:83:28: error: 'PUD_SHIFT' undeclared (first use in this function); did you mean 'NMI_SHIFT'?
83 | return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
| ^~~~~~~~~
| NMI_SHIFT
>> include/linux/pgtable.h:83:42: error: 'PTRS_PER_PUD' undeclared (first use in this function)
83 | return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
| ^~~~~~~~~~~~
include/linux/pgtable.h: At top level:
>> include/linux/pgtable.h:115:40: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
115 | static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
| ^~~~~
| pgd_t
include/linux/pgtable.h:130:32: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
130 | static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
| ^~~~~
| pgd_t
include/linux/pgtable.h:144:15: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
144 | static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
| ^~~~~
| pgd_t
>> include/linux/pgtable.h:144:33: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
144 | static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
| ^~~~~
| pgd_t
include/linux/pgtable.h:152:15: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
152 | static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
| ^~~~~
| pgd_t
>> include/linux/pgtable.h:152:33: error: unknown type name 'p4d_t'; did you mean 'pgd_t'?
152 | static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
| ^~~~~
| pgd_t
include/linux/pgtable.h: In function 'pgd_offset_pgd':
>> include/linux/pgtable.h:90:32: error: 'PGDIR_SHIFT' undeclared (first use in this function)
90 | #define pgd_index(a) (((a) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
| ^~~~~~~~~~~
include/linux/pgtable.h:161:23: note: in expansion of macro 'pgd_index'
161 | return (pgd + pgd_index(address));
| ^~~~~~~~~
>> include/linux/pgtable.h:90:48: error: 'PTRS_PER_PGD' undeclared (first use in this function)
90 | #define pgd_index(a) (((a) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
| ^~~~~~~~~~~~
include/linux/pgtable.h:161:23: note: in expansion of macro 'pgd_index'
161 | return (pgd + pgd_index(address));
| ^~~~~~~~~
include/linux/pgtable.h: At top level:
include/linux/pgtable.h:184:15: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
184 | static inline pmd_t *pmd_off(struct mm_struct *mm, unsigned long va)
| ^~~~~
| pgd_t
include/linux/pgtable.h: In function 'pmd_off':
>> include/linux/pgtable.h:148:20: error: implicit declaration of function 'pmd_offset'; did you mean 'pmd_off'? [-Wimplicit-function-declaration]
148 | #define pmd_offset pmd_offset
| ^~~~~~~~~~
include/linux/pgtable.h:186:16: note: in expansion of macro 'pmd_offset'
186 | return pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, va), va), va), va);
| ^~~~~~~~~~
>> include/linux/pgtable.h:156:20: error: implicit declaration of function 'pud_offset'; did you mean 'pmd_off'? [-Wimplicit-function-declaration]
156 | #define pud_offset pud_offset
| ^~~~~~~~~~
include/linux/pgtable.h:186:27: note: in expansion of macro 'pud_offset'
186 | return pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, va), va), va), va);
| ^~~~~~~~~~
>> include/linux/pgtable.h:186:38: error: implicit declaration of function 'p4d_offset'; did you mean 'pmd_offset'? [-Wimplicit-function-declaration]
186 | return pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, va), va), va), va);
| ^~~~~~~~~~
| pmd_offset
>> include/linux/pgtable.h:148:20: error: returning 'int' from a function with return type 'int *' makes pointer from integer without a cast [-Wint-conversion]
148 | #define pmd_offset pmd_offset
| ^
include/linux/pgtable.h:186:16: note: in expansion of macro 'pmd_offset'
186 | return pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, va), va), va), va);
| ^~~~~~~~~~
include/linux/pgtable.h: At top level:
include/linux/pgtable.h:189:15: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
189 | static inline pmd_t *pmd_off_k(unsigned long va)
| ^~~~~
| pgd_t
include/linux/pgtable.h: In function 'pmd_off_k':
>> include/linux/pgtable.h:148:20: error: returning 'int' from a function with return type 'int *' makes pointer from integer without a cast [-Wint-conversion]
148 | #define pmd_offset pmd_offset
| ^
include/linux/pgtable.h:191:16: note: in expansion of macro 'pmd_offset'
191 | return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
| ^~~~~~~~~~
include/linux/pgtable.h: In function 'virt_to_kpte':
include/linux/pgtable.h:196:9: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
196 | pmd_t *pmd = pmd_off_k(vaddr);
| ^~~~~
| pgd_t
>> include/linux/pgtable.h:198:16: error: implicit declaration of function 'pmd_none' [-Wimplicit-function-declaration]
198 | return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
| ^~~~~~~~
>> include/linux/pgtable.h:119:27: error: implicit declaration of function 'pte_offset_kernel' [-Wimplicit-function-declaration]
119 | #define pte_offset_kernel pte_offset_kernel
| ^~~~~~~~~~~~~~~~~
include/linux/pgtable.h:198:40: note: in expansion of macro 'pte_offset_kernel'
198 | return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
| ^~~~~~~~~~~~~~~~~
include/linux/pgtable.h:198:38: error: pointer/integer type mismatch in conditional expression [-Wint-conversion]
198 | return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
| ^
include/linux/pgtable.h: At top level:
include/linux/pgtable.h:202:29: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
202 | static inline int pmd_young(pmd_t pmd)
| ^~~~~
| pgd_t
include/linux/pgtable.h:209:29: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
209 | static inline int pmd_dirty(pmd_t pmd)
| ^~~~~
| pgd_t
In file included from include/linux/shm.h:6,
from include/linux/sched.h:23,
from include/linux/percpu.h:12,
from include/linux/prandom.h:13,
from kernel/sched/sched.h:8:
include/linux/pgtable.h: In function 'pte_advance_pfn':
include/linux/pgtable.h:404:44: error: 'PFN_PTE_SHIFT' undeclared (first use in this function)
404 | return __pte(pte_val(pte) + (nr << PFN_PTE_SHIFT));
| ^~~~~~~~~~~~~
arch/loongarch/include/asm/page.h:46:37: note: in definition of macro '__pte'
46 | #define __pte(x) ((pte_t) { (x) })
| ^
include/linux/pgtable.h: In function 'set_ptes':
include/linux/pgtable.h:435:17: error: implicit declaration of function 'set_pte'; did you mean 'set_ptes'? [-Wimplicit-function-declaration]
435 | set_pte(ptep, pte);
| ^~~~~~~
| set_ptes
include/linux/pgtable.h: At top level:
include/linux/pgtable.h:461:64: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
461 | unsigned long address, pmd_t *pmdp,
| ^~~~~
| pgd_t
include/linux/pgtable.h:462:41: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
462 | pmd_t entry, int dirty)
| ^~~~~
| pgd_t
include/linux/pgtable.h:468:64: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
468 | unsigned long address, pud_t *pudp,
| ^~~~~
| pgd_t
include/linux/pgtable.h:469:41: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
469 | pud_t entry, int dirty)
| ^~~~~
| pgd_t
include/linux/pgtable.h:485:15: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
485 | static inline pmd_t pmdp_get(pmd_t *pmdp)
| ^~~~~
| pgd_t
include/linux/pgtable.h:485:30: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
485 | static inline pmd_t pmdp_get(pmd_t *pmdp)
| ^~~~~
| pgd_t
include/linux/pgtable.h:492:15: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
492 | static inline pud_t pudp_get(pud_t *pudp)
| ^~~~~
| pgd_t
include/linux/pgtable.h:492:30: error: unknown type name 'pud_t'; did you mean 'pgd_t'?
492 | static inline pud_t pudp_get(pud_t *pudp)
| ^~~~~
| pgd_t
include/linux/pgtable.h:499:15: error: unknown type name 'p4d_t'; did you mean 'pgd_t'?
499 | static inline p4d_t p4dp_get(p4d_t *p4dp)
| ^~~~~
| pgd_t
include/linux/pgtable.h:499:30: error: unknown type name 'p4d_t'; did you mean 'pgd_t'?
499 | static inline p4d_t p4dp_get(p4d_t *p4dp)
| ^~~~~
| pgd_t
include/linux/pgtable.h: In function 'ptep_test_and_clear_young':
include/linux/pgtable.h:519:14: error: implicit declaration of function 'pte_young' [-Wimplicit-function-declaration]
519 | if (!pte_young(pte))
| ^~~~~~~~~
include/linux/pgtable.h:522:55: error: implicit declaration of function 'pte_mkold' [-Wimplicit-function-declaration]
522 | set_pte_at(vma->vm_mm, address, ptep, pte_mkold(pte));
| ^~~~~~~~~
include/linux/pgtable.h:443:66: note: in definition of macro 'set_pte_at'
443 | #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
| ^~~
include/linux/pgtable.h:522:55: error: incompatible type for argument 4 of 'set_ptes'
522 | set_pte_at(vma->vm_mm, address, ptep, pte_mkold(pte));
| ^~~~~~~~~~~~~~
| |
| int
include/linux/pgtable.h:443:66: note: in definition of macro 'set_pte_at'
443 | #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
| ^~~
include/linux/pgtable.h:430:36: note: expected 'pte_t' but argument is of type 'int'
430 | pte_t *ptep, pte_t pte, unsigned int nr)
| ~~~~~~^~~
include/linux/pgtable.h: At top level:
include/linux/pgtable.h:544:45: error: unknown type name 'pmd_t'; did you mean 'pgd_t'?
544 | pmd_t *pmdp)
| ^~~~~
vim +22 include/linux/pgtable.h
fbd71844852c94 include/asm-generic/pgtable.h Ben Hutchings 2011-02-27 19
c2febafc67734a include/asm-generic/pgtable.h Kiryl Shutsemau 2017-03-09 20 #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
c2febafc67734a include/asm-generic/pgtable.h Kiryl Shutsemau 2017-03-09 21 defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
c2febafc67734a include/asm-generic/pgtable.h Kiryl Shutsemau 2017-03-09 @22 #error CONFIG_PGTABLE_LEVELS is not consistent with __PAGETABLE_{P4D,PUD,PMD}_FOLDED
235a8f0286d3de include/asm-generic/pgtable.h Kiryl Shutsemau 2015-04-14 23 #endif
235a8f0286d3de include/asm-generic/pgtable.h Kiryl Shutsemau 2015-04-14 24
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 25 /*
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 26 * On almost all architectures and configurations, 0 can be used as the
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 27 * upper ceiling to free_pgtables(): on many architectures it has the same
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 28 * effect as using TASK_SIZE. However, there is one configuration which
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 29 * must impose a more careful limit, to avoid freeing kernel pgtables.
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 30 */
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 31 #ifndef USER_PGTABLES_CEILING
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 32 #define USER_PGTABLES_CEILING 0UL
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 33 #endif
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 34
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 35 /*
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 36 * This defines the first usable user address. Platforms
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 37 * can override its value with custom FIRST_USER_ADDRESS
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 38 * defined in their respective <asm/pgtable.h>.
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 39 */
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 40 #ifndef FIRST_USER_ADDRESS
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 41 #define FIRST_USER_ADDRESS 0UL
fac7757e1fb05b include/linux/pgtable.h Anshuman Khandual 2021-06-30 42 #endif
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 43
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 44 /*
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 45 * This defines the generic helper for accessing PMD page
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 46 * table page. Although platforms can still override this
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 47 * via their respective <asm/pgtable.h>.
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 48 */
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 49 #ifndef pmd_pgtable
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 50 #define pmd_pgtable(pmd) pmd_page(pmd)
1c2f7d14d84f76 include/linux/pgtable.h Anshuman Khandual 2021-06-30 51 #endif
6ee8630e02be6d include/asm-generic/pgtable.h Hugh Dickins 2013-04-29 52
e06d03d5590ae1 include/linux/pgtable.h Matthew Wilcox (Oracle 2024-03-26 53) #define pmd_folio(pmd) page_folio(pmd_page(pmd))
e06d03d5590ae1 include/linux/pgtable.h Matthew Wilcox (Oracle 2024-03-26 54)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 55 /*
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 56 * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 57 *
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 58 * The pXx_index() functions return the index of the entry in the page
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 59 * table page which would control the given virtual address
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 60 *
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 61 * As these functions may be used by the same code for different levels of
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 62 * the page table folding, they are always available, regardless of
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 63 * CONFIG_PGTABLE_LEVELS value. For the folded levels they simply return 0
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 64 * because in such cases PTRS_PER_PxD equals 1.
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 65 */
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 66
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 67 static inline unsigned long pte_index(unsigned long address)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 68 {
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @69 return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 70 }
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 71
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 72 #ifndef pmd_index
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 73 static inline unsigned long pmd_index(unsigned long address)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 74 {
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @75 return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 76 }
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 77 #define pmd_index pmd_index
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 78 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 79
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 80 #ifndef pud_index
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 81 static inline unsigned long pud_index(unsigned long address)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 82 {
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @83 return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 84 }
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 85 #define pud_index pud_index
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 86 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 87
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 88 #ifndef pgd_index
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 89 /* Must be a compile-time constant, so implement it as a macro */
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @90 #define pgd_index(a) (((a) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 91 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 92
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 93 #ifndef kernel_pte_init
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 94 static inline void kernel_pte_init(void *addr)
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 95 {
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 96 }
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 97 #define kernel_pte_init kernel_pte_init
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 98 #endif
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 99
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 100 #ifndef pmd_init
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 101 static inline void pmd_init(void *addr)
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 102 {
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 103 }
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 104 #define pmd_init pmd_init
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 105 #endif
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 106
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 107 #ifndef pud_init
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 108 static inline void pud_init(void *addr)
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 109 {
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 110 }
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 111 #define pud_init pud_init
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 112 #endif
7269ed4af34418 include/linux/pgtable.h Bibo Mao 2024-11-04 113
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 114 #ifndef pte_offset_kernel
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @115 static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 116 {
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 117 return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 118 }
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @119 #define pte_offset_kernel pte_offset_kernel
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 120 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 121
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 122 #ifdef CONFIG_HIGHPTE
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 123 #define __pte_map(pmd, address) \
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 124 ((pte_t *)kmap_local_page(pmd_page(*(pmd))) + pte_index((address)))
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 125 #define pte_unmap(pte) do { \
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 126 kunmap_local((pte)); \
a349d72fd9efc8 include/linux/pgtable.h Hugh Dickins 2023-07-11 127 rcu_read_unlock(); \
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 128 } while (0)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 129 #else
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 130 static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 131 {
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 132 return pte_offset_kernel(pmd, address);
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 133 }
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 134 static inline void pte_unmap(pte_t *pte)
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 135 {
a349d72fd9efc8 include/linux/pgtable.h Hugh Dickins 2023-07-11 136 rcu_read_unlock();
0d940a9b270b92 include/linux/pgtable.h Hugh Dickins 2023-06-08 137 }
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 138 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 139
13cf577e6b66a1 include/linux/pgtable.h Hugh Dickins 2023-07-11 140 void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
13cf577e6b66a1 include/linux/pgtable.h Hugh Dickins 2023-07-11 141
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 142 /* Find an entry in the second-level page table.. */
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 143 #ifndef pmd_offset
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @144 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 145 {
9cf6fa24584431 include/linux/pgtable.h Aneesh Kumar K.V 2021-07-07 146 return pud_pgtable(*pud) + pmd_index(address);
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 147 }
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @148 #define pmd_offset pmd_offset
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 149 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 150
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 151 #ifndef pud_offset
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @152 static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 153 {
dc4875f0e791de include/linux/pgtable.h Aneesh Kumar K.V 2021-07-07 154 return p4d_pgtable(*p4d) + pud_index(address);
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 155 }
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 @156 #define pud_offset pud_offset
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 157 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 158
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 159 static inline pgd_t *pgd_offset_pgd(pgd_t *pgd, unsigned long address)
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 160 {
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 161 return (pgd + pgd_index(address));
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 162 };
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 163
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 164 /*
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 165 * a shortcut to get a pgd_t in a given mm
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 166 */
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 167 #ifndef pgd_offset
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 168 #define pgd_offset(mm, address) pgd_offset_pgd((mm)->pgd, (address))
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 169 #endif
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 170
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 171 /*
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 172 * a shortcut which implies the use of the kernel's pgd, instead
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 173 * of a process's
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 174 */
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 175 #define pgd_offset_k(address) pgd_offset(&init_mm, (address))
974b9b2c68f3d3 include/linux/pgtable.h Mike Rapoport 2020-06-08 176
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 177 /*
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 178 * In many cases it is known that a virtual address is mapped at PMD or PTE
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 179 * level, so instead of traversing all the page table levels, we can get a
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 180 * pointer to the PMD entry in user or kernel page table or translate a virtual
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 181 * address to the pointer in the PTE in the kernel page tables with simple
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 182 * helpers.
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 183 */
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 184 static inline pmd_t *pmd_off(struct mm_struct *mm, unsigned long va)
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 185 {
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 @186 return pmd_offset(pud_offset(p4d_offset(pgd_offset(mm, va), va), va), va);
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 187 }
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 188
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 189 static inline pmd_t *pmd_off_k(unsigned long va)
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 190 {
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 191 return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 192 }
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 193
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 194 static inline pte_t *virt_to_kpte(unsigned long vaddr)
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 195 {
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 196 pmd_t *pmd = pmd_off_k(vaddr);
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 197
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 @198 return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 199 }
e05c7b1f2bc4b7 include/linux/pgtable.h Mike Rapoport 2020-06-08 200
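
The cascade above points at an include cycle rather than at pgtable.h itself.
Per the "In file included from" chain at the top of the log, the patch turns
the inclusion order into

	arch/loongarch/include/asm/pgtable.h
	  -> linux/kfence.h -> linux/mm.h -> linux/pgtable.h

so linux/pgtable.h is parsed while asm/pgtable.h is still only partially
processed: its include guard is already set, but the pmd_t/pud_t/p4d_t and
PTRS_PER_*/PMD_SHIFT definitions below the new #include have not been seen
yet, which matches every error reported above.
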
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 2/2] kfence: allow change number of object by early parameter
2025-12-18 6:39 ` [PATCH v2 2/2] kfence: allow change number of object by early parameter yuan linyu
2025-12-18 8:56 ` Marco Elver
@ 2025-12-20 14:59 ` kernel test robot
1 sibling, 0 replies; 14+ messages in thread
From: kernel test robot @ 2025-12-20 14:59 UTC (permalink / raw)
To: yuan linyu, Alexander Potapenko, Marco Elver, Dmitry Vyukov,
Andrew Morton, Huacai Chen, WANG Xuerui, kasan-dev, loongarch
Cc: llvm, oe-kbuild-all, Linux Memory Management List, linux-kernel,
yuan linyu
Hi yuan,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on drm-misc/drm-misc-next linus/master v6.19-rc1 next-20251219]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/yuan-linyu/LoongArch-kfence-avoid-use-CONFIG_KFENCE_NUM_OBJECTS/20251218-144322
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20251218063916.1433615-3-yuanlinyu%40honor.com
patch subject: [PATCH v2 2/2] kfence: allow change number of object by early parameter
config: i386-buildonly-randconfig-001-20251219 (https://download.01.org/0day-ci/archive/20251220/202512202213.aA8qY41g-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251220/202512202213.aA8qY41g-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512202213.aA8qY41g-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> mm/kfence/core.c:997:16: warning: variable 'nr_pages_covered' set but not used [-Wunused-but-set-variable]
997 | unsigned long nr_pages_covered, covered_size;
| ^
1 warning generated.
vim +/nr_pages_covered +997 mm/kfence/core.c
991
992 static int kfence_init_late(void)
993 {
994 unsigned long nr_pages_meta = KFENCE_METADATA_SIZE / PAGE_SIZE;
995 unsigned long addr = (unsigned long)__kfence_pool;
996 unsigned long free_size = __kfence_pool_size;
> 997 unsigned long nr_pages_covered, covered_size;
998 int err = -ENOMEM;
999
1000 kfence_alloc_covered_order = ilog2(__kfence_num_objects) + 2;
1001 kfence_alloc_covered_mask = (1 << kfence_alloc_covered_order) - 1;
1002 covered_size = PAGE_ALIGN(KFENCE_COVERED_SIZE);
1003 nr_pages_covered = (covered_size / PAGE_SIZE);
1004 #ifdef CONFIG_CONTIG_ALLOC
1005 struct page *pages;
1006
1007 pages = alloc_contig_pages(__kfence_pool_pages, GFP_KERNEL, first_online_node,
1008 NULL);
1009 if (!pages)
1010 return -ENOMEM;
1011
1012 __kfence_pool = page_to_virt(pages);
1013 pages = alloc_contig_pages(nr_pages_covered, GFP_KERNEL, first_online_node,
1014 NULL);
1015 if (!pages)
1016 goto free_pool;
1017 alloc_covered = page_to_virt(pages);
1018 pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,
1019 NULL);
1020 if (pages)
1021 kfence_metadata_init = page_to_virt(pages);
1022 #else
1023 if (__kfence_pool_pages > MAX_ORDER_NR_PAGES ||
1024 nr_pages_meta > MAX_ORDER_NR_PAGES) {
1025 pr_warn("KFENCE_NUM_OBJECTS too large for buddy allocator\n");
1026 return -EINVAL;
1027 }
1028
1029 __kfence_pool = alloc_pages_exact(__kfence_pool_size, GFP_KERNEL);
1030 if (!__kfence_pool)
1031 return -ENOMEM;
1032
1033 alloc_covered = alloc_pages_exact(covered_size, GFP_KERNEL);
1034 if (!alloc_covered)
1035 goto free_pool;
1036 kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);
1037 #endif
1038
1039 if (!kfence_metadata_init)
1040 goto free_cover;
1041
1042 memzero_explicit(kfence_metadata_init, KFENCE_METADATA_SIZE);
1043 addr = kfence_init_pool();
1044 if (!addr) {
1045 kfence_init_enable();
1046 kfence_debugfs_init();
1047 return 0;
1048 }
1049
1050 pr_err("%s failed\n", __func__);
1051 free_size = __kfence_pool_size - (addr - (unsigned long)__kfence_pool);
1052 err = -EBUSY;
1053
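
For context, nr_pages_covered is assigned at line 1003 but only read inside
the CONFIG_CONTIG_ALLOC block (line 1013), so configs without CONTIG_ALLOC
set it and never use it. A minimal sketch of one possible fix, assuming no
other users of the variable -- scope it to the branch that reads it:

	unsigned long covered_size = PAGE_ALIGN(KFENCE_COVERED_SIZE);

	#ifdef CONFIG_CONTIG_ALLOC
		/* alloc_contig_pages() takes a page count; computing it only
		 * here keeps !CONFIG_CONTIG_ALLOC builds free of the
		 * set-but-unused warning. */
		unsigned long nr_pages_covered = covered_size / PAGE_SIZE;
		struct page *pages;

		pages = alloc_contig_pages(nr_pages_covered, GFP_KERNEL,
					   first_online_node, NULL);
		if (!pages)
			goto free_pool;
		alloc_covered = page_to_virt(pages);
	#else
		/* alloc_pages_exact() takes bytes, so no page count needed. */
		alloc_covered = alloc_pages_exact(covered_size, GFP_KERNEL);
		if (!alloc_covered)
			goto free_pool;
	#endif
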
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
2025-12-20 5:43 ` Enze Li
@ 2025-12-22 9:16 ` yuanlinyu
2025-12-22 9:37 ` Enze Li
0 siblings, 1 reply; 14+ messages in thread
From: yuanlinyu @ 2025-12-22 9:16 UTC (permalink / raw)
To: Enze Li, Huacai Chen
Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel,
enze.li
> From: Enze Li <lienze@kylinos.cn>
> Sent: Saturday, December 20, 2025 1:44 PM
> To: Huacai Chen <chenhuacai@kernel.org>; yuanlinyu <yuanlinyu@honor.com>
> Cc: Alexander Potapenko <glider@google.com>; Marco Elver
> <elver@google.com>; Dmitry Vyukov <dvyukov@google.com>; Andrew Morton
> <akpm@linux-foundation.org>; WANG Xuerui <kernel@xen0n.name>;
> kasan-dev@googlegroups.com; linux-mm@kvack.org; loongarch@lists.linux.dev;
> linux-kernel@vger.kernel.org; enze.li@gmx.com
> Subject: Re: [PATCH v2 1/2] LoongArch: kfence: avoid use
> CONFIG_KFENCE_NUM_OBJECTS
>
> On 2025/12/19 10:13, Huacai Chen wrote:
> > Hi, Enze,
> >
> > On Thu, Dec 18, 2025 at 2:39 PM yuan linyu <yuanlinyu@honor.com> wrote:
> >>
> >> use common kfence macro KFENCE_POOL_SIZE for KFENCE_AREA_SIZE
> >> definition
> >>
> >> Signed-off-by: yuan linyu <yuanlinyu@honor.com>
> >> ---
> >> arch/loongarch/include/asm/pgtable.h | 3 ++-
> >> 1 file changed, 2 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/loongarch/include/asm/pgtable.h
> >> b/arch/loongarch/include/asm/pgtable.h
> >> index f41a648a3d9e..e9966c9f844f 100644
> >> --- a/arch/loongarch/include/asm/pgtable.h
> >> +++ b/arch/loongarch/include/asm/pgtable.h
> >> @@ -10,6 +10,7 @@
> >> #define _ASM_PGTABLE_H
> >>
> >> #include <linux/compiler.h>
> >> +#include <linux/kfence.h>
> >> #include <asm/addrspace.h>
> >> #include <asm/asm.h>
> >> #include <asm/page.h>
> >> @@ -96,7 +97,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE /
> sizeof(unsigned long)];
> >> #define MODULES_END (MODULES_VADDR + SZ_256M)
> >>
> >> #ifdef CONFIG_KFENCE
> >> -#define KFENCE_AREA_SIZE (((CONFIG_KFENCE_NUM_OBJECTS + 1)
> * 2 + 2) * PAGE_SIZE)
> >> +#define KFENCE_AREA_SIZE (KFENCE_POOL_SIZE + (2 *
> PAGE_SIZE))
> > Can you remember why you didn't use KFENCE_POOL_SIZE at the first place?
>
> I don't recall the exact reason off the top of my head, but I believe it was due to
> complex dependency issues with the header files where KFENCE_POOL_SIZE is
> defined. To avoid those complications, we likely opted to use
> KFENCE_NUM_OBJECTS directly.
>
> I checked out the code at commit
> (6ad3df56bb199134800933df2afcd7df3b03ef33 "LoongArch: Add KFENCE
> (Kernel
> Electric-Fence) support") and encountered the following errors when compiling
> with this patch applied.
>
> 8<------------------------------------------------------
> CC arch/loongarch/kernel/asm-offsets.s
> In file included from ./arch/loongarch/include/asm/pgtable.h:13,
> from ./include/linux/pgtable.h:6,
> from ./include/linux/mm.h:29,
> from arch/loongarch/kernel/asm-offsets.c:9:
> ./include/linux/kfence.h:93:35: warning: 'struct kmem_cache' declared inside
> parameter list will not be visible outside of this definition or declaration
> 93 | void kfence_shutdown_cache(struct kmem_cache *s);
> | ^~~~~~~~~~
> ./include/linux/kfence.h:99:29: warning: 'struct kmem_cache' declared inside
> parameter list will not be visible outside of this definition or declaration
> 99 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
> | ^~~~~~~~~~
> ./include/linux/kfence.h:117:50: warning: 'struct kmem_cache' declared inside
> parameter list will not be visible outside of this definition or declaration
> 117 | static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t
> size, gfp_t flags)
> |
> ^~~~~~~~~~
> ./include/linux/kfence.h: In function 'kfence_alloc':
> ./include/linux/kfence.h:128:31: error: passing argument 1 of '__kfence_alloc'
> from incompatible pointer type [-Wincompatible-pointer-types]
> 128 | return __kfence_alloc(s, size, flags);
> | ^
> | |
> | struct kmem_cache *
> ./include/linux/kfence.h:99:41: note: expected 'struct kmem_cache *' but
> argument is of type 'struct kmem_cache *'
> 99 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
> | ~~~~~~~~~~~~~~~~~~~^
> ------------------------------------------------------>8
>
> Similarly, after applying this patch to the latest code
> (dd9b004b7ff3289fb7bae35130c0a5c0537266af "Merge tag 'trace-v6.19-rc1'")
> from the master branch of the Linux repository and enabling KFENCE, I
> encountered the following compilation errors.
>
> 8<------------------------------------------------------
> CC arch/loongarch/kernel/asm-offsets.s
> In file included from ./arch/loongarch/include/asm/pgtable.h:13,
> from ./include/linux/pgtable.h:6,
> from ./include/linux/mm.h:31,
> from arch/loongarch/kernel/asm-offsets.c:11:
> ./include/linux/kfence.h:97:35: warning: 'struct kmem_cache' declared inside
> parameter list will not be visible outside of this definition or declaration
> 97 | void kfence_shutdown_cache(struct kmem_cache *s);
> | ^~~~~~~~~~
> ./include/linux/kfence.h:103:29: warning: 'struct kmem_cache' declared inside
> parameter list will not be visible outside of this definition or declaration
> 103 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
> | ^~~~~~~~~~
> ./include/linux/kfence.h:121:50: warning: 'struct kmem_cache' declared inside
> parameter list will not be visible outside of this definition or declaration
> 121 | static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t
> size, gfp_t flags)
> |
> ^~~~~~~~~~
> ./include/linux/kfence.h: In function 'kfence_alloc':
> ./include/linux/kfence.h:132:31: error: passing argument 1 of '__kfence_alloc'
> from incompatible pointer type [-Wincompatible-pointer-types]
> 132 | return __kfence_alloc(s, size, flags);
> | ^
> | |
> | struct kmem_cache *
> ./include/linux/kfence.h:103:41: note: expected 'struct kmem_cache *'
> but argument is of type 'struct kmem_cache *'
> 103 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
> | ~~~~~~~~~~~~~~~~~~~^
> ------------------------------------------------------>8
>
> So, this patch currently runs into compilation issues. linyu probably didn't have
> KFENCE enabled when compiling locally, which is why this error was missed.
> You can enable it as follows:
>
> Kernel hacking
> Memory Debugging
> [*] KFENCE: low-overhead sampling-based memory safety
Hi Enze,
Sorry, I only tested on arm64.
Could you help fix the compile issue and provide a correct change?
Otherwise, I will need some time to resolve the issue.
>
> Thanks,
> Enze
>
> <...>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS
2025-12-22 9:16 ` yuanlinyu
@ 2025-12-22 9:37 ` Enze Li
0 siblings, 0 replies; 14+ messages in thread
From: Enze Li @ 2025-12-22 9:37 UTC (permalink / raw)
To: yuanlinyu, Huacai Chen
Cc: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel,
enze.li
On 12/22/25 5:16 PM, yuanlinyu wrote:
>> From: Enze Li <lienze@kylinos.cn>
>> Sent: Saturday, December 20, 2025 1:44 PM
>> To: Huacai Chen <chenhuacai@kernel.org>; yuanlinyu <yuanlinyu@honor.com>
>> Cc: Alexander Potapenko <glider@google.com>; Marco Elver
>> <elver@google.com>; Dmitry Vyukov <dvyukov@google.com>; Andrew Morton
>> <akpm@linux-foundation.org>; WANG Xuerui <kernel@xen0n.name>;
>> kasan-dev@googlegroups.com; linux-mm@kvack.org; loongarch@lists.linux.dev;
>> linux-kernel@vger.kernel.org; enze.li@gmx.com
>> Subject: Re: [PATCH v2 1/2] LoongArch: kfence: avoid use
>> CONFIG_KFENCE_NUM_OBJECTS
>>
>> On 2025/12/19 10:13, Huacai Chen wrote:
>>> Hi, Enze,
>>>
>>> On Thu, Dec 18, 2025 at 2:39 PM yuan linyu <yuanlinyu@honor.com> wrote:
>>>>
>>>> use common kfence macro KFENCE_POOL_SIZE for KFENCE_AREA_SIZE
>>>> definition
>>>>
>>>> Signed-off-by: yuan linyu <yuanlinyu@honor.com>
>>>> ---
>>>> arch/loongarch/include/asm/pgtable.h | 3 ++-
>>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/loongarch/include/asm/pgtable.h
>>>> b/arch/loongarch/include/asm/pgtable.h
>>>> index f41a648a3d9e..e9966c9f844f 100644
>>>> --- a/arch/loongarch/include/asm/pgtable.h
>>>> +++ b/arch/loongarch/include/asm/pgtable.h
>>>> @@ -10,6 +10,7 @@
>>>> #define _ASM_PGTABLE_H
>>>>
>>>> #include <linux/compiler.h>
>>>> +#include <linux/kfence.h>
>>>> #include <asm/addrspace.h>
>>>> #include <asm/asm.h>
>>>> #include <asm/page.h>
>>>> @@ -96,7 +97,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE /
>> sizeof(unsigned long)];
>>>> #define MODULES_END (MODULES_VADDR + SZ_256M)
>>>>
>>>> #ifdef CONFIG_KFENCE
>>>> -#define KFENCE_AREA_SIZE (((CONFIG_KFENCE_NUM_OBJECTS + 1)
>> * 2 + 2) * PAGE_SIZE)
>>>> +#define KFENCE_AREA_SIZE (KFENCE_POOL_SIZE + (2 *
>> PAGE_SIZE))
>>> Can you remember why you didn't use KFENCE_POOL_SIZE at the first place?
>>
>> I don't recall the exact reason off the top of my head, but I believe it was due to
>> complex dependency issues with the header files where KFENCE_POOL_SIZE is
>> defined. To avoid those complications, we likely opted to use
>> KFENCE_NUM_OBJECTS directly.
>>
>> I checked out the code at commit
>> (6ad3df56bb199134800933df2afcd7df3b03ef33 "LoongArch: Add KFENCE
>> (Kernel
>> Electric-Fence) support") and encountered the following errors when compiling
>> with this patch applied.
>>
>> 8<------------------------------------------------------
>> CC arch/loongarch/kernel/asm-offsets.s
>> In file included from ./arch/loongarch/include/asm/pgtable.h:13,
>> from ./include/linux/pgtable.h:6,
>> from ./include/linux/mm.h:29,
>> from arch/loongarch/kernel/asm-offsets.c:9:
>> ./include/linux/kfence.h:93:35: warning: 'struct kmem_cache' declared inside
>> parameter list will not be visible outside of this definition or declaration
>> 93 | void kfence_shutdown_cache(struct kmem_cache *s);
>> | ^~~~~~~~~~
>> ./include/linux/kfence.h:99:29: warning: 'struct kmem_cache' declared inside
>> parameter list will not be visible outside of this definition or declaration
>> 99 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
>> | ^~~~~~~~~~
>> ./include/linux/kfence.h:117:50: warning: 'struct kmem_cache' declared inside
>> parameter list will not be visible outside of this definition or declaration
>> 117 | static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t
>> size, gfp_t flags)
>> |
>> ^~~~~~~~~~
>> ./include/linux/kfence.h: In function 'kfence_alloc':
>> ./include/linux/kfence.h:128:31: error: passing argument 1 of '__kfence_alloc'
>> from incompatible pointer type [-Wincompatible-pointer-types]
>> 128 | return __kfence_alloc(s, size, flags);
>> | ^
>> | |
>> | struct kmem_cache *
>> ./include/linux/kfence.h:99:41: note: expected 'struct kmem_cache *' but
>> argument is of type 'struct kmem_cache *'
>> 99 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
>> | ~~~~~~~~~~~~~~~~~~~^
>> ------------------------------------------------------>8
>>
>> Similarly, after applying this patch to the latest code
>> (dd9b004b7ff3289fb7bae35130c0a5c0537266af "Merge tag 'trace-v6.19-rc1'")
>> from the master branch of the Linux repository and enabling KFENCE, I
>> encountered the following compilation errors.
>>
>> 8<------------------------------------------------------
>> CC arch/loongarch/kernel/asm-offsets.s
>> In file included from ./arch/loongarch/include/asm/pgtable.h:13,
>> from ./include/linux/pgtable.h:6,
>> from ./include/linux/mm.h:31,
>> from arch/loongarch/kernel/asm-offsets.c:11:
>> ./include/linux/kfence.h:97:35: warning: 'struct kmem_cache' declared inside
>> parameter list will not be visible outside of this definition or declaration
>> 97 | void kfence_shutdown_cache(struct kmem_cache *s);
>> | ^~~~~~~~~~
>> ./include/linux/kfence.h:103:29: warning: 'struct kmem_cache' declared inside
>> parameter list will not be visible outside of this definition or declaration
>> 103 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
>> | ^~~~~~~~~~
>> ./include/linux/kfence.h:121:50: warning: 'struct kmem_cache' declared inside
>> parameter list will not be visible outside of this definition or declaration
>> 121 | static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t
>> size, gfp_t flags)
>> |
>> ^~~~~~~~~~
>> ./include/linux/kfence.h: In function 'kfence_alloc':
>> ./include/linux/kfence.h:132:31: error: passing argument 1 of '__kfence_alloc'
>> from incompatible p ointer type [-Wincompatible-pointer-types]
>> 132 | return __kfence_alloc(s, size, flags);
>> | ^
>> | |
>> | struct kmem_cache *
>> ./include/linux/kfence.h:103:41: note: expected 'struct kmem_cache *'
>> but argument is of type 'struct kmem_cache *'
>> 103 | void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
>> | ~~~~~~~~~~~~~~~~~~~^
>> ------------------------------------------------------>8
>>
>> So, this patch currently runs into compilation issues. linyu probably didn't have
>> KFENCE enabled when compiling locally, which is why this error was missed.
>> You can enable it as follows:
>>
>> Kernel hacking
>> Memory Debugging
>> [*] KFENCE: low-overhead sampling-based memory safety
>
> Hi Enze,
>
> Sorry only test on arm64.
>
> Could you help fix the compile issue and provide a correct change ?
>
> Or I need sometime to resolve the issue.
>
Thanks for pointing out this issue. I've taken a look at the
compilation problem you mentioned. Based on my current understanding,
the header dependencies are quite complex, and I couldn't find a
straightforward fix without potentially affecting other parts of the
codebase.
Given the risk of introducing broader compilation errors, I think it
might be safer to hold off on using the KFENCE_POOL_SIZE macro for now,
unless there's a clear and safe path forward that I might have missed.
I'm happy to discuss this further if you have any insights or suggestions.
Thanks,
Enze
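
For reference, one pattern the tree already uses for this kind of header
knot is a small dedicated header with no heavy dependencies (compare
linux/mm_types.h). A rough sketch under that assumption -- the file name
and the split are hypothetical, and this only covers the static
CONFIG_KFENCE_NUM_OBJECTS case, not the boot-time pool size of patch 2:

	/* Hypothetical include/linux/kfence_types.h: carries only the
	 * pool-size constant and depends on nothing beyond asm/page.h,
	 * so asm/pgtable.h could include it without dragging in
	 * linux/mm.h and re-entering linux/pgtable.h. */
	#ifndef _LINUX_KFENCE_TYPES_H
	#define _LINUX_KFENCE_TYPES_H

	#include <asm/page.h>

	/* Two pages per object (data + guard) plus two extra guard pages,
	 * matching the existing pool layout. */
	#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)

	#endif /* _LINUX_KFENCE_TYPES_H */

arch/loongarch/include/asm/pgtable.h would then include this header instead
of linux/kfence.h, and linux/kfence.h itself would include it so there is a
single definition.
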
^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH v2 2/2] kfence: allow change number of object by early parameter
2025-12-18 10:23 ` Marco Elver
2025-12-19 4:36 ` yuanlinyu
@ 2025-12-29 4:01 ` yuanlinyu
1 sibling, 0 replies; 14+ messages in thread
From: yuanlinyu @ 2025-12-29 4:01 UTC (permalink / raw)
To: Marco Elver
Cc: Alexander Potapenko, Dmitry Vyukov, Andrew Morton, Huacai Chen,
WANG Xuerui, kasan-dev, linux-mm, loongarch, linux-kernel
> From: Marco Elver <elver@google.com>
> Sent: Thursday, December 18, 2025 6:24 PM
> To: yuanlinyu <yuanlinyu@honor.com>
> Cc: Alexander Potapenko <glider@google.com>; Dmitry Vyukov
> <dvyukov@google.com>; Andrew Morton <akpm@linux-foundation.org>;
> Huacai Chen <chenhuacai@kernel.org>; WANG Xuerui <kernel@xen0n.name>;
> kasan-dev@googlegroups.com; linux-mm@kvack.org; loongarch@lists.linux.dev;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 2/2] kfence: allow change number of object by early
> parameter
> > Could you share the better design idea?
>
> Hot-patchable constants, similar to static branches/jump labels. This
> had been discussed in the past (can't find the link now), but it's not
> trivial to implement unfortunately.
>
Hi Marco,
If the concern is about adding one more global,
how about the code below?
/* The pool of pages used for guard pages and objects, with the number of objects stored in the lower bits. */
unsigned long __kfence_pool_objects __read_mostly;
static __always_inline bool is_kfence_address(const void *addr)
{
	return unlikely((unsigned long)((char *)addr - KFENCE_POOL_ADDR) < KFENCE_POOL_LEN && __kfence_pool_objects);
}
It may generate one or two more instructions compared with the original patch.
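
For illustration, a sketch of how such a packed global might be decoded,
assuming the pool base stays page aligned so the low PAGE_SHIFT bits are
free to carry the object count; the macro names mirror the ones used above,
but these definitions are made up for the sketch:

	/* Pool base in the page-aligned high bits, object count below. */
	#define KFENCE_POOL_ADDR ((char *)(__kfence_pool_objects & PAGE_MASK))
	#define KFENCE_POOL_NUM  (__kfence_pool_objects & ~PAGE_MASK)
	/* Same layout as the static pool, but with the runtime count. */
	#define KFENCE_POOL_LEN  ((KFENCE_POOL_NUM + 1) * 2 * PAGE_SIZE)

The trailing "&& __kfence_pool_objects" check then doubles as the enabled
test, since the global stays zero until the pool is initialized.
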
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2025-12-29 4:01 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-18 6:39 [PATCH v2 0/2] kfence: allow change objects number yuan linyu
2025-12-18 6:39 ` [PATCH v2 1/2] LoongArch: kfence: avoid use CONFIG_KFENCE_NUM_OBJECTS yuan linyu
2025-12-19 2:13 ` Huacai Chen
2025-12-20 5:43 ` Enze Li
2025-12-22 9:16 ` yuanlinyu
2025-12-22 9:37 ` Enze Li
2025-12-20 14:34 ` kernel test robot
2025-12-18 6:39 ` [PATCH v2 2/2] kfence: allow change number of object by early parameter yuan linyu
2025-12-18 8:56 ` Marco Elver
2025-12-18 10:18 ` yuanlinyu
2025-12-18 10:23 ` Marco Elver
2025-12-19 4:36 ` yuanlinyu
2025-12-29 4:01 ` yuanlinyu
2025-12-20 14:59 ` kernel test robot