* [PATCH v4 00/10] __vmalloc()/kvmalloc() and no-block support (v4)
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki

This is v4, based on the next-20250929 branch. I consider it complete;
if there are no objections, I would appreciate it being picked up.

https://lore.kernel.org/all/20250704152537.55724-1-urezki@gmail.com/
https://lkml.org/lkml/2025/8/7/332
https://lore.kernel.org/all/20251001192647.195204-1-urezki@gmail.com/

v3 -> v4:
 - Collected Acked-by/Reviewed-by tags;
 - Fixed "Warning: mm/vmalloc.c:3889 bad line:" reported by the robot.

Uladzislau Rezki (Sony) (10):
  lib/test_vmalloc: add no_block_alloc_test case
  lib/test_vmalloc: Remove xfail condition check
  mm/vmalloc: Support non-blocking GFP flags in alloc_vmap_area()
  mm/vmalloc: Defer freeing partly initialized vm_struct
  mm/vmalloc: Handle non-blocking GFP in __vmalloc_area_node()
  mm/kasan: Support non-blocking GFP in kasan_populate_vmalloc()
  kmsan: Remove hard-coded GFP_KERNEL flags
  mm: Skip might_alloc() warnings when PF_MEMALLOC is set
  mm/vmalloc: Update __vmalloc_node_range() documentation
  mm: kvmalloc: Add non-blocking support for vmalloc

 include/linux/kmsan.h    |   6 +-
 include/linux/sched/mm.h |   3 +
 include/linux/vmalloc.h  |   8 +-
 lib/test_vmalloc.c       |  28 ++++++-
 mm/internal.h            |   4 +-
 mm/kasan/shadow.c        |  12 +--
 mm/kmsan/shadow.c        |   6 +-
 mm/percpu-vm.c           |   2 +-
 mm/slub.c                |  19 +++--
 mm/vmalloc.c             | 153 ++++++++++++++++++++++++++++++---------
 10 files changed, 179 insertions(+), 62 deletions(-)
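
Below is a minimal usage sketch (hypothetical helper name) of what
the series enables: patches 1-9 make __vmalloc() safe for GFP_ATOMIC
and GFP_NOWAIT, and patch 10 extends this to kvmalloc():

<snip>
#include <linux/vmalloc.h>

/*
 * Allocation from a context that must not sleep. Non-blocking
 * requests may fail, since reclaim is not allowed, so callers
 * must handle NULL.
 */
static void *grab_buf_nowait(unsigned long size)
{
	return __vmalloc(size, GFP_NOWAIT);
}
<snip>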

-- 
2.47.3




* [PATCH v4 01/10] lib/test_vmalloc: add no_block_alloc_test case
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki

Introduce a new test case "no_block_alloc_test" that verifies
non-blocking allocations using __vmalloc() with GFP_ATOMIC and
GFP_NOWAIT flags.

It is recommended to build the kernel with CONFIG_DEBUG_ATOMIC_SLEEP
enabled to help catch "sleeping while atomic" issues. This test
ensures that memory allocation logic under atomic constraints
does not inadvertently sleep.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 lib/test_vmalloc.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index 2815658ccc37..aae5f4910aff 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -54,6 +54,7 @@ __param(int, run_test_mask, 7,
 		"\t\tid: 256,  name: kvfree_rcu_1_arg_vmalloc_test\n"
 		"\t\tid: 512,  name: kvfree_rcu_2_arg_vmalloc_test\n"
 		"\t\tid: 1024, name: vm_map_ram_test\n"
+		"\t\tid: 2048, name: no_block_alloc_test\n"
 		/* Add a new test case description here. */
 );
 
@@ -283,6 +284,30 @@ static int fix_size_alloc_test(void)
 	return 0;
 }
 
+static int no_block_alloc_test(void)
+{
+	void *ptr;
+	int i;
+
+	for (i = 0; i < test_loop_count; i++) {
+		bool use_atomic = !!(get_random_u8() % 2);
+		gfp_t gfp = use_atomic ? GFP_ATOMIC : GFP_NOWAIT;
+		unsigned long size = (nr_pages > 0 ? nr_pages : 1) * PAGE_SIZE;
+
+		preempt_disable();
+		ptr = __vmalloc(size, gfp);
+		preempt_enable();
+
+		if (!ptr)
+			return -1;
+
+		*((__u8 *)ptr) = 0;
+		vfree(ptr);
+	}
+
+	return 0;
+}
+
 static int
 pcpu_alloc_test(void)
 {
@@ -411,6 +436,7 @@ static struct test_case_desc test_case_array[] = {
 	{ "kvfree_rcu_1_arg_vmalloc_test", kvfree_rcu_1_arg_vmalloc_test, },
 	{ "kvfree_rcu_2_arg_vmalloc_test", kvfree_rcu_2_arg_vmalloc_test, },
 	{ "vm_map_ram_test", vm_map_ram_test, },
+	{ "no_block_alloc_test", no_block_alloc_test, true },
 	/* Add a new test case here. */
 };
 
-- 
2.47.3




* [PATCH v4 02/10] lib/test_vmalloc: Remove xfail condition check
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki

A test marked with "xfail = true" is expected to fail, but that does
not mean it is guaranteed to fail. Remove the "xfail" condition check
so that a test which passes is counted as passed even when it was
expected to fail.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 lib/test_vmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index aae5f4910aff..6521c05c7816 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -500,7 +500,7 @@ static int test_func(void *private)
 		for (j = 0; j < test_repeat_count; j++) {
 			ret = test_case_array[index].test_func();
 
-			if (!ret && !test_case_array[index].xfail)
+			if (!ret)
 				t->data[index].test_passed++;
 			else if (ret && test_case_array[index].xfail)
 				t->data[index].test_xfailed++;
-- 
2.47.3




* [PATCH v4 03/10] mm/vmalloc: Support non-blocking GFP flags in alloc_vmap_area()
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki, Michal Hocko

alloc_vmap_area() currently assumes that sleeping is allowed during
allocation. This is not true for callers that pass non-blocking
GFP flags such as GFP_ATOMIC or GFP_NOWAIT.

This patch adds logic to detect whether the given gfp_mask permits
blocking. It avoids invoking might_sleep() or falling back to the
reclaim path if blocking is not allowed.
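
For reference, blocking permission is detected with the existing
helper from include/linux/gfp.h, which simply tests
__GFP_DIRECT_RECLAIM:

<snip>
static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
{
	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
}
<snip>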

This makes alloc_vmap_area() safer for use in non-sleeping contexts,
where it could previously sleep unexpectedly and trigger warnings.

This is a preparatory step toward allowing both GFP_ATOMIC and
GFP_NOWAIT allocations later in this series.

Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 798b2ed21e46..d83c01caaabe 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2017,6 +2017,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	unsigned long freed;
 	unsigned long addr;
 	unsigned int vn_id;
+	bool allow_block;
 	int purged = 0;
 	int ret;
 
@@ -2028,7 +2029,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 	/* Only reclaim behaviour flags are relevant. */
 	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
-	might_sleep();
+	allow_block = gfpflags_allow_blocking(gfp_mask);
+	might_sleep_if(allow_block);
 
 	/*
 	 * If a VA is obtained from a global heap(if it fails here)
@@ -2062,7 +2064,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		 * This is not a fast path.  Check if yielding is needed. This
 		 * is the only reschedule point in the vmalloc() path.
 		 */
-		cond_resched();
+		if (allow_block)
+			cond_resched();
 	}
 
 	trace_alloc_vmap_area(addr, size, align, vstart, vend, IS_ERR_VALUE(addr));
@@ -2071,8 +2074,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * If an allocation fails, the error value is
 	 * returned. Therefore trigger the overflow path.
 	 */
-	if (IS_ERR_VALUE(addr))
-		goto overflow;
+	if (IS_ERR_VALUE(addr)) {
+		if (allow_block)
+			goto overflow;
+
+		/*
+		 * We cannot trigger any reclaim logic because
+		 * sleeping is not allowed, thus fail the allocation.
+		 */
+		goto out_free_va;
+	}
 
 	va->va_start = addr;
 	va->va_end = addr + size;
@@ -2122,6 +2133,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		pr_warn("vmalloc_node_range for size %lu failed: Address range restricted to %#lx - %#lx\n",
 				size, vstart, vend);
 
+out_free_va:
 	kmem_cache_free(vmap_area_cachep, va);
 	return ERR_PTR(-EBUSY);
 }
-- 
2.47.3




* [PATCH v4 04/10] mm/vmalloc: Defer freeing partly initialized vm_struct
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki, Michal Hocko

__vmalloc_area_node() may call free_vmap_area() or vfree() on
error paths, both of which can sleep. This becomes problematic
if the function is invoked from an atomic context, such as when
GFP_ATOMIC or GFP_NOWAIT is passed via gfp_mask.

To fix this, unify error paths and defer the cleanup of partly
initialized vm_struct objects to a workqueue. This ensures that
freeing happens in process context and avoids invalid sleeps
in atomic regions.
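
For context, llist_add() returns true only when the list was empty
before the add, which is what lets the new defer helper schedule the
work item just once per batch. It is the existing helper from
include/linux/llist.h:

<snip>
static inline bool llist_add(struct llist_node *new, struct llist_head *head)
{
	return llist_add_batch(new, new, head);
}
<snip>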

Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/vmalloc.h |  6 +++++-
 mm/vmalloc.c            | 34 +++++++++++++++++++++++++++++++---
 2 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index eb54b7b3202f..1e43181369f1 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -50,7 +50,11 @@ struct iov_iter;		/* in uio.h */
 #endif
 
 struct vm_struct {
-	struct vm_struct	*next;
+	union {
+		struct vm_struct *next;	  /* Early registration of vm_areas. */
+		struct llist_node llnode; /* Asynchronous freeing on error paths. */
+	};
+
 	void			*addr;
 	unsigned long		size;
 	unsigned long		flags;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d83c01caaabe..9e29dd767c41 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3687,6 +3687,35 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	return nr_allocated;
 }
 
+static LLIST_HEAD(pending_vm_area_cleanup);
+static void cleanup_vm_area_work(struct work_struct *work)
+{
+	struct vm_struct *area, *tmp;
+	struct llist_node *head;
+
+	head = llist_del_all(&pending_vm_area_cleanup);
+	if (!head)
+		return;
+
+	llist_for_each_entry_safe(area, tmp, head, llnode) {
+		if (!area->pages)
+			free_vm_area(area);
+		else
+			vfree(area->addr);
+	}
+}
+
+/*
+ * Helper for __vmalloc_area_node() to defer cleanup
+ * of partially initialized vm_struct in error paths.
+ */
+static DECLARE_WORK(cleanup_vm_area, cleanup_vm_area_work);
+static void defer_vm_area_cleanup(struct vm_struct *area)
+{
+	if (llist_add(&area->llnode, &pending_vm_area_cleanup))
+		schedule_work(&cleanup_vm_area);
+}
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, unsigned int page_shift,
 				 int node)
@@ -3718,8 +3747,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, failed to allocated page array size %lu",
 			nr_small_pages * PAGE_SIZE, array_size);
-		free_vm_area(area);
-		return NULL;
+		goto fail;
 	}
 
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
@@ -3796,7 +3824,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	return area->addr;
 
 fail:
-	vfree(area->addr);
+	defer_vm_area_cleanup(area);
 	return NULL;
 }
 
-- 
2.47.3




* [PATCH v4 05/10] mm/vmalloc: Handle non-blocking GFP in __vmalloc_area_node()
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki, Michal Hocko

Make __vmalloc_area_node() respect non-blocking GFP masks such
as GFP_ATOMIC and GFP_NOWAIT.

- Add memalloc_apply_gfp_scope()/memalloc_restore_scope()
  helpers to apply a proper scope.
- Apply memalloc_apply_gfp_scope()/memalloc_restore_scope()
  around vmap_pages_range() for page table setup.
- Set "nofail" to false if a non-blocking mask is used, as
  they are mutually exclusive.

This is particularly important for page table allocations that
internally use GFP_PGTABLE_KERNEL, which may sleep unless such
scope restrictions are applied. For example:

<snip>
__pte_alloc_kernel()
  pte_alloc_one_kernel(&init_mm);
    pagetable_alloc_noprof(GFP_PGTABLE_KERNEL & ~__GFP_HIGHMEM, 0);
<snip>

Note: in most cases, PTE entries are established only up to the
level required by current vmap space usage, meaning the page tables
are typically fully populated during the mapping process.
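
For the non-blocking case, the helper introduced below relies on
scoped PF_MEMALLOC. A sketch of the underlying primitives, assuming
the memalloc_flags_save()-based implementation in current linux-next:

<snip>
static inline unsigned int memalloc_noreclaim_save(void)
{
	/* Sets PF_MEMALLOC: nested allocations skip direct reclaim. */
	return memalloc_flags_save(PF_MEMALLOC);
}

static inline void memalloc_noreclaim_restore(unsigned int flags)
{
	memalloc_flags_restore(flags);
}
<snip>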

Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/vmalloc.h |  2 ++
 mm/vmalloc.c            | 52 +++++++++++++++++++++++++++++++++--------
 2 files changed, 44 insertions(+), 10 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1e43181369f1..e8e94f90d686 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -332,4 +332,6 @@ bool vmalloc_dump_obj(void *object);
 static inline bool vmalloc_dump_obj(void *object) { return false; }
 #endif
 
+unsigned int memalloc_apply_gfp_scope(gfp_t gfp_mask);
+void memalloc_restore_scope(unsigned int flags);
 #endif /* _LINUX_VMALLOC_H */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 9e29dd767c41..d8bcd87239b5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3716,6 +3716,42 @@ static void defer_vm_area_cleanup(struct vm_struct *area)
 		schedule_work(&cleanup_vm_area);
 }
 
+/*
+ * Page table allocations ignore the external GFP mask. Enforce
+ * it via the memalloc scope API. This is used by vmalloc internals
+ * and KASAN shadow population only.
+ *
+ * GFP to scope mapping:
+ *
+ * non-blocking (no __GFP_DIRECT_RECLAIM) - memalloc_noreclaim_save()
+ * GFP_NOFS - memalloc_nofs_save()
+ * GFP_NOIO - memalloc_noio_save()
+ *
+ * Returns a flag cookie to pair with restore.
+ */
+unsigned int
+memalloc_apply_gfp_scope(gfp_t gfp_mask)
+{
+	unsigned int flags = 0;
+
+	if (!gfpflags_allow_blocking(gfp_mask))
+		flags = memalloc_noreclaim_save();
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		flags = memalloc_nofs_save();
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
+		flags = memalloc_noio_save();
+
+	/* 0 - no scope applied. */
+	return flags;
+}
+
+void
+memalloc_restore_scope(unsigned int flags)
+{
+	if (flags)
+		memalloc_flags_restore(flags);
+}
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, unsigned int page_shift,
 				 int node)
@@ -3732,6 +3768,10 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 
 	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
 
+	/* __GFP_NOFAIL and "noblock" flags are mutually exclusive. */
+	if (!gfpflags_allow_blocking(gfp_mask))
+		nofail = false;
+
 	if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
 		gfp_mask |= __GFP_HIGHMEM;
 
@@ -3797,22 +3837,14 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	 * page tables allocations ignore external gfp mask, enforce it
 	 * by the scope API
 	 */
-	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
-		flags = memalloc_nofs_save();
-	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
-		flags = memalloc_noio_save();
-
+	flags = memalloc_apply_gfp_scope(gfp_mask);
 	do {
 		ret = vmap_pages_range(addr, addr + size, prot, area->pages,
 			page_shift);
 		if (nofail && (ret < 0))
 			schedule_timeout_uninterruptible(1);
 	} while (nofail && (ret < 0));
-
-	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
-		memalloc_nofs_restore(flags);
-	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
-		memalloc_noio_restore(flags);
+	memalloc_restore_scope(flags);
 
 	if (ret < 0) {
 		warn_alloc(gfp_mask, NULL,
-- 
2.47.3




* [PATCH v4 06/10] mm/kasan: Support non-blocking GFP in kasan_populate_vmalloc()
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki,
	Andrey Ryabinin, Alexander Potapenko

A "gfp_mask" is already passed to kasan_populate_vmalloc() as
an argument to respect GFPs from callers and KASAN uses it for
its internal allocations.

But the apply_to_page_range() function ignores GFP flags due to
a hard-coded mask.

Wrap the call with memalloc_apply_gfp_scope()/memalloc_restore_scope()
so that non-blocking GFP flags (GFP_ATOMIC, GFP_NOWAIT) are respected.

Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/kasan/shadow.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 5d2a876035d6..a30d84bfdd52 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -377,18 +377,10 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_
 		 * page tables allocations ignore external gfp mask, enforce it
 		 * by the scope API
 		 */
-		if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
-			flags = memalloc_nofs_save();
-		else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
-			flags = memalloc_noio_save();
-
+		flags = memalloc_apply_gfp_scope(gfp_mask);
 		ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
 					  kasan_populate_vmalloc_pte, &data);
-
-		if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
-			memalloc_nofs_restore(flags);
-		else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
-			memalloc_noio_restore(flags);
+		memalloc_restore_scope(flags);
 
 		___free_pages_bulk(data.pages, nr_pages);
 		if (ret)
-- 
2.47.3




* [PATCH v4 07/10] kmsan: Remove hard-coded GFP_KERNEL flags
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki,
	Alexander Potapenko, Marco Elver

kmsan_vmap_pages_range_noflush() allocates its temporary
s_pages/o_pages arrays with GFP_KERNEL, which may sleep. This is
inconsistent with vmalloc(), which will support non-blocking
requests later in this series.

Plumb gfp_mask through kmsan_vmap_pages_range_noflush() so it can
use the caller's flags for these internal allocations.

Please note, the subsequent __vmap_pages_range_noflush() still uses
GFP_KERNEL and can sleep. If a caller runs under reclaim constraints
where sleeping is forbidden, it must establish the appropriate
memalloc scope.

Cc: Alexander Potapenko <glider@google.com>
Cc: Marco Elver <elver@google.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/kmsan.h |  6 ++++--
 mm/internal.h         |  4 ++--
 mm/kmsan/shadow.c     |  6 +++---
 mm/percpu-vm.c        |  2 +-
 mm/vmalloc.c          | 26 +++++++++++++++++---------
 5 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index f2fd221107bb..7da9fd506b39 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -133,6 +133,7 @@ void kmsan_kfree_large(const void *ptr);
  * @prot:	page protection flags used for vmap.
  * @pages:	array of pages.
  * @page_shift:	page_shift passed to vmap_range_noflush().
+ * @gfp_mask:	gfp mask used for KMSAN's internal allocations.
  *
  * KMSAN maps shadow and origin pages of @pages into contiguous ranges in
  * vmalloc metadata address range. Returns 0 on success, callers must check
@@ -142,7 +143,8 @@ int __must_check kmsan_vmap_pages_range_noflush(unsigned long start,
 						unsigned long end,
 						pgprot_t prot,
 						struct page **pages,
-						unsigned int page_shift);
+						unsigned int page_shift,
+						gfp_t gfp_mask);
 
 /**
  * kmsan_vunmap_kernel_range_noflush() - Notify KMSAN about a vunmap.
@@ -347,7 +349,7 @@ static inline void kmsan_kfree_large(const void *ptr)
 
 static inline int __must_check kmsan_vmap_pages_range_noflush(
 	unsigned long start, unsigned long end, pgprot_t prot,
-	struct page **pages, unsigned int page_shift)
+	struct page **pages, unsigned int page_shift, gfp_t gfp_mask)
 {
 	return 0;
 }
diff --git a/mm/internal.h b/mm/internal.h
index 1561fc2ff5b8..e623c8103358 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1355,7 +1355,7 @@ size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
 #ifdef CONFIG_MMU
 void __init vmalloc_init(void);
 int __must_check vmap_pages_range_noflush(unsigned long addr, unsigned long end,
-                pgprot_t prot, struct page **pages, unsigned int page_shift);
+	pgprot_t prot, struct page **pages, unsigned int page_shift, gfp_t gfp_mask);
 unsigned int get_vm_area_page_order(struct vm_struct *vm);
 #else
 static inline void vmalloc_init(void)
@@ -1364,7 +1364,7 @@ static inline void vmalloc_init(void)
 
 static inline
 int __must_check vmap_pages_range_noflush(unsigned long addr, unsigned long end,
-                pgprot_t prot, struct page **pages, unsigned int page_shift)
+	pgprot_t prot, struct page **pages, unsigned int page_shift, gfp_t gfp_mask)
 {
 	return -EINVAL;
 }
diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c
index 54f3c3c962f0..3cd733663100 100644
--- a/mm/kmsan/shadow.c
+++ b/mm/kmsan/shadow.c
@@ -215,7 +215,7 @@ void kmsan_free_page(struct page *page, unsigned int order)
 
 int kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end,
 				   pgprot_t prot, struct page **pages,
-				   unsigned int page_shift)
+				   unsigned int page_shift, gfp_t gfp_mask)
 {
 	unsigned long shadow_start, origin_start, shadow_end, origin_end;
 	struct page **s_pages, **o_pages;
@@ -230,8 +230,8 @@ int kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end,
 		return 0;
 
 	nr = (end - start) / PAGE_SIZE;
-	s_pages = kcalloc(nr, sizeof(*s_pages), GFP_KERNEL);
-	o_pages = kcalloc(nr, sizeof(*o_pages), GFP_KERNEL);
+	s_pages = kcalloc(nr, sizeof(*s_pages), gfp_mask);
+	o_pages = kcalloc(nr, sizeof(*o_pages), gfp_mask);
 	if (!s_pages || !o_pages) {
 		err = -ENOMEM;
 		goto ret;
diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
index cd69caf6aa8d..4f5937090590 100644
--- a/mm/percpu-vm.c
+++ b/mm/percpu-vm.c
@@ -194,7 +194,7 @@ static int __pcpu_map_pages(unsigned long addr, struct page **pages,
 			    int nr_pages)
 {
 	return vmap_pages_range_noflush(addr, addr + (nr_pages << PAGE_SHIFT),
-					PAGE_KERNEL, pages, PAGE_SHIFT);
+			PAGE_KERNEL, pages, PAGE_SHIFT, GFP_KERNEL);
 }
 
 /**
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d8bcd87239b5..d7e7049e01f8 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -671,16 +671,28 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 }
 
 int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
-		pgprot_t prot, struct page **pages, unsigned int page_shift)
+		pgprot_t prot, struct page **pages, unsigned int page_shift,
+		gfp_t gfp_mask)
 {
 	int ret = kmsan_vmap_pages_range_noflush(addr, end, prot, pages,
-						 page_shift);
+						page_shift, gfp_mask);
 
 	if (ret)
 		return ret;
 	return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
 }
 
+static int __vmap_pages_range(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, unsigned int page_shift,
+		gfp_t gfp_mask)
+{
+	int err;
+
+	err = vmap_pages_range_noflush(addr, end, prot, pages, page_shift, gfp_mask);
+	flush_cache_vmap(addr, end);
+	return err;
+}
+
 /**
  * vmap_pages_range - map pages to a kernel virtual address
  * @addr: start of the VM area to map
@@ -696,11 +708,7 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 int vmap_pages_range(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
-	int err;
-
-	err = vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
-	flush_cache_vmap(addr, end);
-	return err;
+	return __vmap_pages_range(addr, end, prot, pages, page_shift, GFP_KERNEL);
 }
 
 static int check_sparse_vm_area(struct vm_struct *area, unsigned long start,
@@ -3839,8 +3847,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	 */
 	flags = memalloc_apply_gfp_scope(gfp_mask);
 	do {
-		ret = vmap_pages_range(addr, addr + size, prot, area->pages,
-			page_shift);
+		ret = __vmap_pages_range(addr, addr + size, prot, area->pages,
+				page_shift, nested_gfp);
 		if (nofail && (ret < 0))
 			schedule_timeout_uninterruptible(1);
 	} while (nofail && (ret < 0));
-- 
2.47.3




* [PATCH v4 08/10] mm: Skip might_alloc() warnings when PF_MEMALLOC is set
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki, Michal Hocko

might_alloc() catches invalid blocking allocations in contexts
where sleeping is not allowed.

However when PF_MEMALLOC is set, the page allocator already skips
reclaim and other blocking paths. In such cases, a blocking gfp_mask
does not actually lead to blocking, so triggering might_alloc() splats
is misleading.

Adjust might_alloc() to skip warnings when the current task has
PF_MEMALLOC set, matching the allocator's actual blocking behaviour.
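
A minimal illustration (hypothetical code, not from this series) of
the case this patch silences: a blocking gfp_mask used inside an
atomic, PF_MEMALLOC-scoped section, as vmalloc now sets up for
non-blocking requests:

<snip>
#include <linux/preempt.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

/*
 * Per the changelog, the allocator will not actually block here
 * because PF_MEMALLOC skips reclaim, so a might_alloc() splat
 * for the nested GFP_KERNEL request would be a false positive.
 */
static void *alloc_under_memalloc(size_t size)
{
	unsigned int flags;
	void *p;

	preempt_disable();
	flags = memalloc_noreclaim_save();	/* sets PF_MEMALLOC */
	p = kmalloc(size, GFP_KERNEL);		/* skips direct reclaim */
	memalloc_noreclaim_restore(flags);
	preempt_enable();

	return p;
}
<snip>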

Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/sched/mm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 0232d983b715..a74582aed747 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -318,6 +318,9 @@ static inline void might_alloc(gfp_t gfp_mask)
 	fs_reclaim_acquire(gfp_mask);
 	fs_reclaim_release(gfp_mask);
 
+	if (current->flags & PF_MEMALLOC)
+		return;
+
 	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
 }
 
-- 
2.47.3




* [PATCH v4 09/10] mm/vmalloc: Update __vmalloc_node_range() documentation
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki, Michal Hocko

The __vmalloc() function now supports non-blocking flags such as
GFP_ATOMIC and GFP_NOWAIT. Update the documentation accordingly.

Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d7e7049e01f8..9a63c91c6150 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3881,19 +3881,20 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
  * @caller:		  caller's return address
  *
  * Allocate enough pages to cover @size from the page level
- * allocator with @gfp_mask flags. Please note that the full set of gfp
- * flags are not supported. GFP_KERNEL, GFP_NOFS and GFP_NOIO are all
- * supported.
- * Zone modifiers are not supported. From the reclaim modifiers
- * __GFP_DIRECT_RECLAIM is required (aka GFP_NOWAIT is not supported)
- * and only __GFP_NOFAIL is supported (i.e. __GFP_NORETRY and
- * __GFP_RETRY_MAYFAIL are not supported).
+ * allocator with @gfp_mask flags and map them into a contiguous
+ * virtual range with protection @prot.
  *
- * __GFP_NOWARN can be used to suppress failures messages.
+ * Supported GFP classes: %GFP_KERNEL, %GFP_ATOMIC, %GFP_NOWAIT,
+ * %GFP_NOFS and %GFP_NOIO. Zone modifiers are not supported.
+ * Please note %GFP_ATOMIC and %GFP_NOWAIT are supported only
+ * by __vmalloc().
  *
- * Map them into contiguous kernel virtual space, using a pagetable
- * protection of @prot.
+ * Retry modifiers: only %__GFP_NOFAIL is supported; %__GFP_NORETRY
+ * and %__GFP_RETRY_MAYFAIL are not supported.
  *
+ * %__GFP_NOWARN can be used to suppress failure messages.
+ *
+ * Cannot be called from interrupt or NMI context.
  * Return: the address of the area or %NULL on failure
  */
 void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
-- 
2.47.3




* [PATCH v4 10/10] mm: kvmalloc: Add non-blocking support for vmalloc
From: Uladzislau Rezki (Sony) @ 2025-10-07 12:20 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Baoquan He, LKML, Uladzislau Rezki, Michal Hocko

Extend __kvmalloc_node_noprof() to handle non-blocking GFP flags
(GFP_NOWAIT and GFP_ATOMIC). Previously such flags were rejected,
returning NULL. With this change:

- kvmalloc() can fall back to vmalloc() in non-blocking contexts;
- for non-blocking allocations the VM_ALLOW_HUGE_VMAP option is
  disabled, since the huge mapping path still contains might_sleep();
- the documentation is updated to reflect that GFP_NOWAIT and
  GFP_ATOMIC are now supported.
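
A minimal usage sketch (hypothetical caller) of what this change
permits:

<snip>
#include <linux/slab.h>

/*
 * kmalloc is attempted first; with this patch the vmalloc
 * fallback is taken for non-blocking requests as well, without
 * VM_ALLOW_HUGE_VMAP.
 */
static void *alloc_table_nowait(size_t size)
{
	return kvmalloc(size, GFP_NOWAIT);
}
<snip>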

Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/slub.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 584a5ff1828b..3de0719e24e9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -7018,7 +7018,7 @@ static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
  * Uses kmalloc to get the memory but if the allocation fails then falls back
  * to the vmalloc allocator. Use kvfree for freeing the memory.
  *
- * GFP_NOWAIT and GFP_ATOMIC are not supported, neither is the __GFP_NORETRY modifier.
+ * GFP_NOWAIT and GFP_ATOMIC are supported; the __GFP_NORETRY modifier is not.
  * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
  * preferable to the vmalloc fallback, due to visible performance drawbacks.
  *
@@ -7027,6 +7027,7 @@ static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
 void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
 			     gfp_t flags, int node)
 {
+	bool allow_block;
 	void *ret;
 
 	/*
@@ -7039,16 +7040,22 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
 	if (ret || size <= PAGE_SIZE)
 		return ret;
 
-	/* non-sleeping allocations are not supported by vmalloc */
-	if (!gfpflags_allow_blocking(flags))
-		return NULL;
-
 	/* Don't even allow crazy sizes */
 	if (unlikely(size > INT_MAX)) {
 		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
 		return NULL;
 	}
 
+	/*
+	 * For non-blocking requests, VM_ALLOW_HUGE_VMAP is not used
+	 * because the huge-mapping path in vmalloc contains at
+	 * least one might_sleep() call.
+	 *
+	 * TODO: Revise huge-mapping path to support non-blocking
+	 * flags.
+	 */
+	allow_block = gfpflags_allow_blocking(flags);
+
 	/*
 	 * kvmalloc() can always use VM_ALLOW_HUGE_VMAP,
 	 * since the callers already cannot assume anything
@@ -7056,7 +7063,7 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), unsigned long align,
 	 * protection games.
 	 */
 	return __vmalloc_node_range_noprof(size, align, VMALLOC_START, VMALLOC_END,
-			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
+			flags, PAGE_KERNEL, allow_block ? VM_ALLOW_HUGE_VMAP : 0,
 			node, __builtin_return_address(0));
 }
 EXPORT_SYMBOL(__kvmalloc_node_noprof);
-- 
2.47.3




* Re: [PATCH v4 07/10] kmsan: Remove hard-coded GFP_KERNEL flags
From: Alexander Potapenko @ 2025-10-07 12:37 UTC (permalink / raw)
  To: Uladzislau Rezki (Sony)
  Cc: linux-mm, Andrew Morton, Michal Hocko, Baoquan He, LKML, Marco Elver

On Tue, Oct 7, 2025 at 2:20 PM Uladzislau Rezki (Sony) <urezki@gmail.com> wrote:
>
> kmsan_vmap_pages_range_noflush() allocates its temporary
> s_pages/o_pages arrays with GFP_KERNEL, which may sleep. This is
> inconsistent with vmalloc(), which will support non-blocking
> requests later in this series.
>
> Plumb gfp_mask through kmsan_vmap_pages_range_noflush() so it can
> use the caller's flags for these internal allocations.
>
> Please note, the subsequent __vmap_pages_range_noflush() still uses
> GFP_KERNEL and can sleep. If a caller runs under reclaim constraints
> where sleeping is forbidden, it must establish the appropriate
> memalloc scope.
>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Marco Elver <elver@google.com>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

Thank you!



* Re: [PATCH v4 00/10] __vmalloc()/kvmalloc() and no-block support (v4)
From: Andrew Morton @ 2025-10-07 21:31 UTC (permalink / raw)
  To: Uladzislau Rezki (Sony); +Cc: linux-mm, Michal Hocko, Baoquan He, LKML

On Tue,  7 Oct 2025 14:20:25 +0200 "Uladzislau Rezki (Sony)" <urezki@gmail.com> wrote:

> This is v4, based on the next-20250929 branch. I consider it complete;
> if there are no objections, I would appreciate it being picked up.
> 
> https://lore.kernel.org/all/20250704152537.55724-1-urezki@gmail.com/
> https://lkml.org/lkml/2025/8/7/332
> https://lore.kernel.org/all/20251001192647.195204-1-urezki@gmail.com/

It would be nice (and conventional) to have a [0/N]
introduction/overview, please.  I went back through the previous
iterations and could have kind of used
https://lkml.org/lkml/2025/8/7/332, but that doesn't look very
applicable.




* Re: [PATCH v4 00/10] __vmalloc()/kvmalloc() and no-block support (v4)
From: Uladzislau Rezki @ 2025-10-08 12:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Uladzislau Rezki (Sony), linux-mm, Michal Hocko, Baoquan He, LKML

On Tue, Oct 07, 2025 at 02:31:26PM -0700, Andrew Morton wrote:
> On Tue,  7 Oct 2025 14:20:25 +0200 "Uladzislau Rezki (Sony)" <urezki@gmail.com> wrote:
> 
> > This is v4, based on the next-20250929 branch. I consider it complete;
> > if there are no objections, I would appreciate it being picked up.
> > 
> > https://lore.kernel.org/all/20250704152537.55724-1-urezki@gmail.com/
> > https://lkml.org/lkml/2025/8/7/332
> > https://lore.kernel.org/all/20251001192647.195204-1-urezki@gmail.com/
> 
> It would be nice (and conventional) to have a [0/N]
> introduction/overview, please.  I went back through the previous
> iterations and could have kind of used
> https://lkml.org/lkml/2025/8/7/332, but that doesn't look very
> applicable.
> 
OK, next time I will number the links so it is easier to keep track of them.

Thank you!

--
Uladzislau Rezki


