linux-mm.kvack.org archive mirror
* [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements
@ 2025-04-18 22:36 Baoquan He
  2025-04-18 22:36 ` [PATCH v2 1/5] mm/vmalloc.c: change purge_nodes as local static variable Baoquan He
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Baoquan He @ 2025-04-18 22:36 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, urezki, shivankg, vishal.moola, linux-kernel, Baoquan He

These were made from code inspection in mm/vmalloc.c.

v1->v2:
=======
- In patch 3:
  - made change to improve code according to Uladzislau's suggestion;
  - use WRITE_ONCE() to assign the value to vn->pool[i].len finally,
    according to Shivank's suggestion.
- In patch 5:
  - add back the WARN_ON_ONCE() on the returned value from the va_clip()
    invocation, and also add back the code comment. These were pointed
    out by Uladzislau.

- Add reviewers' tags from Uladzislau, Shivank and Vishal. Per his
  comment, Shivank's tag is only added to patches 1, 2 and 4, because
  patches 3 and 5 changed in v2.

Baoquan He (5):
  mm/vmalloc.c: change purge_nodes as local static variable
  mm/vmalloc.c: find the vmap of vmap_nodes in reverse order
  mm/vmalloc.c: optimize code in decay_va_pool_node() a little bit
  mm/vmalloc: optimize function vm_unmap_aliases()
  mm/vmalloc.c: return explicit error value in alloc_vmap_area()

 mm/vmalloc.c | 61 ++++++++++++++++++++++++----------------------------
 1 file changed, 28 insertions(+), 33 deletions(-)

-- 
2.41.0




* [PATCH v2 1/5] mm/vmalloc.c: change purge_nodes as local static variable
  2025-04-18 22:36 [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Baoquan He
@ 2025-04-18 22:36 ` Baoquan He
  2025-04-18 22:36 ` [PATCH v2 2/5] mm/vmalloc.c: find the vmap of vmap_nodes in reverse order Baoquan He
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Baoquan He @ 2025-04-18 22:36 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, urezki, shivankg, vishal.moola, linux-kernel, Baoquan He

Static variable 'purge_nodes' is defined at file scope, while it is
only used in function __purge_vmap_area_lazy(). It mainly serves to
avoid repeated memory allocation, especially when NR_CPUS is big.

A local static variable satisfies the same demand and improves code
readability. Hence move its definition into __purge_vmap_area_lazy().
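
The C semantics being relied on can be sketched as follows (illustrative
names only, not the kernel code): a function-local static has static
storage duration, so it survives across calls and is never reallocated,
yet is visible only inside the function that uses it.

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_NR_CPUS 4096	/* stand-in for a large NR_CPUS */

size_t purge_demo(void)
{
	/* One instance for the whole program, scoped to this function.
	 * With DEMO_NR_CPUS cpus, a cpumask-like bitmap is 512 bytes:
	 * too big to want on the stack, and with this placement there
	 * is no reason to put it at file scope either. */
	static unsigned char purge_mask[DEMO_NR_CPUS / 8];
	static unsigned int calls;

	(void)purge_mask;
	calls++;		/* state persists across calls */
	return calls;
}
```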

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Shivank Garg <shivankg@amd.com>
---
 mm/vmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3ed720a787ec..38d8d8d60985 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2111,7 +2111,6 @@ static DEFINE_MUTEX(vmap_purge_lock);
 
 /* for per-CPU blocks */
 static void purge_fragmented_blocks_allcpus(void);
-static cpumask_t purge_nodes;
 
 static void
 reclaim_list_global(struct list_head *head)
@@ -2244,6 +2243,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
 {
 	unsigned long nr_purged_areas = 0;
 	unsigned int nr_purge_helpers;
+	static cpumask_t purge_nodes;
 	unsigned int nr_purge_nodes;
 	struct vmap_node *vn;
 	int i;
-- 
2.41.0




* [PATCH v2 2/5] mm/vmalloc.c: find the vmap of vmap_nodes in reverse order
  2025-04-18 22:36 [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Baoquan He
  2025-04-18 22:36 ` [PATCH v2 1/5] mm/vmalloc.c: change purge_nodes as local static variable Baoquan He
@ 2025-04-18 22:36 ` Baoquan He
  2025-04-18 22:36 ` [PATCH v2 3/5] mm/vmalloc.c: optimize code in decay_va_pool_node() a little bit Baoquan He
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Baoquan He @ 2025-04-18 22:36 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, urezki, shivankg, vishal.moola, linux-kernel, Baoquan He

When looking up a VA in vn->busy, if the VA spans several nodes and the
passed addr is not the same as va->va_start, we should scan the nodes in
reverse order, because the starting address of the VA must be smaller
than the passed addr if the addr really resides in that VA.

E.g. on a system with nr_vmap_nodes=100:

     <----va---->
 -|-----|-----|-----|-----|-----|-----|-----|-----|-----|-
    ...   n-1   n    n+1   n+2   ...   99     0     1

The VA resides in node 'n' whereas it spans 'n', 'n+1' and 'n+2'. If the
passed addr is within 'n+2', we should try nodes backwards, 'n+1' then
'n', and succeed very soon.

Meanwhile we still need to loop around, because a VA could span from
node 'n' through node 99 and wrap to node 0 and node 1.

Anyway, changing the lookup to reverse order can improve efficiency on
systems with many CPUs.
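
The backward wrap-around walk the patch switches to can be sketched in
plain C (walk_reverse is an illustrative helper, not kernel code):

```c
#include <assert.h>

/* Visit all nr_nodes node indices starting at 'start' and walking
 * backwards, wrapping from 0 to nr_nodes - 1, exactly as the
 * (i + nr_vmap_nodes - 1) % nr_vmap_nodes step in the patch does.
 * The visit order is recorded in 'order'; the visit count is returned. */
int walk_reverse(int start, int nr_nodes, int *order)
{
	int i = start, n = 0;

	do {
		order[n++] = i;
	} while ((i = (i + nr_nodes - 1) % nr_nodes) != start);

	return n;	/* always nr_nodes: every node is tried once */
}
```

For start=2 and nr_nodes=5 the walk visits 2, 1, 0, 4, 3: the nodes just
below the starting one come first, which is exactly the property the
commit message wants when addr lies past va->va_start.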

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Shivank Garg <shivankg@amd.com>
---
 mm/vmalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 38d8d8d60985..76ab4d3ce616 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2421,7 +2421,7 @@ struct vmap_area *find_vmap_area(unsigned long addr)
 
 		if (va)
 			return va;
-	} while ((i = (i + 1) % nr_vmap_nodes) != j);
+	} while ((i = (i + nr_vmap_nodes - 1) % nr_vmap_nodes) != j);
 
 	return NULL;
 }
@@ -2447,7 +2447,7 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 
 		if (va)
 			return va;
-	} while ((i = (i + 1) % nr_vmap_nodes) != j);
+	} while ((i = (i + nr_vmap_nodes - 1) % nr_vmap_nodes) != j);
 
 	return NULL;
 }
-- 
2.41.0




* [PATCH v2 3/5] mm/vmalloc.c: optimize code in decay_va_pool_node() a little bit
  2025-04-18 22:36 [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Baoquan He
  2025-04-18 22:36 ` [PATCH v2 1/5] mm/vmalloc.c: change purge_nodes as local static variable Baoquan He
  2025-04-18 22:36 ` [PATCH v2 2/5] mm/vmalloc.c: find the vmap of vmap_nodes in reverse order Baoquan He
@ 2025-04-18 22:36 ` Baoquan He
  2025-04-18 22:36 ` [PATCH v2 4/5] mm/vmalloc: optimize function vm_unmap_aliases() Baoquan He
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Baoquan He @ 2025-04-18 22:36 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, urezki, shivankg, vishal.moola, linux-kernel, Baoquan He

When purging lazily freed vmap areas, VAs stored in vn->pool[] are also
moved into the free vmap tree, partially or completely; that is done in
decay_va_pool_node(). There, for each pool of a node, the whole list is
detached from the pool for handling, at which point the pool is empty.
So it is not necessary to update the pool size each time one VA is
removed and added into the free vmap tree.

Change the code to update the pool size once, when attaching the list
back to the pool.
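
The new accounting in the diff below can be sketched on its own
(decay_pool_demo is an illustrative stand-in, not the kernel function):
the number of entries to decay and the remaining length are computed up
front, so the length is written back once instead of being decremented
per removed entry.

```c
#include <assert.h>

/* Mirror the arithmetic of the patched decay_va_pool_node():
 * full decay empties the pool; partial decay removes ~25% of it.
 * Returns the length to write back once; *n_decay_out is how many
 * entries get merged into the free vmap tree. */
unsigned long decay_pool_demo(unsigned long len, int full_decay,
			      unsigned long *n_decay_out)
{
	unsigned long pool_len, n_decay;

	pool_len = n_decay = len;

	/* Decay a pool by ~25% out of left objects, or all of them. */
	if (!full_decay)
		n_decay >>= 2;
	pool_len -= n_decay;

	*n_decay_out = n_decay;
	return pool_len;
}
```

E.g. a pool of 8 entries decays by 2 on a partial pass (6 remain) and by
all 8 on a full pass, and in either case the pool length is stored once.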

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/vmalloc.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 76ab4d3ce616..cd654cc35d2b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2133,7 +2133,7 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
 	LIST_HEAD(decay_list);
 	struct rb_root decay_root = RB_ROOT;
 	struct vmap_area *va, *nva;
-	unsigned long n_decay;
+	unsigned long n_decay, pool_len;
 	int i;
 
 	for (i = 0; i < MAX_VA_SIZE_PAGES; i++) {
@@ -2147,22 +2147,20 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
 		list_replace_init(&vn->pool[i].head, &tmp_list);
 		spin_unlock(&vn->pool_lock);
 
-		if (full_decay)
-			WRITE_ONCE(vn->pool[i].len, 0);
+		pool_len = n_decay = vn->pool[i].len;
+		WRITE_ONCE(vn->pool[i].len, 0);
 
 		/* Decay a pool by ~25% out of left objects. */
-		n_decay = vn->pool[i].len >> 2;
+		if (!full_decay)
+			n_decay >>= 2;
+		pool_len -= n_decay;
 
 		list_for_each_entry_safe(va, nva, &tmp_list, list) {
+			if (!n_decay--)
+				break;
+
 			list_del_init(&va->list);
 			merge_or_add_vmap_area(va, &decay_root, &decay_list);
-
-			if (!full_decay) {
-				WRITE_ONCE(vn->pool[i].len, vn->pool[i].len - 1);
-
-				if (!--n_decay)
-					break;
-			}
 		}
 
 		/*
@@ -2171,9 +2169,10 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
 		 * can populate the pool therefore a simple list replace
 		 * operation takes place here.
 		 */
-		if (!full_decay && !list_empty(&tmp_list)) {
+		if (!list_empty(&tmp_list)) {
 			spin_lock(&vn->pool_lock);
 			list_replace_init(&tmp_list, &vn->pool[i].head);
+			WRITE_ONCE(vn->pool[i].len, pool_len);
 			spin_unlock(&vn->pool_lock);
 		}
 	}
-- 
2.41.0




* [PATCH v2 4/5] mm/vmalloc: optimize function vm_unmap_aliases()
  2025-04-18 22:36 [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Baoquan He
                   ` (2 preceding siblings ...)
  2025-04-18 22:36 ` [PATCH v2 3/5] mm/vmalloc.c: optimize code in decay_va_pool_node() a little bit Baoquan He
@ 2025-04-18 22:36 ` Baoquan He
  2025-04-18 22:36 ` [PATCH v2 5/5] mm/vmalloc.c: return explicit error value in alloc_vmap_area() Baoquan He
  2025-04-22  8:53 ` [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Uladzislau Rezki
  5 siblings, 0 replies; 8+ messages in thread
From: Baoquan He @ 2025-04-18 22:36 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, urezki, shivankg, vishal.moola, linux-kernel, Baoquan He

Remove unneeded local variables and replace them with values.

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Shivank Garg <shivankg@amd.com>
---
 mm/vmalloc.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index cd654cc35d2b..39e043ba969b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2915,10 +2915,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
  */
 void vm_unmap_aliases(void)
 {
-	unsigned long start = ULONG_MAX, end = 0;
-	int flush = 0;
-
-	_vm_unmap_aliases(start, end, flush);
+	_vm_unmap_aliases(ULONG_MAX, 0, 0);
 }
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
-- 
2.41.0




* [PATCH v2 5/5] mm/vmalloc.c: return explicit error value in alloc_vmap_area()
  2025-04-18 22:36 [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Baoquan He
                   ` (3 preceding siblings ...)
  2025-04-18 22:36 ` [PATCH v2 4/5] mm/vmalloc: optimize function vm_unmap_aliases() Baoquan He
@ 2025-04-18 22:36 ` Baoquan He
  2025-04-21  4:47   ` Shivank Garg
  2025-04-22  8:53 ` [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Uladzislau Rezki
  5 siblings, 1 reply; 8+ messages in thread
From: Baoquan He @ 2025-04-18 22:36 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, urezki, shivankg, vishal.moola, linux-kernel, Baoquan He

In alloc_vmap_area(), the upper bound 'vend' is returned to indicate
whether the allocation succeeded or failed. That is not very clear.

Change it to return explicit error values, and check those to decide
whether the allocation succeeded.
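
The convention the patch adopts can be sketched as follows. The kernel's
IS_ERR_VALUE() (include/linux/err.h) treats the top MAX_ERRNO (4095)
values of an unsigned long as negated errno codes, so a single unsigned
long can carry either a valid address or an error; the demo_* names
below are simplified stand-ins, not the kernel definitions.

```c
#include <assert.h>

#define DEMO_MAX_ERRNO	4095UL
#define demo_is_err_value(x) \
	((unsigned long)(x) >= (unsigned long)-DEMO_MAX_ERRNO)

#define DEMO_ENOMEM	12
#define DEMO_ERANGE	34

/* An alloc-style helper returning either an address or an error
 * value packed into the same unsigned long, as the patched
 * __alloc_vmap_area() now does. */
unsigned long demo_alloc(unsigned long size, unsigned long vend)
{
	unsigned long addr = 0x1000;	/* pretend allocation result */

	if (addr + size > vend)
		return (unsigned long)-DEMO_ERANGE;
	return addr;
}
```

Because no valid address falls in the top 4095 bytes of the address
space, callers can distinguish the two cases without a sentinel like
'vend'.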

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/vmalloc.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 39e043ba969b..0251402ca5b9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1698,7 +1698,7 @@ va_clip(struct rb_root *root, struct list_head *head,
 			 */
 			lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
 			if (!lva)
-				return -1;
+				return -ENOMEM;
 		}
 
 		/*
@@ -1712,7 +1712,7 @@ va_clip(struct rb_root *root, struct list_head *head,
 		 */
 		va->va_start = nva_start_addr + size;
 	} else {
-		return -1;
+		return -EINVAL;
 	}
 
 	if (type != FL_FIT_TYPE) {
@@ -1741,19 +1741,19 @@ va_alloc(struct vmap_area *va,
 
 	/* Check the "vend" restriction. */
 	if (nva_start_addr + size > vend)
-		return vend;
+		return -ERANGE;
 
 	/* Update the free vmap_area. */
 	ret = va_clip(root, head, va, nva_start_addr, size);
 	if (WARN_ON_ONCE(ret))
-		return vend;
+		return ret;
 
 	return nva_start_addr;
 }
 
 /*
  * Returns a start address of the newly allocated area, if success.
- * Otherwise a vend is returned that indicates failure.
+ * Otherwise an error value is returned that indicates failure.
  */
 static __always_inline unsigned long
 __alloc_vmap_area(struct rb_root *root, struct list_head *head,
@@ -1778,14 +1778,13 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 
 	va = find_vmap_lowest_match(root, size, align, vstart, adjust_search_size);
 	if (unlikely(!va))
-		return vend;
+		return -ENOENT;
 
 	nva_start_addr = va_alloc(va, root, head, size, align, vstart, vend);
-	if (nva_start_addr == vend)
-		return vend;
 
 #if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
-	find_vmap_lowest_match_check(root, head, size, align);
+	if (!IS_ERR_VALUE(nva_start_addr))
+		find_vmap_lowest_match_check(root, head, size, align);
 #endif
 
 	return nva_start_addr;
@@ -1915,7 +1914,7 @@ node_alloc(unsigned long size, unsigned long align,
 	struct vmap_area *va;
 
 	*vn_id = 0;
-	*addr = vend;
+	*addr = -EINVAL;
 
 	/*
 	 * Fallback to a global heap if not vmalloc or there
@@ -1995,20 +1994,20 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	}
 
 retry:
-	if (addr == vend) {
+	if (IS_ERR_VALUE(addr)) {
 		preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node);
 		addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
 			size, align, vstart, vend);
 		spin_unlock(&free_vmap_area_lock);
 	}
 
-	trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
+	trace_alloc_vmap_area(addr, size, align, vstart, vend, IS_ERR_VALUE(addr));
 
 	/*
-	 * If an allocation fails, the "vend" address is
+	 * If an allocation fails, the error value is
 	 * returned. Therefore trigger the overflow path.
 	 */
-	if (unlikely(addr == vend))
+	if (IS_ERR_VALUE(addr))
 		goto overflow;
 
 	va->va_start = addr;
-- 
2.41.0




* Re: [PATCH v2 5/5] mm/vmalloc.c: return explicit error value in alloc_vmap_area()
  2025-04-18 22:36 ` [PATCH v2 5/5] mm/vmalloc.c: return explicit error value in alloc_vmap_area() Baoquan He
@ 2025-04-21  4:47   ` Shivank Garg
  0 siblings, 0 replies; 8+ messages in thread
From: Shivank Garg @ 2025-04-21  4:47 UTC (permalink / raw)
  To: Baoquan He, linux-mm; +Cc: akpm, urezki, vishal.moola, linux-kernel

On 4/19/2025 4:06 AM, Baoquan He wrote:
> In alloc_vmap_area(), the upper bound 'vend' is returned to indicate
> whether the allocation succeeded or failed. That is not very clear.
> 
> Change it to return explicit error values, and check those to decide
> whether the allocation succeeded.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/vmalloc.c | 27 +++++++++++++--------------
>  1 file changed, 13 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 39e043ba969b..0251402ca5b9 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1698,7 +1698,7 @@ va_clip(struct rb_root *root, struct list_head *head,
>  			 */
>  			lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
>  			if (!lva)
> -				return -1;
> +				return -ENOMEM;
>  		}
>  
>  		/*
> @@ -1712,7 +1712,7 @@ va_clip(struct rb_root *root, struct list_head *head,
>  		 */
>  		va->va_start = nva_start_addr + size;
>  	} else {
> -		return -1;
> +		return -EINVAL;
>  	}
>  
>  	if (type != FL_FIT_TYPE) {
> @@ -1741,19 +1741,19 @@ va_alloc(struct vmap_area *va,
>  
>  	/* Check the "vend" restriction. */
>  	if (nva_start_addr + size > vend)
> -		return vend;
> +		return -ERANGE;
>  
>  	/* Update the free vmap_area. */
>  	ret = va_clip(root, head, va, nva_start_addr, size);
>  	if (WARN_ON_ONCE(ret))
> -		return vend;
> +		return ret;
>  
>  	return nva_start_addr;
>  }
>  
>  /*
>   * Returns a start address of the newly allocated area, if success.
> - * Otherwise a vend is returned that indicates failure.
> + * Otherwise an error value is returned that indicates failure.
>   */
>  static __always_inline unsigned long
>  __alloc_vmap_area(struct rb_root *root, struct list_head *head,
> @@ -1778,14 +1778,13 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
>  
>  	va = find_vmap_lowest_match(root, size, align, vstart, adjust_search_size);
>  	if (unlikely(!va))
> -		return vend;
> +		return -ENOENT;
>  
>  	nva_start_addr = va_alloc(va, root, head, size, align, vstart, vend);
> -	if (nva_start_addr == vend)
> -		return vend;
>  
>  #if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
> -	find_vmap_lowest_match_check(root, head, size, align);
> +	if (!IS_ERR_VALUE(nva_start_addr))
> +		find_vmap_lowest_match_check(root, head, size, align);
>  #endif
>  
>  	return nva_start_addr;
> @@ -1915,7 +1914,7 @@ node_alloc(unsigned long size, unsigned long align,
>  	struct vmap_area *va;
>  
>  	*vn_id = 0;
> -	*addr = vend;
> +	*addr = -EINVAL;
>  
>  	/*
>  	 * Fallback to a global heap if not vmalloc or there
> @@ -1995,20 +1994,20 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	}
>  
>  retry:
> -	if (addr == vend) {
> +	if (IS_ERR_VALUE(addr)) {
>  		preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node);
>  		addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
>  			size, align, vstart, vend);
>  		spin_unlock(&free_vmap_area_lock);
>  	}
>  
> -	trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
> +	trace_alloc_vmap_area(addr, size, align, vstart, vend, IS_ERR_VALUE(addr));
>  
>  	/*
> -	 * If an allocation fails, the "vend" address is
> +	 * If an allocation fails, the error value is
>  	 * returned. Therefore trigger the overflow path.
>  	 */
> -	if (unlikely(addr == vend))
> +	if (IS_ERR_VALUE(addr))
>  		goto overflow;
>  
>  	va->va_start = addr;

Reviewed-by: Shivank Garg <shivankg@amd.com>

Thanks,
Shivank




* Re: [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements
  2025-04-18 22:36 [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Baoquan He
                   ` (4 preceding siblings ...)
  2025-04-18 22:36 ` [PATCH v2 5/5] mm/vmalloc.c: return explicit error value in alloc_vmap_area() Baoquan He
@ 2025-04-22  8:53 ` Uladzislau Rezki
  5 siblings, 0 replies; 8+ messages in thread
From: Uladzislau Rezki @ 2025-04-22  8:53 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-mm, akpm, urezki, shivankg, vishal.moola, linux-kernel

On Sat, Apr 19, 2025 at 06:36:48AM +0800, Baoquan He wrote:
> These were made from code inspection in mm/vmalloc.c.
> 
> v1->v2:
> =======
> - In patch 3:
>   - made change to improve code according to Uladzislau's suggestion;
>   - use WRITE_ONCE() to assign the value to vn->pool[i].len finally,
>     according to Shivank's suggestion.
> - In patch 5:
>   - add back the WARN_ON_ONCE() on the returned value from the va_clip()
>     invocation, and also add back the code comment. These were pointed
>     out by Uladzislau.
> 
> - Add reviewers' tags from Uladzislau, Shivank and Vishal. Per his
>   comment, Shivank's tag is only added to patches 1, 2 and 4, because
>   patches 3 and 5 changed in v2.
> 
> Baoquan He (5):
>   mm/vmalloc.c: change purge_nodes as local static variable
>   mm/vmalloc.c: find the vmap of vmap_nodes in reverse order
>   mm/vmalloc.c: optimize code in decay_va_pool_node() a little bit
>   mm/vmalloc: optimize function vm_unmap_aliases()
>   mm/vmalloc.c: return explicit error value in alloc_vmap_area()
> 
>  mm/vmalloc.c | 61 ++++++++++++++++++++++++----------------------------
>  1 file changed, 28 insertions(+), 33 deletions(-)
> 
> -- 
> 2.41.0
> 
LGTM for whole series:

Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

--
Uladzislau Rezki



end of thread

Thread overview: 8+ messages
2025-04-18 22:36 [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Baoquan He
2025-04-18 22:36 ` [PATCH v2 1/5] mm/vmalloc.c: change purge_nodes as local static variable Baoquan He
2025-04-18 22:36 ` [PATCH v2 2/5] mm/vmalloc.c: find the vmap of vmap_nodes in reverse order Baoquan He
2025-04-18 22:36 ` [PATCH v2 3/5] mm/vmalloc.c: optimize code in decay_va_pool_node() a little bit Baoquan He
2025-04-18 22:36 ` [PATCH v2 4/5] mm/vmalloc: optimize function vm_unmap_aliases() Baoquan He
2025-04-18 22:36 ` [PATCH v2 5/5] mm/vmalloc.c: return explicit error value in alloc_vmap_area() Baoquan He
2025-04-21  4:47   ` Shivank Garg
2025-04-22  8:53 ` [PATCH v2 0/5] mm/vmalloc.c: code cleanup and improvements Uladzislau Rezki
