* patch idea: convert trivial call_rcu users to kfree_rcu
@ 2024-05-27 7:46 Vlastimil Babka
2024-05-27 8:13 ` [cocci] " Julia Lawall
0 siblings, 1 reply; 24+ messages in thread
From: Vlastimil Babka @ 2024-05-27 7:46 UTC (permalink / raw)
To: linux-mm, RCU, cocci
Hi,
one bit from LSF/MM discussions is that there might be call_rcu users with a
callback that only does a kmem_cache_free() to a specific cache. Since SLOB
was removed, it's always ok to use kfree() and thus also kfree_rcu() on
allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
call_rcu() users might be simplified to kfree_rcu(). I found some cases
semi-manually, but I'd expect Coccinelle could help here, so if anyone wants
to take this task, feel free to.
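For illustration, here is a minimal sketch of the pattern in question, using
made-up names (struct foo, foo_cache): a callback that does nothing but return
the object to its cache, plus the call_rcu() site that could instead use the
two-argument kfree_rcu().

<snip>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rcu;		/* embedded rcu_head handed to call_rcu() */
};

static struct kmem_cache *foo_cache;	/* assumed to be created elsewhere */

/* The trivial callback shape: nothing but a kmem_cache_free(). */
static void foo_free_rcu(struct rcu_head *head)
{
	kmem_cache_free(foo_cache, container_of(head, struct foo, rcu));
}

static void foo_release(struct foo *f)
{
	call_rcu(&f->rcu, foo_free_rcu);
	/*
	 * After the conversion the line above becomes
	 *
	 *	kfree_rcu(f, rcu);
	 *
	 * and foo_free_rcu() can be deleted.
	 */
}
<snip>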
Vlastimil
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 7:46 patch idea: convert trivial call_rcu users to kfree_rcu Vlastimil Babka
@ 2024-05-27 8:13 ` Julia Lawall
2024-05-27 19:27 ` Paul E. McKenney
0 siblings, 1 reply; 24+ messages in thread
From: Julia Lawall @ 2024-05-27 8:13 UTC (permalink / raw)
To: Vlastimil Babka; +Cc: linux-mm, RCU, cocci
On Mon, 27 May 2024, Vlastimil Babka wrote:
> Hi,
>
> one bit from LSF/MM discussions is that there might be call_rcu users with a
> callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> call_rcu() users might be simplified to kfree_rcu(). I found some cases
> semi-manually, but I'd expect coccinelle could help here so if anyone wants
> to take this task, feel free to.
Thanks for the suggestion! I will try to look into it.
julia
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 8:13 ` [cocci] " Julia Lawall
@ 2024-05-27 19:27 ` Paul E. McKenney
2024-05-27 19:46 ` Uladzislau Rezki
2024-05-27 20:43 ` Julia Lawall
0 siblings, 2 replies; 24+ messages in thread
From: Paul E. McKenney @ 2024-05-27 19:27 UTC (permalink / raw)
To: Julia Lawall; +Cc: Vlastimil Babka, linux-mm, RCU, cocci
On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
>
>
> On Mon, 27 May 2024, Vlastimil Babka wrote:
>
> > Hi,
> >
> > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > to take this task, feel free to.
>
> Thanks for the suggestion! I will try to look into it.
Thank you both!
Thanx, Paul
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 19:27 ` Paul E. McKenney
@ 2024-05-27 19:46 ` Uladzislau Rezki
2024-05-27 19:51 ` Julia Lawall
2024-05-27 20:43 ` Julia Lawall
1 sibling, 1 reply; 24+ messages in thread
From: Uladzislau Rezki @ 2024-05-27 19:46 UTC (permalink / raw)
To: Paul E. McKenney, Julia Lawall, Vlastimil Babka
Cc: Julia Lawall, Vlastimil Babka, linux-mm, RCU, cocci
On Mon, May 27, 2024 at 12:27:14PM -0700, Paul E. McKenney wrote:
> On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> >
> >
> > On Mon, 27 May 2024, Vlastimil Babka wrote:
> >
> > > Hi,
> > >
> > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > to take this task, feel free to.
> >
> > Thanks for the suggestion! I will try to look into it.
>
> Thank you both!
>
I wanted to take action on it but Julia was first. So, please go ahead :)
--
Uladzislau Rezki
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 19:46 ` Uladzislau Rezki
@ 2024-05-27 19:51 ` Julia Lawall
2024-05-27 20:36 ` Uladzislau Rezki
0 siblings, 1 reply; 24+ messages in thread
From: Julia Lawall @ 2024-05-27 19:51 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Paul E. McKenney, Vlastimil Babka, linux-mm, RCU, cocci
On Mon, 27 May 2024, Uladzislau Rezki wrote:
> On Mon, May 27, 2024 at 12:27:14PM -0700, Paul E. McKenney wrote:
> > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > >
> > >
> > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > >
> > > > Hi,
> > > >
> > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > to take this task, feel free to.
> > >
> > > Thanks for the suggestion! I will try to look into it.
> >
> > Thank you both!
> >
> I wanted to take an action on it but Julia was first. So, please go ahead :)
If you want to try, please go ahead. We can compare results.
julia
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 19:51 ` Julia Lawall
@ 2024-05-27 20:36 ` Uladzislau Rezki
0 siblings, 0 replies; 24+ messages in thread
From: Uladzislau Rezki @ 2024-05-27 20:36 UTC (permalink / raw)
To: Julia Lawall
Cc: Uladzislau Rezki, Paul E. McKenney, Vlastimil Babka, linux-mm,
RCU, cocci
On Mon, May 27, 2024 at 09:51:35PM +0200, Julia Lawall wrote:
>
>
> On Mon, 27 May 2024, Uladzislau Rezki wrote:
>
> > On Mon, May 27, 2024 at 12:27:14PM -0700, Paul E. McKenney wrote:
> > > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > > >
> > > >
> > > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > > to take this task, feel free to.
> > > >
> > > > Thanks for the suggestion! I will try to look into it.
> > >
> > > Thank you both!
> > >
> > I wanted to take an action on it but Julia was first. So, please go ahead :)
>
> If you want to try, please go ahead. We can compare results.
>
It is appreciated if you proceed. From my side I will keep an eye on it.
If something is missed which I can detect, I will let you know.
Thanks.
--
Uladzislau Rezki
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 19:27 ` Paul E. McKenney
2024-05-27 19:46 ` Uladzislau Rezki
@ 2024-05-27 20:43 ` Julia Lawall
2024-05-27 21:48 ` Paul E. McKenney
2024-05-28 12:03 ` Uladzislau Rezki
1 sibling, 2 replies; 24+ messages in thread
From: Julia Lawall @ 2024-05-27 20:43 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: Vlastimil Babka, linux-mm, RCU, cocci
On Mon, 27 May 2024, Paul E. McKenney wrote:
> On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> >
> >
> > On Mon, 27 May 2024, Vlastimil Babka wrote:
> >
> > > Hi,
> > >
> > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > to take this task, feel free to.
> >
> > Thanks for the suggestion! I will try to look into it.
>
> Thank you both!
I found the following functions. Do you want some other information, such
as where they are called from?
Ignore the -s at the beginning of some lines. Those are for emphasis, not
to suggest removing the code.
I checked that the functions are only used in calls to call_rcu.
Without more effort, Coccinelle only looks for functions defined in the
same file. Here are the functions that are passed to call_rcu where the
function is not defined in the same file:
need definition for audit_free_rule_rcu
need definition for __i915_gem_free_object_rcu
need definition for io_eventfd_ops
need definition for ip_vs_dest_dst_rcu_free
need definition for __put_task_struct_rcu_cb
need definition for radix_tree_node_rcu_free
They all do something more, although radix_tree_node_rcu_free doesn't do
much more (some memsets).
julia
diff -u -p /home/jll/linux/drivers/net/wireguard/allowedips.c /tmp/nothing/drivers/net/wireguard/allowedips.c
--- /home/jll/linux/drivers/net/wireguard/allowedips.c
+++ /tmp/nothing/drivers/net/wireguard/allowedips.c
@@ -50,7 +50,6 @@ static void push_rcu(struct allowedips_n
static void node_free_rcu(struct rcu_head *rcu)
{
- kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
}
static void root_free_rcu(struct rcu_head *rcu)
diff -u -p /home/jll/linux/fs/ecryptfs/dentry.c /tmp/nothing/fs/ecryptfs/dentry.c
--- /home/jll/linux/fs/ecryptfs/dentry.c
+++ /tmp/nothing/fs/ecryptfs/dentry.c
@@ -53,8 +53,6 @@ struct kmem_cache *ecryptfs_dentry_info_
static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
{
- kmem_cache_free(ecryptfs_dentry_info_cache,
- container_of(head, struct ecryptfs_dentry_info, rcu));
}
/**
diff -u -p /home/jll/linux/kernel/fork.c /tmp/nothing/kernel/fork.c
--- /home/jll/linux/kernel/fork.c
+++ /tmp/nothing/kernel/fork.c
@@ -378,7 +378,6 @@ static struct kmem_cache *thread_stack_c
static void thread_stack_free_rcu(struct rcu_head *rh)
{
- kmem_cache_free(thread_stack_cache, rh);
}
static void thread_stack_delayed_free(struct task_struct *tsk)
diff -u -p /home/jll/linux/kernel/workqueue.c /tmp/nothing/kernel/workqueue.c
--- /home/jll/linux/kernel/workqueue.c
+++ /tmp/nothing/kernel/workqueue.c
@@ -5024,8 +5024,6 @@ fail:
static void rcu_free_pwq(struct rcu_head *rcu)
{
- kmem_cache_free(pwq_cache,
- container_of(rcu, struct pool_workqueue, rcu));
}
/*
diff -u -p /home/jll/linux/net/ipv4/inetpeer.c /tmp/nothing/net/ipv4/inetpeer.c
--- /home/jll/linux/net/ipv4/inetpeer.c
+++ /tmp/nothing/net/ipv4/inetpeer.c
@@ -130,7 +130,6 @@ static struct inet_peer *lookup(const st
static void inetpeer_free_rcu(struct rcu_head *head)
{
- kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
}
/* perform garbage collect on all items stacked during a lookup */
diff -u -p /home/jll/linux/net/ipv6/xfrm6_tunnel.c /tmp/nothing/net/ipv6/xfrm6_tunnel.c
--- /home/jll/linux/net/ipv6/xfrm6_tunnel.c
+++ /tmp/nothing/net/ipv6/xfrm6_tunnel.c
@@ -180,8 +180,6 @@ EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
static void x6spi_destroy_rcu(struct rcu_head *head)
{
- kmem_cache_free(xfrm6_tunnel_spi_kmem,
- container_of(head, struct xfrm6_tunnel_spi, rcu_head));
}
static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
diff -u -p /home/jll/linux/security/security.c /tmp/nothing/security/security.c
--- /home/jll/linux/security/security.c
+++ /tmp/nothing/security/security.c
@@ -1599,7 +1599,6 @@ static void inode_free_by_rcu(struct rcu
/*
* The rcu head is at the start of the inode blob
*/
- kmem_cache_free(lsm_inode_cache, head);
}
/**
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 20:43 ` Julia Lawall
@ 2024-05-27 21:48 ` Paul E. McKenney
2024-05-28 5:09 ` Julia Lawall
2024-05-28 12:03 ` Uladzislau Rezki
1 sibling, 1 reply; 24+ messages in thread
From: Paul E. McKenney @ 2024-05-27 21:48 UTC (permalink / raw)
To: Julia Lawall; +Cc: Vlastimil Babka, linux-mm, RCU, cocci, qiang.zhang1211
On Mon, May 27, 2024 at 10:43:08PM +0200, Julia Lawall wrote:
>
>
> On Mon, 27 May 2024, Paul E. McKenney wrote:
>
> > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > >
> > >
> > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > >
> > > > Hi,
> > > >
> > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > to take this task, feel free to.
> > >
> > > Thanks for the suggestion! I will try to look into it.
> >
> > Thank you both!
>
> I found the following functions. Do you want some other information, such
> as where they are called from?
Thank you!
> Ignore the -s at the beginning of some lines. Those are for emphasis. not
> to suggest to remove the code.
Well, the idea is to remove not only those lines, but the functions
containing them as well. ;-)
Then each call_rcu() invocation simply becomes kfree_rcu().
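(Spelled out with the hypothetical struct foo sketched earlier in the thread,
which embeds a struct rcu_head member named "rcu", the converted release
function would read roughly:)

<snip>
static void foo_release(struct foo *p)
{
	/*
	 * Queue *p to be freed after a grace period.  The second argument
	 * names the embedded struct rcu_head, which is (roughly) how the
	 * start of the enclosing object is found again, so no callback
	 * function is needed at all.
	 */
	kfree_rcu(p, rcu);
}
<snip>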
> I checked that the functions are only used in calls to call_rcu.
Good point, I had forgotten about that possibility!
> Without more effort, Coccinelle only looks for functions defined in the
> same file. Here are the functions that are passed to call_rcu where the
> function is not defined in the same file:
>
> need definition for audit_free_rule_rcu
> need definition for __i915_gem_free_object_rcu
> need definition for io_eventfd_ops
> need definition for ip_vs_dest_dst_rcu_free
> need definition for __put_task_struct_rcu_cb
> need definition for radix_tree_node_rcu_free
>
> They all do something more, although radix_tree_node_rcu_free doesn't do
> much more (some memsets).
Those might be important in order to handle the possibility of readers
holding a reference to a given block of memory across the time that it
is freed and then reallocated, but I cannot say for sure.
In most cases, the reinitialization at reallocation time suffices.
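A hedged sketch (again with made-up names, not taken from any of the files
above) of the kind of callback that has to stay with call_rcu(): it does extra
work before freeing, which kfree_rcu() would not perform.

<snip>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/string.h>

struct bar {
	void *slots[16];
	struct rcu_head rcu;
};

static struct kmem_cache *bar_cache;	/* assumed to be created elsewhere */

/* Used as: call_rcu(&b->rcu, bar_free_rcu); */
static void bar_free_rcu(struct rcu_head *head)
{
	struct bar *b = container_of(head, struct bar, rcu);

	/*
	 * Extra work beyond freeing: the object goes back to the slab in a
	 * known-clean state, for example because a stale reader may still
	 * inspect the memory once it has been reallocated.  kfree_rcu()
	 * would skip this, so the conversion is not mechanical here.
	 */
	memset(b->slots, 0, sizeof(b->slots));
	kmem_cache_free(bar_cache, b);
}
<snip>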
Thanx, Paul
> julia
>
> diff -u -p /home/jll/linux/drivers/net/wireguard/allowedips.c /tmp/nothing/drivers/net/wireguard/allowedips.c
> --- /home/jll/linux/drivers/net/wireguard/allowedips.c
> +++ /tmp/nothing/drivers/net/wireguard/allowedips.c
> @@ -50,7 +50,6 @@ static void push_rcu(struct allowedips_n
>
> static void node_free_rcu(struct rcu_head *rcu)
> {
> - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> }
>
> static void root_free_rcu(struct rcu_head *rcu)
> diff -u -p /home/jll/linux/fs/ecryptfs/dentry.c /tmp/nothing/fs/ecryptfs/dentry.c
> --- /home/jll/linux/fs/ecryptfs/dentry.c
> +++ /tmp/nothing/fs/ecryptfs/dentry.c
> @@ -53,8 +53,6 @@ struct kmem_cache *ecryptfs_dentry_info_
>
> static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> {
> - kmem_cache_free(ecryptfs_dentry_info_cache,
> - container_of(head, struct ecryptfs_dentry_info, rcu));
> }
>
> /**
> diff -u -p /home/jll/linux/kernel/fork.c /tmp/nothing/kernel/fork.c
> --- /home/jll/linux/kernel/fork.c
> +++ /tmp/nothing/kernel/fork.c
> @@ -378,7 +378,6 @@ static struct kmem_cache *thread_stack_c
>
> static void thread_stack_free_rcu(struct rcu_head *rh)
> {
> - kmem_cache_free(thread_stack_cache, rh);
> }
>
> static void thread_stack_delayed_free(struct task_struct *tsk)
> diff -u -p /home/jll/linux/kernel/workqueue.c /tmp/nothing/kernel/workqueue.c
> --- /home/jll/linux/kernel/workqueue.c
> +++ /tmp/nothing/kernel/workqueue.c
> @@ -5024,8 +5024,6 @@ fail:
>
> static void rcu_free_pwq(struct rcu_head *rcu)
> {
> - kmem_cache_free(pwq_cache,
> - container_of(rcu, struct pool_workqueue, rcu));
> }
>
> /*
> diff -u -p /home/jll/linux/net/ipv4/inetpeer.c /tmp/nothing/net/ipv4/inetpeer.c
> --- /home/jll/linux/net/ipv4/inetpeer.c
> +++ /tmp/nothing/net/ipv4/inetpeer.c
> @@ -130,7 +130,6 @@ static struct inet_peer *lookup(const st
>
> static void inetpeer_free_rcu(struct rcu_head *head)
> {
> - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> }
>
> /* perform garbage collect on all items stacked during a lookup */
> diff -u -p /home/jll/linux/net/ipv6/xfrm6_tunnel.c /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> --- /home/jll/linux/net/ipv6/xfrm6_tunnel.c
> +++ /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> @@ -180,8 +180,6 @@ EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
>
> static void x6spi_destroy_rcu(struct rcu_head *head)
> {
> - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> }
>
> static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> diff -u -p /home/jll/linux/security/security.c /tmp/nothing/security/security.c
> --- /home/jll/linux/security/security.c
> +++ /tmp/nothing/security/security.c
> @@ -1599,7 +1599,6 @@ static void inode_free_by_rcu(struct rcu
> /*
> * The rcu head is at the start of the inode blob
> */
> - kmem_cache_free(lsm_inode_cache, head);
> }
>
> /**
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 21:48 ` Paul E. McKenney
@ 2024-05-28 5:09 ` Julia Lawall
0 siblings, 0 replies; 24+ messages in thread
From: Julia Lawall @ 2024-05-28 5:09 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: Vlastimil Babka, linux-mm, RCU, cocci, qiang.zhang1211
On Mon, 27 May 2024, Paul E. McKenney wrote:
> On Mon, May 27, 2024 at 10:43:08PM +0200, Julia Lawall wrote:
> >
> >
> > On Mon, 27 May 2024, Paul E. McKenney wrote:
> >
> > > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > > >
> > > >
> > > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > > to take this task, feel free to.
> > > >
> > > > Thanks for the suggestion! I will try to look into it.
> > >
> > > Thank you both!
> >
> > I found the following functions. Do you want some other information, such
> > as where they are called from?
>
> Thank you!
>
> > Ignore the -s at the beginning of some lines. Those are for emphasis. not
> > to suggest to remove the code.
>
> Well, the idea is to remove not only those lines, but the functions
> containing them as well. ;-)
>
> Then each call_rcu() function becomes kfree_rcu().
OK. I'll send a patch for your review accordingly. If everything looks
fine, I will send the patches to the maintainers.
julia
>
> > I checked that the functions are only used in calls to call_rcu.
>
> Good point, I had forgotten about that possibility!
>
> > Without more effort, Coccinelle only looks for functions defined in the
> > same file. Here are the functions that are passed to call_rcu where the
> > function is not defined in the same file:
> >
> > need definition for audit_free_rule_rcu
> > need definition for __i915_gem_free_object_rcu
> > need definition for io_eventfd_ops
> > need definition for ip_vs_dest_dst_rcu_free
> > need definition for __put_task_struct_rcu_cb
> > need definition for radix_tree_node_rcu_free
> >
> > They all do something more, although radix_tree_node_rcu_free doesn't do
> > much more (some memsets).
>
> Those might be important in order to handle the possibility of readers
> holding a reference to a given block of memory across the time that it
> is freed and then reallocated, but I cannot say for sure.
>
> In most cases, the reinitialization at reallocation time suffices.
>
> Thanx, Paul
>
> > julia
> >
> > diff -u -p /home/jll/linux/drivers/net/wireguard/allowedips.c /tmp/nothing/drivers/net/wireguard/allowedips.c
> > --- /home/jll/linux/drivers/net/wireguard/allowedips.c
> > +++ /tmp/nothing/drivers/net/wireguard/allowedips.c
> > @@ -50,7 +50,6 @@ static void push_rcu(struct allowedips_n
> >
> > static void node_free_rcu(struct rcu_head *rcu)
> > {
> > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > }
> >
> > static void root_free_rcu(struct rcu_head *rcu)
> > diff -u -p /home/jll/linux/fs/ecryptfs/dentry.c /tmp/nothing/fs/ecryptfs/dentry.c
> > --- /home/jll/linux/fs/ecryptfs/dentry.c
> > +++ /tmp/nothing/fs/ecryptfs/dentry.c
> > @@ -53,8 +53,6 @@ struct kmem_cache *ecryptfs_dentry_info_
> >
> > static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > {
> > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > }
> >
> > /**
> > diff -u -p /home/jll/linux/kernel/fork.c /tmp/nothing/kernel/fork.c
> > --- /home/jll/linux/kernel/fork.c
> > +++ /tmp/nothing/kernel/fork.c
> > @@ -378,7 +378,6 @@ static struct kmem_cache *thread_stack_c
> >
> > static void thread_stack_free_rcu(struct rcu_head *rh)
> > {
> > - kmem_cache_free(thread_stack_cache, rh);
> > }
> >
> > static void thread_stack_delayed_free(struct task_struct *tsk)
> > diff -u -p /home/jll/linux/kernel/workqueue.c /tmp/nothing/kernel/workqueue.c
> > --- /home/jll/linux/kernel/workqueue.c
> > +++ /tmp/nothing/kernel/workqueue.c
> > @@ -5024,8 +5024,6 @@ fail:
> >
> > static void rcu_free_pwq(struct rcu_head *rcu)
> > {
> > - kmem_cache_free(pwq_cache,
> > - container_of(rcu, struct pool_workqueue, rcu));
> > }
> >
> > /*
> > diff -u -p /home/jll/linux/net/ipv4/inetpeer.c /tmp/nothing/net/ipv4/inetpeer.c
> > --- /home/jll/linux/net/ipv4/inetpeer.c
> > +++ /tmp/nothing/net/ipv4/inetpeer.c
> > @@ -130,7 +130,6 @@ static struct inet_peer *lookup(const st
> >
> > static void inetpeer_free_rcu(struct rcu_head *head)
> > {
> > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > }
> >
> > /* perform garbage collect on all items stacked during a lookup */
> > diff -u -p /home/jll/linux/net/ipv6/xfrm6_tunnel.c /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > --- /home/jll/linux/net/ipv6/xfrm6_tunnel.c
> > +++ /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > @@ -180,8 +180,6 @@ EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> >
> > static void x6spi_destroy_rcu(struct rcu_head *head)
> > {
> > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > }
> >
> > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > diff -u -p /home/jll/linux/security/security.c /tmp/nothing/security/security.c
> > --- /home/jll/linux/security/security.c
> > +++ /tmp/nothing/security/security.c
> > @@ -1599,7 +1599,6 @@ static void inode_free_by_rcu(struct rcu
> > /*
> > * The rcu head is at the start of the inode blob
> > */
> > - kmem_cache_free(lsm_inode_cache, head);
> > }
> >
> > /**
>
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-27 20:43 ` Julia Lawall
2024-05-27 21:48 ` Paul E. McKenney
@ 2024-05-28 12:03 ` Uladzislau Rezki
2024-05-28 12:08 ` Julia Lawall
1 sibling, 1 reply; 24+ messages in thread
From: Uladzislau Rezki @ 2024-05-28 12:03 UTC (permalink / raw)
To: Julia Lawall; +Cc: Paul E. McKenney, Vlastimil Babka, linux-mm, RCU, cocci
Hello.
>
>
> On Mon, 27 May 2024, Paul E. McKenney wrote:
>
> > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > >
> > >
> > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > >
> > > > Hi,
> > > >
> > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > to take this task, feel free to.
> > >
> > > Thanks for the suggestion! I will try to look into it.
> >
> > Thank you both!
>
> I found the following functions. Do you want some other information, such
> as where they are called from?
>
> Ignore the -s at the beginning of some lines. Those are for emphasis. not
> to suggest to remove the code.
>
> I checked that the functions are only used in calls to call_rcu.
>
> Without more effort, Coccinelle only looks for functions defined in the
> same file. Here are the functions that are passed to call_rcu where the
> function is not defined in the same file:
>
> need definition for audit_free_rule_rcu
> need definition for __i915_gem_free_object_rcu
> need definition for io_eventfd_ops
> need definition for ip_vs_dest_dst_rcu_free
> need definition for __put_task_struct_rcu_cb
> need definition for radix_tree_node_rcu_free
>
> They all do something more, although radix_tree_node_rcu_free doesn't do
> much more (some memsets).
>
> julia
>
> diff -u -p /home/jll/linux/drivers/net/wireguard/allowedips.c /tmp/nothing/drivers/net/wireguard/allowedips.c
> --- /home/jll/linux/drivers/net/wireguard/allowedips.c
> +++ /tmp/nothing/drivers/net/wireguard/allowedips.c
> @@ -50,7 +50,6 @@ static void push_rcu(struct allowedips_n
>
> static void node_free_rcu(struct rcu_head *rcu)
> {
> - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> }
>
> static void root_free_rcu(struct rcu_head *rcu)
> diff -u -p /home/jll/linux/fs/ecryptfs/dentry.c /tmp/nothing/fs/ecryptfs/dentry.c
> --- /home/jll/linux/fs/ecryptfs/dentry.c
> +++ /tmp/nothing/fs/ecryptfs/dentry.c
> @@ -53,8 +53,6 @@ struct kmem_cache *ecryptfs_dentry_info_
>
> static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> {
> - kmem_cache_free(ecryptfs_dentry_info_cache,
> - container_of(head, struct ecryptfs_dentry_info, rcu));
> }
>
> /**
> diff -u -p /home/jll/linux/kernel/fork.c /tmp/nothing/kernel/fork.c
> --- /home/jll/linux/kernel/fork.c
> +++ /tmp/nothing/kernel/fork.c
> @@ -378,7 +378,6 @@ static struct kmem_cache *thread_stack_c
>
> static void thread_stack_free_rcu(struct rcu_head *rh)
> {
> - kmem_cache_free(thread_stack_cache, rh);
> }
>
> static void thread_stack_delayed_free(struct task_struct *tsk)
> diff -u -p /home/jll/linux/kernel/workqueue.c /tmp/nothing/kernel/workqueue.c
> --- /home/jll/linux/kernel/workqueue.c
> +++ /tmp/nothing/kernel/workqueue.c
> @@ -5024,8 +5024,6 @@ fail:
>
> static void rcu_free_pwq(struct rcu_head *rcu)
> {
> - kmem_cache_free(pwq_cache,
> - container_of(rcu, struct pool_workqueue, rcu));
> }
>
> /*
> diff -u -p /home/jll/linux/net/ipv4/inetpeer.c /tmp/nothing/net/ipv4/inetpeer.c
> --- /home/jll/linux/net/ipv4/inetpeer.c
> +++ /tmp/nothing/net/ipv4/inetpeer.c
> @@ -130,7 +130,6 @@ static struct inet_peer *lookup(const st
>
> static void inetpeer_free_rcu(struct rcu_head *head)
> {
> - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> }
>
> /* perform garbage collect on all items stacked during a lookup */
> diff -u -p /home/jll/linux/net/ipv6/xfrm6_tunnel.c /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> --- /home/jll/linux/net/ipv6/xfrm6_tunnel.c
> +++ /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> @@ -180,8 +180,6 @@ EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
>
> static void x6spi_destroy_rcu(struct rcu_head *head)
> {
> - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> }
>
> static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> diff -u -p /home/jll/linux/security/security.c /tmp/nothing/security/security.c
> --- /home/jll/linux/security/security.c
> +++ /tmp/nothing/security/security.c
> @@ -1599,7 +1599,6 @@ static void inode_free_by_rcu(struct rcu
> /*
> * The rcu head is at the start of the inode blob
> */
> - kmem_cache_free(lsm_inode_cache, head);
> }
>
> /**
>
See below some extra functions which can be eliminated. Here is how I found them:
find ./ -name "*.c" -o -name "*.h" | xargs grep -rn call_rcu -B 10 | grep kmem_cache_free
<snip>
static void nfsd4_free_file_rcu(struct rcu_head *rcu);
static void lima_fence_release(struct dma_fence *fence);
void intel_context_free(struct intel_context *ce);
static void ipmr_cache_free(struct mfc_cache *c);
static inline void alias_free_mem_rcu(struct fib_alias *fa);
dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent);
void nf_ct_expect_put(struct nf_conntrack_expect *exp);
static void node_free(struct net *net, struct fib6_node *fn);
static inline void ip6mr_cache_free(struct mfc6_cache *c);
static void posix_timer_free(struct k_itimer *tmr);
static void thread_stack_delayed_free(struct task_struct *tsk);
<snip>
Those can be replaced by directly calling kfree_rcu() instead of call_rcu().
--
Uladzislau Rezki
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-28 12:03 ` Uladzislau Rezki
@ 2024-05-28 12:08 ` Julia Lawall
2024-05-28 13:21 ` Uladzislau Rezki
0 siblings, 1 reply; 24+ messages in thread
From: Julia Lawall @ 2024-05-28 12:08 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Paul E. McKenney, Vlastimil Babka, linux-mm, RCU, cocci
On Tue, 28 May 2024, Uladzislau Rezki wrote:
> Hello.
>
> >
> >
> > On Mon, 27 May 2024, Paul E. McKenney wrote:
> >
> > > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > > >
> > > >
> > > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > > to take this task, feel free to.
> > > >
> > > > Thanks for the suggestion! I will try to look into it.
> > >
> > > Thank you both!
> >
> > I found the following functions. Do you want some other information, such
> > as where they are called from?
> >
> > Ignore the -s at the beginning of some lines. Those are for emphasis. not
> > to suggest to remove the code.
> >
> > I checked that the functions are only used in calls to call_rcu.
> >
> > Without more effort, Coccinelle only looks for functions defined in the
> > same file. Here are the functions that are passed to call_rcu where the
> > function is not defined in the same file:
> >
> > need definition for audit_free_rule_rcu
> > need definition for __i915_gem_free_object_rcu
> > need definition for io_eventfd_ops
> > need definition for ip_vs_dest_dst_rcu_free
> > need definition for __put_task_struct_rcu_cb
> > need definition for radix_tree_node_rcu_free
> >
> > They all do something more, although radix_tree_node_rcu_free doesn't do
> > much more (some memsets).
> >
> > julia
> >
> > diff -u -p /home/jll/linux/drivers/net/wireguard/allowedips.c /tmp/nothing/drivers/net/wireguard/allowedips.c
> > --- /home/jll/linux/drivers/net/wireguard/allowedips.c
> > +++ /tmp/nothing/drivers/net/wireguard/allowedips.c
> > @@ -50,7 +50,6 @@ static void push_rcu(struct allowedips_n
> >
> > static void node_free_rcu(struct rcu_head *rcu)
> > {
> > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > }
> >
> > static void root_free_rcu(struct rcu_head *rcu)
> > diff -u -p /home/jll/linux/fs/ecryptfs/dentry.c /tmp/nothing/fs/ecryptfs/dentry.c
> > --- /home/jll/linux/fs/ecryptfs/dentry.c
> > +++ /tmp/nothing/fs/ecryptfs/dentry.c
> > @@ -53,8 +53,6 @@ struct kmem_cache *ecryptfs_dentry_info_
> >
> > static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > {
> > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > }
> >
> > /**
> > diff -u -p /home/jll/linux/kernel/fork.c /tmp/nothing/kernel/fork.c
> > --- /home/jll/linux/kernel/fork.c
> > +++ /tmp/nothing/kernel/fork.c
> > @@ -378,7 +378,6 @@ static struct kmem_cache *thread_stack_c
> >
> > static void thread_stack_free_rcu(struct rcu_head *rh)
> > {
> > - kmem_cache_free(thread_stack_cache, rh);
> > }
> >
> > static void thread_stack_delayed_free(struct task_struct *tsk)
> > diff -u -p /home/jll/linux/kernel/workqueue.c /tmp/nothing/kernel/workqueue.c
> > --- /home/jll/linux/kernel/workqueue.c
> > +++ /tmp/nothing/kernel/workqueue.c
> > @@ -5024,8 +5024,6 @@ fail:
> >
> > static void rcu_free_pwq(struct rcu_head *rcu)
> > {
> > - kmem_cache_free(pwq_cache,
> > - container_of(rcu, struct pool_workqueue, rcu));
> > }
> >
> > /*
> > diff -u -p /home/jll/linux/net/ipv4/inetpeer.c /tmp/nothing/net/ipv4/inetpeer.c
> > --- /home/jll/linux/net/ipv4/inetpeer.c
> > +++ /tmp/nothing/net/ipv4/inetpeer.c
> > @@ -130,7 +130,6 @@ static struct inet_peer *lookup(const st
> >
> > static void inetpeer_free_rcu(struct rcu_head *head)
> > {
> > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > }
> >
> > /* perform garbage collect on all items stacked during a lookup */
> > diff -u -p /home/jll/linux/net/ipv6/xfrm6_tunnel.c /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > --- /home/jll/linux/net/ipv6/xfrm6_tunnel.c
> > +++ /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > @@ -180,8 +180,6 @@ EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> >
> > static void x6spi_destroy_rcu(struct rcu_head *head)
> > {
> > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > }
> >
> > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > diff -u -p /home/jll/linux/security/security.c /tmp/nothing/security/security.c
> > --- /home/jll/linux/security/security.c
> > +++ /tmp/nothing/security/security.c
> > @@ -1599,7 +1599,6 @@ static void inode_free_by_rcu(struct rcu
> > /*
> > * The rcu head is at the start of the inode blob
> > */
> > - kmem_cache_free(lsm_inode_cache, head);
> > }
> >
> > /**
> >
> See below some extra functions which can be eliminated. How i found them:
>
> find ./ -name "*.c" -o -name "*.h" | xargs grep -rn call_rcu -B 10 | grep kmem_cache_free
>
> <snip>
> static void nfsd4_free_file_rcu(struct rcu_head *rcu);
Thanks. I tried to find this kind of issue, but it turned up nothing.
There must be some mistake, and I will try again.
julia
> static void lima_fence_release(struct dma_fence *fence);
> void intel_context_free(struct intel_context *ce);
> static void ipmr_cache_free(struct mfc_cache *c);
> static inline void alias_free_mem_rcu(struct fib_alias *fa);
> dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent);
> void nf_ct_expect_put(struct nf_conntrack_expect *exp);
> static void node_free(struct net *net, struct fib6_node *fn);
> static inline void ip6mr_cache_free(struct mfc6_cache *c);
> static void posix_timer_free(struct k_itimer *tmr);
> static void thread_stack_delayed_free(struct task_struct *tsk);
> <snip>
>
> Those are can be replaced by directly calling of kfree_rcu() instead of call_rcu().
>
> --
> Uladzislau Rezki
>
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-28 12:08 ` Julia Lawall
@ 2024-05-28 13:21 ` Uladzislau Rezki
2024-05-28 14:29 ` Paul E. McKenney
0 siblings, 1 reply; 24+ messages in thread
From: Uladzislau Rezki @ 2024-05-28 13:21 UTC (permalink / raw)
To: Julia Lawall
Cc: Uladzislau Rezki, Paul E. McKenney, Vlastimil Babka, linux-mm,
RCU, cocci
On Tue, May 28, 2024 at 02:08:20PM +0200, Julia Lawall wrote:
>
>
> On Tue, 28 May 2024, Uladzislau Rezki wrote:
>
> > Hello.
> >
> > >
> > >
> > > On Mon, 27 May 2024, Paul E. McKenney wrote:
> > >
> > > > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > > > >
> > > > >
> > > > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > > > to take this task, feel free to.
> > > > >
> > > > > Thanks for the suggestion! I will try to look into it.
> > > >
> > > > Thank you both!
> > >
> > > I found the following functions. Do you want some other information, such
> > > as where they are called from?
> > >
> > > Ignore the -s at the beginning of some lines. Those are for emphasis. not
> > > to suggest to remove the code.
> > >
> > > I checked that the functions are only used in calls to call_rcu.
> > >
> > > Without more effort, Coccinelle only looks for functions defined in the
> > > same file. Here are the functions that are passed to call_rcu where the
> > > function is not defined in the same file:
> > >
> > > need definition for audit_free_rule_rcu
> > > need definition for __i915_gem_free_object_rcu
> > > need definition for io_eventfd_ops
> > > need definition for ip_vs_dest_dst_rcu_free
> > > need definition for __put_task_struct_rcu_cb
> > > need definition for radix_tree_node_rcu_free
> > >
> > > They all do something more, although radix_tree_node_rcu_free doesn't do
> > > much more (some memsets).
> > >
> > > julia
> > >
> > > diff -u -p /home/jll/linux/drivers/net/wireguard/allowedips.c /tmp/nothing/drivers/net/wireguard/allowedips.c
> > > --- /home/jll/linux/drivers/net/wireguard/allowedips.c
> > > +++ /tmp/nothing/drivers/net/wireguard/allowedips.c
> > > @@ -50,7 +50,6 @@ static void push_rcu(struct allowedips_n
> > >
> > > static void node_free_rcu(struct rcu_head *rcu)
> > > {
> > > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > > }
> > >
> > > static void root_free_rcu(struct rcu_head *rcu)
> > > diff -u -p /home/jll/linux/fs/ecryptfs/dentry.c /tmp/nothing/fs/ecryptfs/dentry.c
> > > --- /home/jll/linux/fs/ecryptfs/dentry.c
> > > +++ /tmp/nothing/fs/ecryptfs/dentry.c
> > > @@ -53,8 +53,6 @@ struct kmem_cache *ecryptfs_dentry_info_
> > >
> > > static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > > {
> > > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > > }
> > >
> > > /**
> > > diff -u -p /home/jll/linux/kernel/fork.c /tmp/nothing/kernel/fork.c
> > > --- /home/jll/linux/kernel/fork.c
> > > +++ /tmp/nothing/kernel/fork.c
> > > @@ -378,7 +378,6 @@ static struct kmem_cache *thread_stack_c
> > >
> > > static void thread_stack_free_rcu(struct rcu_head *rh)
> > > {
> > > - kmem_cache_free(thread_stack_cache, rh);
> > > }
> > >
> > > static void thread_stack_delayed_free(struct task_struct *tsk)
> > > diff -u -p /home/jll/linux/kernel/workqueue.c /tmp/nothing/kernel/workqueue.c
> > > --- /home/jll/linux/kernel/workqueue.c
> > > +++ /tmp/nothing/kernel/workqueue.c
> > > @@ -5024,8 +5024,6 @@ fail:
> > >
> > > static void rcu_free_pwq(struct rcu_head *rcu)
> > > {
> > > - kmem_cache_free(pwq_cache,
> > > - container_of(rcu, struct pool_workqueue, rcu));
> > > }
> > >
> > > /*
> > > diff -u -p /home/jll/linux/net/ipv4/inetpeer.c /tmp/nothing/net/ipv4/inetpeer.c
> > > --- /home/jll/linux/net/ipv4/inetpeer.c
> > > +++ /tmp/nothing/net/ipv4/inetpeer.c
> > > @@ -130,7 +130,6 @@ static struct inet_peer *lookup(const st
> > >
> > > static void inetpeer_free_rcu(struct rcu_head *head)
> > > {
> > > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > > }
> > >
> > > /* perform garbage collect on all items stacked during a lookup */
> > > diff -u -p /home/jll/linux/net/ipv6/xfrm6_tunnel.c /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > > --- /home/jll/linux/net/ipv6/xfrm6_tunnel.c
> > > +++ /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > > @@ -180,8 +180,6 @@ EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> > >
> > > static void x6spi_destroy_rcu(struct rcu_head *head)
> > > {
> > > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > > }
> > >
> > > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > > diff -u -p /home/jll/linux/security/security.c /tmp/nothing/security/security.c
> > > --- /home/jll/linux/security/security.c
> > > +++ /tmp/nothing/security/security.c
> > > @@ -1599,7 +1599,6 @@ static void inode_free_by_rcu(struct rcu
> > > /*
> > > * The rcu head is at the start of the inode blob
> > > */
> > > - kmem_cache_free(lsm_inode_cache, head);
> > > }
> > >
> > > /**
> > >
> > See below some extra functions which can be eliminated. How i found them:
> >
> > find ./ -name "*.c" -o -name "*.h" | xargs grep -rn call_rcu -B 10 | grep kmem_cache_free
> >
> > <snip>
> > static void nfsd4_free_file_rcu(struct rcu_head *rcu);
>
> Thanks. I tried to find this kind of issue, but it turned up nothing.
> There must be some mistake, and I will try again.
>
You are welcome. Also, there are several places like the one below:
<snip>
static void blk_free_queue_rcu(struct rcu_head *rcu_head)
{
struct request_queue *q = container_of(rcu_head,
struct request_queue, rcu_head);
percpu_ref_exit(&q->q_usage_counter);
kmem_cache_free(blk_requestq_cachep, q);
}
static void blk_free_queue(struct request_queue *q)
{
blk_free_queue_stats(q->stats);
if (queue_is_mq(q))
blk_mq_release(q);
ida_free(&blk_queue_ida, q->id);
call_rcu(&q->rcu_head, blk_free_queue_rcu);
}
<snip>
and potentially it can also be replaced:
<snip>
diff --git a/block/blk-core.c b/block/blk-core.c
index 82c3ae22d76d..898d6990600d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -257,15 +257,6 @@ void blk_clear_pm_only(struct request_queue *q)
}
EXPORT_SYMBOL_GPL(blk_clear_pm_only);
-static void blk_free_queue_rcu(struct rcu_head *rcu_head)
-{
- struct request_queue *q = container_of(rcu_head,
- struct request_queue, rcu_head);
-
- percpu_ref_exit(&q->q_usage_counter);
- kmem_cache_free(blk_requestq_cachep, q);
-}
-
static void blk_free_queue(struct request_queue *q)
{
blk_free_queue_stats(q->stats);
@@ -273,7 +264,8 @@ static void blk_free_queue(struct request_queue *q)
blk_mq_release(q);
ida_free(&blk_queue_ida, q->id);
- call_rcu(&q->rcu_head, blk_free_queue_rcu);
+ percpu_ref_exit(&q->q_usage_counter);
+ kfree_rcu(q, rcu_head);
}
/**
<snip>
but a maintainer of "blk-core.c" should check whether the usage counter can
be updated before the grace period completes.
Probably we can skip such cases for now, so I do not have a strong opinion
here.
--
Uladzislau Rezki
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-28 13:21 ` Uladzislau Rezki
@ 2024-05-28 14:29 ` Paul E. McKenney
2024-05-31 16:02 ` Julia Lawall
0 siblings, 1 reply; 24+ messages in thread
From: Paul E. McKenney @ 2024-05-28 14:29 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Julia Lawall, Vlastimil Babka, linux-mm, RCU, cocci
On Tue, May 28, 2024 at 03:21:02PM +0200, Uladzislau Rezki wrote:
> On Tue, May 28, 2024 at 02:08:20PM +0200, Julia Lawall wrote:
> >
> >
> > On Tue, 28 May 2024, Uladzislau Rezki wrote:
> >
> > > Hello.
> > >
> > > >
> > > >
> > > > On Mon, 27 May 2024, Paul E. McKenney wrote:
> > > >
> > > > > On Mon, May 27, 2024 at 10:13:40AM +0200, Julia Lawall wrote:
> > > > > >
> > > > > >
> > > > > > On Mon, 27 May 2024, Vlastimil Babka wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > one bit from LSF/MM discussions is that there might be call_rcu users with a
> > > > > > > callback that only does a kmem_cache_free() to a specific cache. Since SLOB
> > > > > > > was removed, it's always ok to use kfree() and thus also kfree_rcu() on
> > > > > > > allocations from kmem_cache_alloc() in addition to kmalloc(). Thus, such
> > > > > > > call_rcu() users might be simplified to kfree_rcu(). I found some cases
> > > > > > > semi-manually, but I'd expect coccinelle could help here so if anyone wants
> > > > > > > to take this task, feel free to.
> > > > > >
> > > > > > Thanks for the suggestion! I will try to look into it.
> > > > >
> > > > > Thank you both!
> > > >
> > > > I found the following functions. Do you want some other information, such
> > > > as where they are called from?
> > > >
> > > > Ignore the -s at the beginning of some lines. Those are for emphasis. not
> > > > to suggest to remove the code.
> > > >
> > > > I checked that the functions are only used in calls to call_rcu.
> > > >
> > > > Without more effort, Coccinelle only looks for functions defined in the
> > > > same file. Here are the functions that are passed to call_rcu where the
> > > > function is not defined in the same file:
> > > >
> > > > need definition for audit_free_rule_rcu
> > > > need definition for __i915_gem_free_object_rcu
> > > > need definition for io_eventfd_ops
> > > > need definition for ip_vs_dest_dst_rcu_free
> > > > need definition for __put_task_struct_rcu_cb
> > > > need definition for radix_tree_node_rcu_free
> > > >
> > > > They all do something more, although radix_tree_node_rcu_free doesn't do
> > > > much more (some memsets).
> > > >
> > > > julia
> > > >
> > > > diff -u -p /home/jll/linux/drivers/net/wireguard/allowedips.c /tmp/nothing/drivers/net/wireguard/allowedips.c
> > > > --- /home/jll/linux/drivers/net/wireguard/allowedips.c
> > > > +++ /tmp/nothing/drivers/net/wireguard/allowedips.c
> > > > @@ -50,7 +50,6 @@ static void push_rcu(struct allowedips_n
> > > >
> > > > static void node_free_rcu(struct rcu_head *rcu)
> > > > {
> > > > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > > > }
> > > >
> > > > static void root_free_rcu(struct rcu_head *rcu)
> > > > diff -u -p /home/jll/linux/fs/ecryptfs/dentry.c /tmp/nothing/fs/ecryptfs/dentry.c
> > > > --- /home/jll/linux/fs/ecryptfs/dentry.c
> > > > +++ /tmp/nothing/fs/ecryptfs/dentry.c
> > > > @@ -53,8 +53,6 @@ struct kmem_cache *ecryptfs_dentry_info_
> > > >
> > > > static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > > > {
> > > > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > > > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > > > }
> > > >
> > > > /**
> > > > diff -u -p /home/jll/linux/kernel/fork.c /tmp/nothing/kernel/fork.c
> > > > --- /home/jll/linux/kernel/fork.c
> > > > +++ /tmp/nothing/kernel/fork.c
> > > > @@ -378,7 +378,6 @@ static struct kmem_cache *thread_stack_c
> > > >
> > > > static void thread_stack_free_rcu(struct rcu_head *rh)
> > > > {
> > > > - kmem_cache_free(thread_stack_cache, rh);
> > > > }
> > > >
> > > > static void thread_stack_delayed_free(struct task_struct *tsk)
> > > > diff -u -p /home/jll/linux/kernel/workqueue.c /tmp/nothing/kernel/workqueue.c
> > > > --- /home/jll/linux/kernel/workqueue.c
> > > > +++ /tmp/nothing/kernel/workqueue.c
> > > > @@ -5024,8 +5024,6 @@ fail:
> > > >
> > > > static void rcu_free_pwq(struct rcu_head *rcu)
> > > > {
> > > > - kmem_cache_free(pwq_cache,
> > > > - container_of(rcu, struct pool_workqueue, rcu));
> > > > }
> > > >
> > > > /*
> > > > diff -u -p /home/jll/linux/net/ipv4/inetpeer.c /tmp/nothing/net/ipv4/inetpeer.c
> > > > --- /home/jll/linux/net/ipv4/inetpeer.c
> > > > +++ /tmp/nothing/net/ipv4/inetpeer.c
> > > > @@ -130,7 +130,6 @@ static struct inet_peer *lookup(const st
> > > >
> > > > static void inetpeer_free_rcu(struct rcu_head *head)
> > > > {
> > > > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > > > }
> > > >
> > > > /* perform garbage collect on all items stacked during a lookup */
> > > > diff -u -p /home/jll/linux/net/ipv6/xfrm6_tunnel.c /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > > > --- /home/jll/linux/net/ipv6/xfrm6_tunnel.c
> > > > +++ /tmp/nothing/net/ipv6/xfrm6_tunnel.c
> > > > @@ -180,8 +180,6 @@ EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> > > >
> > > > static void x6spi_destroy_rcu(struct rcu_head *head)
> > > > {
> > > > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > > > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > > > }
> > > >
> > > > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > > > diff -u -p /home/jll/linux/security/security.c /tmp/nothing/security/security.c
> > > > --- /home/jll/linux/security/security.c
> > > > +++ /tmp/nothing/security/security.c
> > > > @@ -1599,7 +1599,6 @@ static void inode_free_by_rcu(struct rcu
> > > > /*
> > > > * The rcu head is at the start of the inode blob
> > > > */
> > > > - kmem_cache_free(lsm_inode_cache, head);
> > > > }
> > > >
> > > > /**
> > > >
> > > See below some extra functions which can be eliminated. How i found them:
> > >
> > > find ./ -name "*.c" -o -name "*.h" | xargs grep -rn call_rcu -B 10 | grep kmem_cache_free
> > >
> > > <snip>
> > > static void nfsd4_free_file_rcu(struct rcu_head *rcu);
> >
> > Thanks. I tried to find this kind of issue, but it turned up nothing.
> > There must be some mistake, and I will try again.
> >
> You are welcome. Also there are several places like below:
>
> <snip>
> static void blk_free_queue_rcu(struct rcu_head *rcu_head)
> {
> struct request_queue *q = container_of(rcu_head,
> struct request_queue, rcu_head);
>
> percpu_ref_exit(&q->q_usage_counter);
> kmem_cache_free(blk_requestq_cachep, q);
> }
>
> static void blk_free_queue(struct request_queue *q)
> {
> blk_free_queue_stats(q->stats);
> if (queue_is_mq(q))
> blk_mq_release(q);
>
> ida_free(&blk_queue_ida, q->id);
> call_rcu(&q->rcu_head, blk_free_queue_rcu);
> }
> <snip>
>
> and potentially it can be also replaced:
>
> <snip>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 82c3ae22d76d..898d6990600d 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -257,15 +257,6 @@ void blk_clear_pm_only(struct request_queue *q)
> }
> EXPORT_SYMBOL_GPL(blk_clear_pm_only);
>
> -static void blk_free_queue_rcu(struct rcu_head *rcu_head)
> -{
> - struct request_queue *q = container_of(rcu_head,
> - struct request_queue, rcu_head);
> -
> - percpu_ref_exit(&q->q_usage_counter);
> - kmem_cache_free(blk_requestq_cachep, q);
> -}
> -
> static void blk_free_queue(struct request_queue *q)
> {
> blk_free_queue_stats(q->stats);
> @@ -273,7 +264,8 @@ static void blk_free_queue(struct request_queue *q)
> blk_mq_release(q);
>
> ida_free(&blk_queue_ida, q->id);
> - call_rcu(&q->rcu_head, blk_free_queue_rcu);
> + percpu_ref_exit(&q->q_usage_counter);
> + kfree_rcu(q, rcu_head);
> }
>
> /**
> <snip>
This should be functionally fine, but could inflict serious memory
contention on the last few readers. Which, for all I know, might
be just fine.
As you say:
> but a maintainer of "blk-core.c" should check it if a usage counter can
> be updated before a completion of GP.
>
> Probably we can skip as of now such cases, so i do not have a strong
> opinion here.
Thanx, Paul
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-28 14:29 ` Paul E. McKenney
@ 2024-05-31 16:02 ` Julia Lawall
2024-06-03 17:25 ` Paul E. McKenney
` (2 more replies)
0 siblings, 3 replies; 24+ messages in thread
From: Julia Lawall @ 2024-05-31 16:02 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: Uladzislau Rezki, Vlastimil Babka, linux-mm, RCU, cocci
Here are the changes proposed by Coccinelle.
I didn't succeed in compiling the powerpc code, but the rest has been
compile-tested.
julia
commit 1881f31fe3ad693d07ecff45985dd0e87534923f
Author: Julia Lawall <Julia.Lawall@inria.fr>
Date: Wed May 29 18:54:41 2024 +0200
misc
diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index ce79ac33e8d3..d904e13e069b 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
spin_unlock(&vcpu3s->mmu_lock);
}
-static void free_pte_rcu(struct rcu_head *head)
-{
- struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
- kmem_cache_free(hpte_cache, pte);
-}
-
static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
@@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
spin_unlock(&vcpu3s->mmu_lock);
- call_rcu(&pte->rcu_head, free_pte_rcu);
+ kfree_rcu(pte, rcu_head);
}
static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 25dd4db11121..ce82770c72ab 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
atomic_long_inc(&ioc->refcount);
}
-static void icq_free_icq_rcu(struct rcu_head *head)
-{
- struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
-
- kmem_cache_free(icq->__rcu_icq_cache, icq);
-}
-
/*
* Exit an icq. Called with ioc locked for blk-mq, and with both ioc
* and queue locked for legacy.
@@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
*/
icq->__rcu_icq_cache = et->icq_cache;
icq->flags |= ICQ_DESTROYED;
- call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
+ kfree_rcu(icq, __rcu_head);
}
/*
diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
index 0ba714ca5185..e4e1638fce1b 100644
--- a/drivers/net/wireguard/allowedips.c
+++ b/drivers/net/wireguard/allowedips.c
@@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
}
}
-static void node_free_rcu(struct rcu_head *rcu)
-{
- kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
-}
-
static void root_free_rcu(struct rcu_head *rcu)
{
struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
@@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
child = rcu_dereference_protected(
parent->bit[!(node->parent_bit_packed & 1)],
lockdep_is_held(lock));
- call_rcu(&node->rcu, node_free_rcu);
+ kfree_rcu(node, rcu);
if (!free_parent)
continue;
if (child)
child->parent_bit_packed = parent->parent_bit_packed;
*(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
- call_rcu(&parent->rcu, node_free_rcu);
+ kfree_rcu(parent, rcu);
}
}
diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
index acaa0825e9bb..49d626ff33a9 100644
--- a/fs/ecryptfs/dentry.c
+++ b/fs/ecryptfs/dentry.c
@@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
struct kmem_cache *ecryptfs_dentry_info_cache;
-static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
-{
- kmem_cache_free(ecryptfs_dentry_info_cache,
- container_of(head, struct ecryptfs_dentry_info, rcu));
-}
-
/**
* ecryptfs_d_release
* @dentry: The ecryptfs dentry
@@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
struct ecryptfs_dentry_info *p = dentry->d_fsdata;
if (p) {
path_put(&p->lower_path);
- call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
+ kfree_rcu(p, rcu);
}
}
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index a20c2c9d7d45..eba5083504c7 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
return x;
}
-static void nfsd4_free_file_rcu(struct rcu_head *rcu)
-{
- struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
-
- kmem_cache_free(file_slab, fp);
-}
-
void
put_nfs4_file(struct nfs4_file *fi)
{
@@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
nfsd4_file_hash_remove(fi);
WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
- call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
+ kfree_rcu(fi, fi_rcu);
}
}
diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
index 7c29f4afc23d..338c52168e61 100644
--- a/fs/tracefs/inode.c
+++ b/fs/tracefs/inode.c
@@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
return &ti->vfs_inode;
}
-static void tracefs_free_inode_rcu(struct rcu_head *rcu)
-{
- struct tracefs_inode *ti;
-
- ti = container_of(rcu, struct tracefs_inode, rcu);
- kmem_cache_free(tracefs_inode_cachep, ti);
-}
-
static void tracefs_free_inode(struct inode *inode)
{
struct tracefs_inode *ti = get_tracefs(inode);
@@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
list_del_rcu(&ti->list);
spin_unlock_irqrestore(&tracefs_inode_lock, flags);
- call_rcu(&ti->rcu, tracefs_free_inode_rcu);
+ kfree_rcu(ti, rcu);
}
static ssize_t default_read_file(struct file *file, char __user *buf,
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index b924f0f096fa..bad5db979664 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
return tmr;
}
-static void k_itimer_rcu_free(struct rcu_head *head)
-{
- struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
-
- kmem_cache_free(posix_timers_cache, tmr);
-}
-
static void posix_timer_free(struct k_itimer *tmr)
{
put_pid(tmr->it_pid);
sigqueue_free(tmr->sigq);
- call_rcu(&tmr->rcu, k_itimer_rcu_free);
+ kfree_rcu(tmr, rcu);
}
static void posix_timer_unhash_and_free(struct k_itimer *tmr)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 003474c9a77d..367fc459cbd2 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
return NULL;
}
-static void rcu_free_pwq(struct rcu_head *rcu)
-{
- kmem_cache_free(pwq_cache,
- container_of(rcu, struct pool_workqueue, rcu));
-}
-
/*
* Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
* refcnt and needs to be destroyed.
@@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
raw_spin_unlock_irq(&nna->lock);
}
- call_rcu(&pwq->rcu, rcu_free_pwq);
+ kfree_rcu(pwq, rcu);
/*
* If we're the last pwq going away, @wq is already dead and no one
diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
index c77591e63841..6d04b48f7e2c 100644
--- a/net/bridge/br_fdb.c
+++ b/net/bridge/br_fdb.c
@@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
time_before_eq(fdb->updated + hold_time(br), jiffies);
}
-static void fdb_rcu_free(struct rcu_head *head)
-{
- struct net_bridge_fdb_entry *ent
- = container_of(head, struct net_bridge_fdb_entry, rcu);
- kmem_cache_free(br_fdb_cache, ent);
-}
-
static int fdb_to_nud(const struct net_bridge *br,
const struct net_bridge_fdb_entry *fdb)
{
@@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
atomic_dec(&br->fdb_n_learned);
fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
- call_rcu(&f->rcu, fdb_rcu_free);
+ kfree_rcu(f, rcu);
}
/* Delete a local entry if no other port had the same address.
diff --git a/net/can/gw.c b/net/can/gw.c
index 37528826935e..ffb9870e2d01 100644
--- a/net/can/gw.c
+++ b/net/can/gw.c
@@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
}
-static void cgw_job_free_rcu(struct rcu_head *rcu_head)
-{
- struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
-
- kmem_cache_free(cgw_cache, gwj);
-}
-
static int cgw_notifier(struct notifier_block *nb,
unsigned long msg, void *ptr)
{
@@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
if (gwj->src.dev == dev || gwj->dst.dev == dev) {
hlist_del(&gwj->list);
cgw_unregister_filter(net, gwj);
- call_rcu(&gwj->rcu, cgw_job_free_rcu);
+ kfree_rcu(gwj, rcu);
}
}
}
@@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
hlist_del(&gwj->list);
cgw_unregister_filter(net, gwj);
- call_rcu(&gwj->rcu, cgw_job_free_rcu);
+ kfree_rcu(gwj, rcu);
}
}
@@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
hlist_del(&gwj->list);
cgw_unregister_filter(net, gwj);
- call_rcu(&gwj->rcu, cgw_job_free_rcu);
+ kfree_rcu(gwj, rcu);
err = 0;
break;
}
diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
index f474106464d2..3ed92e583417 100644
--- a/net/ipv4/fib_trie.c
+++ b/net/ipv4/fib_trie.c
@@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
static const int halve_threshold_root = 15;
static const int inflate_threshold_root = 30;
-static void __alias_free_mem(struct rcu_head *head)
-{
- struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
- kmem_cache_free(fn_alias_kmem, fa);
-}
-
static inline void alias_free_mem_rcu(struct fib_alias *fa)
{
- call_rcu(&fa->rcu, __alias_free_mem);
+ kfree_rcu(fa, rcu);
}
#define TNODE_VMALLOC_MAX \
diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index 5bd759963451..5ab56f4cb529 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
return NULL;
}
-static void inetpeer_free_rcu(struct rcu_head *head)
-{
- kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
-}
-
/* perform garbage collect on all items stacked during a lookup */
static void inet_peer_gc(struct inet_peer_base *base,
struct inet_peer *gc_stack[],
@@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
if (p) {
rb_erase(&p->rb_node, &base->rb_root);
base->total--;
- call_rcu(&p->rcu, inetpeer_free_rcu);
+ kfree_rcu(p, rcu);
}
}
}
@@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
WRITE_ONCE(p->dtime, (__u32)jiffies);
if (refcount_dec_and_test(&p->refcnt))
- call_rcu(&p->rcu, inetpeer_free_rcu);
+ kfree_rcu(p, rcu);
}
EXPORT_SYMBOL_GPL(inet_putpeer);
diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
index 31d77885bcae..bafc49de270e 100644
--- a/net/ipv6/ip6_fib.c
+++ b/net/ipv6/ip6_fib.c
@@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
net->ipv6.rt6_stats->fib_nodes--;
}
-static void node_free_rcu(struct rcu_head *head)
-{
- struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
-
- kmem_cache_free(fib6_node_kmem, fn);
-}
-
static void node_free(struct net *net, struct fib6_node *fn)
{
- call_rcu(&fn->rcu, node_free_rcu);
+ kfree_rcu(fn, rcu);
net->ipv6.rt6_stats->fib_nodes--;
}
diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
index bf140ef781c1..c3c893ddb6ee 100644
--- a/net/ipv6/xfrm6_tunnel.c
+++ b/net/ipv6/xfrm6_tunnel.c
@@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
}
EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
-static void x6spi_destroy_rcu(struct rcu_head *head)
-{
- kmem_cache_free(xfrm6_tunnel_spi_kmem,
- container_of(head, struct xfrm6_tunnel_spi, rcu_head));
-}
-
static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
{
struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
@@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
if (refcount_dec_and_test(&x6spi->refcnt)) {
hlist_del_rcu(&x6spi->list_byaddr);
hlist_del_rcu(&x6spi->list_byspi);
- call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
+ kfree_rcu(x6spi, rcu_head);
break;
}
}
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 2f191e50d4fc..fbb730cd2d38 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
return err;
}
-static void free_mux(struct rcu_head *rcu)
-{
- struct kcm_mux *mux = container_of(rcu,
- struct kcm_mux, rcu);
-
- kmem_cache_free(kcm_muxp, mux);
-}
-
static void release_mux(struct kcm_mux *mux)
{
struct kcm_net *knet = mux->knet;
@@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
knet->count--;
mutex_unlock(&knet->mutex);
- call_rcu(&mux->rcu, free_mux);
+ kfree_rcu(mux, rcu);
}
static void kcm_done(struct kcm_sock *kcm)
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 8715617b02fe..587bfcb79723 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
}
EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
-static void __tree_nodes_free(struct rcu_head *h)
-{
- struct nf_conncount_rb *rbconn;
-
- rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
- kmem_cache_free(conncount_rb_cachep, rbconn);
-}
-
/* caller must hold tree nf_conncount_locks[] lock */
static void tree_nodes_free(struct rb_root *root,
struct nf_conncount_rb *gc_nodes[],
@@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
spin_lock(&rbconn->list.list_lock);
if (!rbconn->list.count) {
rb_erase(&rbconn->node, root);
- call_rcu(&rbconn->rcu_head, __tree_nodes_free);
+ kfree_rcu(rbconn, rcu_head);
}
spin_unlock(&rbconn->list.list_lock);
}
diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
index 21fa550966f0..9dcaef6f3663 100644
--- a/net/netfilter/nf_conntrack_expect.c
+++ b/net/netfilter/nf_conntrack_expect.c
@@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
}
EXPORT_SYMBOL_GPL(nf_ct_expect_init);
-static void nf_ct_expect_free_rcu(struct rcu_head *head)
-{
- struct nf_conntrack_expect *exp;
-
- exp = container_of(head, struct nf_conntrack_expect, rcu);
- kmem_cache_free(nf_ct_expect_cachep, exp);
-}
-
void nf_ct_expect_put(struct nf_conntrack_expect *exp)
{
if (refcount_dec_and_test(&exp->use))
- call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
+ kfree_rcu(exp, rcu);
}
EXPORT_SYMBOL_GPL(nf_ct_expect_put);
diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
index 0859b8f76764..c2b9b954eb53 100644
--- a/net/netfilter/xt_hashlimit.c
+++ b/net/netfilter/xt_hashlimit.c
@@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
return ent;
}
-static void dsthash_free_rcu(struct rcu_head *head)
-{
- struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
-
- kmem_cache_free(hashlimit_cachep, ent);
-}
-
static inline void
dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
{
hlist_del_rcu(&ent->node);
- call_rcu(&ent->rcu, dsthash_free_rcu);
+ kfree_rcu(ent, rcu);
ht->count--;
}
static void htable_gc(struct work_struct *work);
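
For reference, the rule that drives such a conversion is roughly of the following
shape. This is only a simplified sketch (the rule names, and the restriction to
callbacks whose body is a single kmem_cache_free() of the enclosing object, are
illustrative assumptions, not the exact semantic patch used): the first rule binds
a trivial rcu_head callback, the second rewrites the matching call_rcu() calls to
kfree_rcu(), and the third deletes the now-unused callback.

// Sketch of a semantic patch for trivial call_rcu() -> kfree_rcu()
// conversions; callbacks that store the result of container_of() in a
// local variable first would need an additional, analogous pattern.

@cb@
identifier fn, cache, member;
type T;
@@

static void fn(struct rcu_head *head)
{
	kmem_cache_free(cache, container_of(head, T, member));
}

@convert@
identifier cb.fn, cb.member;
expression e;
@@

-	call_rcu(&e->member, fn);
+	kfree_rcu(e, member);

@remove depends on convert@
identifier cb.fn, cb.cache, cb.member;
type cb.T;
@@

-static void fn(struct rcu_head *head)
-{
-	kmem_cache_free(cache, container_of(head, T, member));
-}

Running something like "spatch --sp-file kfree_rcu.cocci --in-place --dir ."
(file name assumed here) over the tree produces diffs of the form above.
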
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-31 16:02 ` Julia Lawall
@ 2024-06-03 17:25 ` Paul E. McKenney
2024-06-03 19:29 ` Vlastimil Babka
2024-06-04 11:26 ` Uladzislau Rezki
2 siblings, 0 replies; 24+ messages in thread
From: Paul E. McKenney @ 2024-06-03 17:25 UTC (permalink / raw)
To: Julia Lawall; +Cc: Uladzislau Rezki, Vlastimil Babka, linux-mm, RCU, cocci
On Fri, May 31, 2024 at 06:02:01PM +0200, Julia Lawall wrote:
> Here are the changes proposed by Coccinelle.
>
> I didn't succeed in compiling the powerpc code, but the rest has been
> compile-tested.
Thank you, Julia!
For the set:
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
> julia
>
> commit 1881f31fe3ad693d07ecff45985dd0e87534923f
> Author: Julia Lawall <Julia.Lawall@inria.fr>
> Date: Wed May 29 18:54:41 2024 +0200
>
> misc
>
> diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
> index ce79ac33e8d3..d904e13e069b 100644
> --- a/arch/powerpc/kvm/book3s_mmu_hpte.c
> +++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
> @@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> spin_unlock(&vcpu3s->mmu_lock);
> }
>
> -static void free_pte_rcu(struct rcu_head *head)
> -{
> - struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
> - kmem_cache_free(hpte_cache, pte);
> -}
> -
> static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> {
> struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
> @@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
>
> spin_unlock(&vcpu3s->mmu_lock);
>
> - call_rcu(&pte->rcu_head, free_pte_rcu);
> + kfree_rcu(pte, rcu_head);
> }
>
> static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
> diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> index 25dd4db11121..ce82770c72ab 100644
> --- a/block/blk-ioc.c
> +++ b/block/blk-ioc.c
> @@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
> atomic_long_inc(&ioc->refcount);
> }
>
> -static void icq_free_icq_rcu(struct rcu_head *head)
> -{
> - struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
> -
> - kmem_cache_free(icq->__rcu_icq_cache, icq);
> -}
> -
> /*
> * Exit an icq. Called with ioc locked for blk-mq, and with both ioc
> * and queue locked for legacy.
> @@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
> */
> icq->__rcu_icq_cache = et->icq_cache;
> icq->flags |= ICQ_DESTROYED;
> - call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
> + kfree_rcu(icq, __rcu_head);
> }
>
> /*
> diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> index 0ba714ca5185..e4e1638fce1b 100644
> --- a/drivers/net/wireguard/allowedips.c
> +++ b/drivers/net/wireguard/allowedips.c
> @@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
> }
> }
>
> -static void node_free_rcu(struct rcu_head *rcu)
> -{
> - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> -}
> -
> static void root_free_rcu(struct rcu_head *rcu)
> {
> struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
> @@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
> child = rcu_dereference_protected(
> parent->bit[!(node->parent_bit_packed & 1)],
> lockdep_is_held(lock));
> - call_rcu(&node->rcu, node_free_rcu);
> + kfree_rcu(node, rcu);
> if (!free_parent)
> continue;
> if (child)
> child->parent_bit_packed = parent->parent_bit_packed;
> *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
> - call_rcu(&parent->rcu, node_free_rcu);
> + kfree_rcu(parent, rcu);
> }
> }
>
> diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
> index acaa0825e9bb..49d626ff33a9 100644
> --- a/fs/ecryptfs/dentry.c
> +++ b/fs/ecryptfs/dentry.c
> @@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
>
> struct kmem_cache *ecryptfs_dentry_info_cache;
>
> -static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(ecryptfs_dentry_info_cache,
> - container_of(head, struct ecryptfs_dentry_info, rcu));
> -}
> -
> /**
> * ecryptfs_d_release
> * @dentry: The ecryptfs dentry
> @@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
> struct ecryptfs_dentry_info *p = dentry->d_fsdata;
> if (p) {
> path_put(&p->lower_path);
> - call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
> + kfree_rcu(p, rcu);
> }
> }
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index a20c2c9d7d45..eba5083504c7 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
> return x;
> }
>
> -static void nfsd4_free_file_rcu(struct rcu_head *rcu)
> -{
> - struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
> -
> - kmem_cache_free(file_slab, fp);
> -}
> -
> void
> put_nfs4_file(struct nfs4_file *fi)
> {
> @@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
> nfsd4_file_hash_remove(fi);
> WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
> WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
> - call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
> + kfree_rcu(fi, fi_rcu);
> }
> }
>
> diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
> index 7c29f4afc23d..338c52168e61 100644
> --- a/fs/tracefs/inode.c
> +++ b/fs/tracefs/inode.c
> @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
> return &ti->vfs_inode;
> }
>
> -static void tracefs_free_inode_rcu(struct rcu_head *rcu)
> -{
> - struct tracefs_inode *ti;
> -
> - ti = container_of(rcu, struct tracefs_inode, rcu);
> - kmem_cache_free(tracefs_inode_cachep, ti);
> -}
> -
> static void tracefs_free_inode(struct inode *inode)
> {
> struct tracefs_inode *ti = get_tracefs(inode);
> @@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
> list_del_rcu(&ti->list);
> spin_unlock_irqrestore(&tracefs_inode_lock, flags);
>
> - call_rcu(&ti->rcu, tracefs_free_inode_rcu);
> + kfree_rcu(ti, rcu);
> }
>
> static ssize_t default_read_file(struct file *file, char __user *buf,
> diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> index b924f0f096fa..bad5db979664 100644
> --- a/kernel/time/posix-timers.c
> +++ b/kernel/time/posix-timers.c
> @@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
> return tmr;
> }
>
> -static void k_itimer_rcu_free(struct rcu_head *head)
> -{
> - struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
> -
> - kmem_cache_free(posix_timers_cache, tmr);
> -}
> -
> static void posix_timer_free(struct k_itimer *tmr)
> {
> put_pid(tmr->it_pid);
> sigqueue_free(tmr->sigq);
> - call_rcu(&tmr->rcu, k_itimer_rcu_free);
> + kfree_rcu(tmr, rcu);
> }
>
> static void posix_timer_unhash_and_free(struct k_itimer *tmr)
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 003474c9a77d..367fc459cbd2 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> return NULL;
> }
>
> -static void rcu_free_pwq(struct rcu_head *rcu)
> -{
> - kmem_cache_free(pwq_cache,
> - container_of(rcu, struct pool_workqueue, rcu));
> -}
> -
> /*
> * Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
> * refcnt and needs to be destroyed.
> @@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
> raw_spin_unlock_irq(&nna->lock);
> }
>
> - call_rcu(&pwq->rcu, rcu_free_pwq);
> + kfree_rcu(pwq, rcu);
>
> /*
> * If we're the last pwq going away, @wq is already dead and no one
> diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
> index c77591e63841..6d04b48f7e2c 100644
> --- a/net/bridge/br_fdb.c
> +++ b/net/bridge/br_fdb.c
> @@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
> time_before_eq(fdb->updated + hold_time(br), jiffies);
> }
>
> -static void fdb_rcu_free(struct rcu_head *head)
> -{
> - struct net_bridge_fdb_entry *ent
> - = container_of(head, struct net_bridge_fdb_entry, rcu);
> - kmem_cache_free(br_fdb_cache, ent);
> -}
> -
> static int fdb_to_nud(const struct net_bridge *br,
> const struct net_bridge_fdb_entry *fdb)
> {
> @@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
> if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
> atomic_dec(&br->fdb_n_learned);
> fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
> - call_rcu(&f->rcu, fdb_rcu_free);
> + kfree_rcu(f, rcu);
> }
>
> /* Delete a local entry if no other port had the same address.
> diff --git a/net/can/gw.c b/net/can/gw.c
> index 37528826935e..ffb9870e2d01 100644
> --- a/net/can/gw.c
> +++ b/net/can/gw.c
> @@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
> gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
> }
>
> -static void cgw_job_free_rcu(struct rcu_head *rcu_head)
> -{
> - struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
> -
> - kmem_cache_free(cgw_cache, gwj);
> -}
> -
> static int cgw_notifier(struct notifier_block *nb,
> unsigned long msg, void *ptr)
> {
> @@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
> if (gwj->src.dev == dev || gwj->dst.dev == dev) {
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> }
> }
> }
> @@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
> hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> }
> }
>
> @@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
>
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> err = 0;
> break;
> }
> diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> index f474106464d2..3ed92e583417 100644
> --- a/net/ipv4/fib_trie.c
> +++ b/net/ipv4/fib_trie.c
> @@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
> static const int halve_threshold_root = 15;
> static const int inflate_threshold_root = 30;
>
> -static void __alias_free_mem(struct rcu_head *head)
> -{
> - struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
> - kmem_cache_free(fn_alias_kmem, fa);
> -}
> -
> static inline void alias_free_mem_rcu(struct fib_alias *fa)
> {
> - call_rcu(&fa->rcu, __alias_free_mem);
> + kfree_rcu(fa, rcu);
> }
>
> #define TNODE_VMALLOC_MAX \
> diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
> index 5bd759963451..5ab56f4cb529 100644
> --- a/net/ipv4/inetpeer.c
> +++ b/net/ipv4/inetpeer.c
> @@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
> return NULL;
> }
>
> -static void inetpeer_free_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> -}
> -
> /* perform garbage collect on all items stacked during a lookup */
> static void inet_peer_gc(struct inet_peer_base *base,
> struct inet_peer *gc_stack[],
> @@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
> if (p) {
> rb_erase(&p->rb_node, &base->rb_root);
> base->total--;
> - call_rcu(&p->rcu, inetpeer_free_rcu);
> + kfree_rcu(p, rcu);
> }
> }
> }
> @@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
> WRITE_ONCE(p->dtime, (__u32)jiffies);
>
> if (refcount_dec_and_test(&p->refcnt))
> - call_rcu(&p->rcu, inetpeer_free_rcu);
> + kfree_rcu(p, rcu);
> }
> EXPORT_SYMBOL_GPL(inet_putpeer);
>
> diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> index 31d77885bcae..bafc49de270e 100644
> --- a/net/ipv6/ip6_fib.c
> +++ b/net/ipv6/ip6_fib.c
> @@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
> net->ipv6.rt6_stats->fib_nodes--;
> }
>
> -static void node_free_rcu(struct rcu_head *head)
> -{
> - struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
> -
> - kmem_cache_free(fib6_node_kmem, fn);
> -}
> -
> static void node_free(struct net *net, struct fib6_node *fn)
> {
> - call_rcu(&fn->rcu, node_free_rcu);
> + kfree_rcu(fn, rcu);
> net->ipv6.rt6_stats->fib_nodes--;
> }
>
> diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
> index bf140ef781c1..c3c893ddb6ee 100644
> --- a/net/ipv6/xfrm6_tunnel.c
> +++ b/net/ipv6/xfrm6_tunnel.c
> @@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
> }
> EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
>
> -static void x6spi_destroy_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> -}
> -
> static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> {
> struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
> @@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> if (refcount_dec_and_test(&x6spi->refcnt)) {
> hlist_del_rcu(&x6spi->list_byaddr);
> hlist_del_rcu(&x6spi->list_byspi);
> - call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
> + kfree_rcu(x6spi, rcu_head);
> break;
> }
> }
> diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> index 2f191e50d4fc..fbb730cd2d38 100644
> --- a/net/kcm/kcmsock.c
> +++ b/net/kcm/kcmsock.c
> @@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> return err;
> }
>
> -static void free_mux(struct rcu_head *rcu)
> -{
> - struct kcm_mux *mux = container_of(rcu,
> - struct kcm_mux, rcu);
> -
> - kmem_cache_free(kcm_muxp, mux);
> -}
> -
> static void release_mux(struct kcm_mux *mux)
> {
> struct kcm_net *knet = mux->knet;
> @@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
> knet->count--;
> mutex_unlock(&knet->mutex);
>
> - call_rcu(&mux->rcu, free_mux);
> + kfree_rcu(mux, rcu);
> }
>
> static void kcm_done(struct kcm_sock *kcm)
> diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
> index 8715617b02fe..587bfcb79723 100644
> --- a/net/netfilter/nf_conncount.c
> +++ b/net/netfilter/nf_conncount.c
> @@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
> }
> EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
>
> -static void __tree_nodes_free(struct rcu_head *h)
> -{
> - struct nf_conncount_rb *rbconn;
> -
> - rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
> - kmem_cache_free(conncount_rb_cachep, rbconn);
> -}
> -
> /* caller must hold tree nf_conncount_locks[] lock */
> static void tree_nodes_free(struct rb_root *root,
> struct nf_conncount_rb *gc_nodes[],
> @@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
> spin_lock(&rbconn->list.list_lock);
> if (!rbconn->list.count) {
> rb_erase(&rbconn->node, root);
> - call_rcu(&rbconn->rcu_head, __tree_nodes_free);
> + kfree_rcu(rbconn, rcu_head);
> }
> spin_unlock(&rbconn->list.list_lock);
> }
> diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
> index 21fa550966f0..9dcaef6f3663 100644
> --- a/net/netfilter/nf_conntrack_expect.c
> +++ b/net/netfilter/nf_conntrack_expect.c
> @@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
> }
> EXPORT_SYMBOL_GPL(nf_ct_expect_init);
>
> -static void nf_ct_expect_free_rcu(struct rcu_head *head)
> -{
> - struct nf_conntrack_expect *exp;
> -
> - exp = container_of(head, struct nf_conntrack_expect, rcu);
> - kmem_cache_free(nf_ct_expect_cachep, exp);
> -}
> -
> void nf_ct_expect_put(struct nf_conntrack_expect *exp)
> {
> if (refcount_dec_and_test(&exp->use))
> - call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
> + kfree_rcu(exp, rcu);
> }
> EXPORT_SYMBOL_GPL(nf_ct_expect_put);
>
> diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> index 0859b8f76764..c2b9b954eb53 100644
> --- a/net/netfilter/xt_hashlimit.c
> +++ b/net/netfilter/xt_hashlimit.c
> @@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
> return ent;
> }
>
> -static void dsthash_free_rcu(struct rcu_head *head)
> -{
> - struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
> -
> - kmem_cache_free(hashlimit_cachep, ent);
> -}
> -
> static inline void
> dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> {
> hlist_del_rcu(&ent->node);
> - call_rcu(&ent->rcu, dsthash_free_rcu);
> + kfree_rcu(ent, rcu);
> ht->count--;
> }
> static void htable_gc(struct work_struct *work);
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-31 16:02 ` Julia Lawall
2024-06-03 17:25 ` Paul E. McKenney
@ 2024-06-03 19:29 ` Vlastimil Babka
2024-06-03 19:51 ` Julia Lawall
2024-06-04 11:26 ` Uladzislau Rezki
2 siblings, 1 reply; 24+ messages in thread
From: Vlastimil Babka @ 2024-06-03 19:29 UTC (permalink / raw)
To: Julia Lawall, Paul E. McKenney; +Cc: Uladzislau Rezki, linux-mm, RCU, cocci
On 5/31/24 6:02 PM, Julia Lawall wrote:
> Here are the changes proposed by Coccinelle.
>
> I didn't succeed in compiling the powerpc code, but the rest has been
> compile-tested.
>
> julia
>
> commit 1881f31fe3ad693d07ecff45985dd0e87534923f
> Author: Julia Lawall <Julia.Lawall@inria.fr>
> Date: Wed May 29 18:54:41 2024 +0200
>
> misc
Looks great, thanks!
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Now comes the hard part: how to submit this. One patch (to rcu or linux-mm?)
with many Cc's, or multiple per-subsystem patches? There's always someone
who would have preferred whichever way wasn't chosen...
> diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
> index ce79ac33e8d3..d904e13e069b 100644
> --- a/arch/powerpc/kvm/book3s_mmu_hpte.c
> +++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
> @@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> spin_unlock(&vcpu3s->mmu_lock);
> }
>
> -static void free_pte_rcu(struct rcu_head *head)
> -{
> - struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
> - kmem_cache_free(hpte_cache, pte);
> -}
> -
> static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> {
> struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
> @@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
>
> spin_unlock(&vcpu3s->mmu_lock);
>
> - call_rcu(&pte->rcu_head, free_pte_rcu);
> + kfree_rcu(pte, rcu_head);
> }
>
> static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
> diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> index 25dd4db11121..ce82770c72ab 100644
> --- a/block/blk-ioc.c
> +++ b/block/blk-ioc.c
> @@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
> atomic_long_inc(&ioc->refcount);
> }
>
> -static void icq_free_icq_rcu(struct rcu_head *head)
> -{
> - struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
> -
> - kmem_cache_free(icq->__rcu_icq_cache, icq);
> -}
> -
> /*
> * Exit an icq. Called with ioc locked for blk-mq, and with both ioc
> * and queue locked for legacy.
> @@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
> */
> icq->__rcu_icq_cache = et->icq_cache;
> icq->flags |= ICQ_DESTROYED;
> - call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
> + kfree_rcu(icq, __rcu_head);
> }
>
> /*
> diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> index 0ba714ca5185..e4e1638fce1b 100644
> --- a/drivers/net/wireguard/allowedips.c
> +++ b/drivers/net/wireguard/allowedips.c
> @@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
> }
> }
>
> -static void node_free_rcu(struct rcu_head *rcu)
> -{
> - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> -}
> -
> static void root_free_rcu(struct rcu_head *rcu)
> {
> struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
> @@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
> child = rcu_dereference_protected(
> parent->bit[!(node->parent_bit_packed & 1)],
> lockdep_is_held(lock));
> - call_rcu(&node->rcu, node_free_rcu);
> + kfree_rcu(node, rcu);
> if (!free_parent)
> continue;
> if (child)
> child->parent_bit_packed = parent->parent_bit_packed;
> *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
> - call_rcu(&parent->rcu, node_free_rcu);
> + kfree_rcu(parent, rcu);
> }
> }
>
> diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
> index acaa0825e9bb..49d626ff33a9 100644
> --- a/fs/ecryptfs/dentry.c
> +++ b/fs/ecryptfs/dentry.c
> @@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
>
> struct kmem_cache *ecryptfs_dentry_info_cache;
>
> -static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(ecryptfs_dentry_info_cache,
> - container_of(head, struct ecryptfs_dentry_info, rcu));
> -}
> -
> /**
> * ecryptfs_d_release
> * @dentry: The ecryptfs dentry
> @@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
> struct ecryptfs_dentry_info *p = dentry->d_fsdata;
> if (p) {
> path_put(&p->lower_path);
> - call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
> + kfree_rcu(p, rcu);
> }
> }
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index a20c2c9d7d45..eba5083504c7 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
> return x;
> }
>
> -static void nfsd4_free_file_rcu(struct rcu_head *rcu)
> -{
> - struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
> -
> - kmem_cache_free(file_slab, fp);
> -}
> -
> void
> put_nfs4_file(struct nfs4_file *fi)
> {
> @@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
> nfsd4_file_hash_remove(fi);
> WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
> WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
> - call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
> + kfree_rcu(fi, fi_rcu);
> }
> }
>
> diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
> index 7c29f4afc23d..338c52168e61 100644
> --- a/fs/tracefs/inode.c
> +++ b/fs/tracefs/inode.c
> @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
> return &ti->vfs_inode;
> }
>
> -static void tracefs_free_inode_rcu(struct rcu_head *rcu)
> -{
> - struct tracefs_inode *ti;
> -
> - ti = container_of(rcu, struct tracefs_inode, rcu);
> - kmem_cache_free(tracefs_inode_cachep, ti);
> -}
> -
> static void tracefs_free_inode(struct inode *inode)
> {
> struct tracefs_inode *ti = get_tracefs(inode);
> @@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
> list_del_rcu(&ti->list);
> spin_unlock_irqrestore(&tracefs_inode_lock, flags);
>
> - call_rcu(&ti->rcu, tracefs_free_inode_rcu);
> + kfree_rcu(ti, rcu);
> }
>
> static ssize_t default_read_file(struct file *file, char __user *buf,
> diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> index b924f0f096fa..bad5db979664 100644
> --- a/kernel/time/posix-timers.c
> +++ b/kernel/time/posix-timers.c
> @@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
> return tmr;
> }
>
> -static void k_itimer_rcu_free(struct rcu_head *head)
> -{
> - struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
> -
> - kmem_cache_free(posix_timers_cache, tmr);
> -}
> -
> static void posix_timer_free(struct k_itimer *tmr)
> {
> put_pid(tmr->it_pid);
> sigqueue_free(tmr->sigq);
> - call_rcu(&tmr->rcu, k_itimer_rcu_free);
> + kfree_rcu(tmr, rcu);
> }
>
> static void posix_timer_unhash_and_free(struct k_itimer *tmr)
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 003474c9a77d..367fc459cbd2 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> return NULL;
> }
>
> -static void rcu_free_pwq(struct rcu_head *rcu)
> -{
> - kmem_cache_free(pwq_cache,
> - container_of(rcu, struct pool_workqueue, rcu));
> -}
> -
> /*
> * Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
> * refcnt and needs to be destroyed.
> @@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
> raw_spin_unlock_irq(&nna->lock);
> }
>
> - call_rcu(&pwq->rcu, rcu_free_pwq);
> + kfree_rcu(pwq, rcu);
>
> /*
> * If we're the last pwq going away, @wq is already dead and no one
> diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
> index c77591e63841..6d04b48f7e2c 100644
> --- a/net/bridge/br_fdb.c
> +++ b/net/bridge/br_fdb.c
> @@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
> time_before_eq(fdb->updated + hold_time(br), jiffies);
> }
>
> -static void fdb_rcu_free(struct rcu_head *head)
> -{
> - struct net_bridge_fdb_entry *ent
> - = container_of(head, struct net_bridge_fdb_entry, rcu);
> - kmem_cache_free(br_fdb_cache, ent);
> -}
> -
> static int fdb_to_nud(const struct net_bridge *br,
> const struct net_bridge_fdb_entry *fdb)
> {
> @@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
> if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
> atomic_dec(&br->fdb_n_learned);
> fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
> - call_rcu(&f->rcu, fdb_rcu_free);
> + kfree_rcu(f, rcu);
> }
>
> /* Delete a local entry if no other port had the same address.
> diff --git a/net/can/gw.c b/net/can/gw.c
> index 37528826935e..ffb9870e2d01 100644
> --- a/net/can/gw.c
> +++ b/net/can/gw.c
> @@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
> gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
> }
>
> -static void cgw_job_free_rcu(struct rcu_head *rcu_head)
> -{
> - struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
> -
> - kmem_cache_free(cgw_cache, gwj);
> -}
> -
> static int cgw_notifier(struct notifier_block *nb,
> unsigned long msg, void *ptr)
> {
> @@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
> if (gwj->src.dev == dev || gwj->dst.dev == dev) {
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> }
> }
> }
> @@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
> hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> }
> }
>
> @@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
>
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> err = 0;
> break;
> }
> diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> index f474106464d2..3ed92e583417 100644
> --- a/net/ipv4/fib_trie.c
> +++ b/net/ipv4/fib_trie.c
> @@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
> static const int halve_threshold_root = 15;
> static const int inflate_threshold_root = 30;
>
> -static void __alias_free_mem(struct rcu_head *head)
> -{
> - struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
> - kmem_cache_free(fn_alias_kmem, fa);
> -}
> -
> static inline void alias_free_mem_rcu(struct fib_alias *fa)
> {
> - call_rcu(&fa->rcu, __alias_free_mem);
> + kfree_rcu(fa, rcu);
> }
>
> #define TNODE_VMALLOC_MAX \
> diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
> index 5bd759963451..5ab56f4cb529 100644
> --- a/net/ipv4/inetpeer.c
> +++ b/net/ipv4/inetpeer.c
> @@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
> return NULL;
> }
>
> -static void inetpeer_free_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> -}
> -
> /* perform garbage collect on all items stacked during a lookup */
> static void inet_peer_gc(struct inet_peer_base *base,
> struct inet_peer *gc_stack[],
> @@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
> if (p) {
> rb_erase(&p->rb_node, &base->rb_root);
> base->total--;
> - call_rcu(&p->rcu, inetpeer_free_rcu);
> + kfree_rcu(p, rcu);
> }
> }
> }
> @@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
> WRITE_ONCE(p->dtime, (__u32)jiffies);
>
> if (refcount_dec_and_test(&p->refcnt))
> - call_rcu(&p->rcu, inetpeer_free_rcu);
> + kfree_rcu(p, rcu);
> }
> EXPORT_SYMBOL_GPL(inet_putpeer);
>
> diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> index 31d77885bcae..bafc49de270e 100644
> --- a/net/ipv6/ip6_fib.c
> +++ b/net/ipv6/ip6_fib.c
> @@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
> net->ipv6.rt6_stats->fib_nodes--;
> }
>
> -static void node_free_rcu(struct rcu_head *head)
> -{
> - struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
> -
> - kmem_cache_free(fib6_node_kmem, fn);
> -}
> -
> static void node_free(struct net *net, struct fib6_node *fn)
> {
> - call_rcu(&fn->rcu, node_free_rcu);
> + kfree_rcu(fn, rcu);
> net->ipv6.rt6_stats->fib_nodes--;
> }
>
> diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
> index bf140ef781c1..c3c893ddb6ee 100644
> --- a/net/ipv6/xfrm6_tunnel.c
> +++ b/net/ipv6/xfrm6_tunnel.c
> @@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
> }
> EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
>
> -static void x6spi_destroy_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> -}
> -
> static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> {
> struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
> @@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> if (refcount_dec_and_test(&x6spi->refcnt)) {
> hlist_del_rcu(&x6spi->list_byaddr);
> hlist_del_rcu(&x6spi->list_byspi);
> - call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
> + kfree_rcu(x6spi, rcu_head);
> break;
> }
> }
> diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> index 2f191e50d4fc..fbb730cd2d38 100644
> --- a/net/kcm/kcmsock.c
> +++ b/net/kcm/kcmsock.c
> @@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> return err;
> }
>
> -static void free_mux(struct rcu_head *rcu)
> -{
> - struct kcm_mux *mux = container_of(rcu,
> - struct kcm_mux, rcu);
> -
> - kmem_cache_free(kcm_muxp, mux);
> -}
> -
> static void release_mux(struct kcm_mux *mux)
> {
> struct kcm_net *knet = mux->knet;
> @@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
> knet->count--;
> mutex_unlock(&knet->mutex);
>
> - call_rcu(&mux->rcu, free_mux);
> + kfree_rcu(mux, rcu);
> }
>
> static void kcm_done(struct kcm_sock *kcm)
> diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
> index 8715617b02fe..587bfcb79723 100644
> --- a/net/netfilter/nf_conncount.c
> +++ b/net/netfilter/nf_conncount.c
> @@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
> }
> EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
>
> -static void __tree_nodes_free(struct rcu_head *h)
> -{
> - struct nf_conncount_rb *rbconn;
> -
> - rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
> - kmem_cache_free(conncount_rb_cachep, rbconn);
> -}
> -
> /* caller must hold tree nf_conncount_locks[] lock */
> static void tree_nodes_free(struct rb_root *root,
> struct nf_conncount_rb *gc_nodes[],
> @@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
> spin_lock(&rbconn->list.list_lock);
> if (!rbconn->list.count) {
> rb_erase(&rbconn->node, root);
> - call_rcu(&rbconn->rcu_head, __tree_nodes_free);
> + kfree_rcu(rbconn, rcu_head);
> }
> spin_unlock(&rbconn->list.list_lock);
> }
> diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
> index 21fa550966f0..9dcaef6f3663 100644
> --- a/net/netfilter/nf_conntrack_expect.c
> +++ b/net/netfilter/nf_conntrack_expect.c
> @@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
> }
> EXPORT_SYMBOL_GPL(nf_ct_expect_init);
>
> -static void nf_ct_expect_free_rcu(struct rcu_head *head)
> -{
> - struct nf_conntrack_expect *exp;
> -
> - exp = container_of(head, struct nf_conntrack_expect, rcu);
> - kmem_cache_free(nf_ct_expect_cachep, exp);
> -}
> -
> void nf_ct_expect_put(struct nf_conntrack_expect *exp)
> {
> if (refcount_dec_and_test(&exp->use))
> - call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
> + kfree_rcu(exp, rcu);
> }
> EXPORT_SYMBOL_GPL(nf_ct_expect_put);
>
> diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> index 0859b8f76764..c2b9b954eb53 100644
> --- a/net/netfilter/xt_hashlimit.c
> +++ b/net/netfilter/xt_hashlimit.c
> @@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
> return ent;
> }
>
> -static void dsthash_free_rcu(struct rcu_head *head)
> -{
> - struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
> -
> - kmem_cache_free(hashlimit_cachep, ent);
> -}
> -
> static inline void
> dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> {
> hlist_del_rcu(&ent->node);
> - call_rcu(&ent->rcu, dsthash_free_rcu);
> + kfree_rcu(ent, rcu);
> ht->count--;
> }
> static void htable_gc(struct work_struct *work);
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-06-03 19:29 ` Vlastimil Babka
@ 2024-06-03 19:51 ` Julia Lawall
0 siblings, 0 replies; 24+ messages in thread
From: Julia Lawall @ 2024-06-03 19:51 UTC (permalink / raw)
To: Vlastimil Babka; +Cc: Paul E. McKenney, Uladzislau Rezki, linux-mm, RCU, cocci
On Mon, 3 Jun 2024, Vlastimil Babka wrote:
> On 5/31/24 6:02 PM, Julia Lawall wrote:
> > Here are the changes proposed by Coccinelle.
> >
> > I didn't succeed in compiling the powerpc code, but the rest has been
> > compile-tested.
> >
> > julia
> >
> > commit 1881f31fe3ad693d07ecff45985dd0e87534923f
> > Author: Julia Lawall <Julia.Lawall@inria.fr>
> > Date: Wed May 29 18:54:41 2024 +0200
> >
> > misc
>
> Looks great, thanks!
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> Now comes the hard part, how to submit this. One patch (to rcu or linux-mm?)
> with many Cc's, or multiple per-subsystem patches. There's always someone
> who wants the other way that was chosen...
I was planning to break this up into a patch series. I will keep the two
Reviewed-by tags. Thanks for the review.
julia
>
> > diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > index ce79ac33e8d3..d904e13e069b 100644
> > --- a/arch/powerpc/kvm/book3s_mmu_hpte.c
> > +++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > @@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > spin_unlock(&vcpu3s->mmu_lock);
> > }
> >
> > -static void free_pte_rcu(struct rcu_head *head)
> > -{
> > - struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
> > - kmem_cache_free(hpte_cache, pte);
> > -}
> > -
> > static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > {
> > struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
> > @@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> >
> > spin_unlock(&vcpu3s->mmu_lock);
> >
> > - call_rcu(&pte->rcu_head, free_pte_rcu);
> > + kfree_rcu(pte, rcu_head);
> > }
> >
> > static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
> > diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> > index 25dd4db11121..ce82770c72ab 100644
> > --- a/block/blk-ioc.c
> > +++ b/block/blk-ioc.c
> > @@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
> > atomic_long_inc(&ioc->refcount);
> > }
> >
> > -static void icq_free_icq_rcu(struct rcu_head *head)
> > -{
> > - struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
> > -
> > - kmem_cache_free(icq->__rcu_icq_cache, icq);
> > -}
> > -
> > /*
> > * Exit an icq. Called with ioc locked for blk-mq, and with both ioc
> > * and queue locked for legacy.
> > @@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
> > */
> > icq->__rcu_icq_cache = et->icq_cache;
> > icq->flags |= ICQ_DESTROYED;
> > - call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
> > + kfree_rcu(icq, __rcu_head);
> > }
> >
> > /*
> > diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> > index 0ba714ca5185..e4e1638fce1b 100644
> > --- a/drivers/net/wireguard/allowedips.c
> > +++ b/drivers/net/wireguard/allowedips.c
> > @@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
> > }
> > }
> >
> > -static void node_free_rcu(struct rcu_head *rcu)
> > -{
> > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > -}
> > -
> > static void root_free_rcu(struct rcu_head *rcu)
> > {
> > struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
> > @@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
> > child = rcu_dereference_protected(
> > parent->bit[!(node->parent_bit_packed & 1)],
> > lockdep_is_held(lock));
> > - call_rcu(&node->rcu, node_free_rcu);
> > + kfree_rcu(node, rcu);
> > if (!free_parent)
> > continue;
> > if (child)
> > child->parent_bit_packed = parent->parent_bit_packed;
> > *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
> > - call_rcu(&parent->rcu, node_free_rcu);
> > + kfree_rcu(parent, rcu);
> > }
> > }
> >
> > diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
> > index acaa0825e9bb..49d626ff33a9 100644
> > --- a/fs/ecryptfs/dentry.c
> > +++ b/fs/ecryptfs/dentry.c
> > @@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
> >
> > struct kmem_cache *ecryptfs_dentry_info_cache;
> >
> > -static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > -}
> > -
> > /**
> > * ecryptfs_d_release
> > * @dentry: The ecryptfs dentry
> > @@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
> > struct ecryptfs_dentry_info *p = dentry->d_fsdata;
> > if (p) {
> > path_put(&p->lower_path);
> > - call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > }
> >
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index a20c2c9d7d45..eba5083504c7 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
> > return x;
> > }
> >
> > -static void nfsd4_free_file_rcu(struct rcu_head *rcu)
> > -{
> > - struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
> > -
> > - kmem_cache_free(file_slab, fp);
> > -}
> > -
> > void
> > put_nfs4_file(struct nfs4_file *fi)
> > {
> > @@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
> > nfsd4_file_hash_remove(fi);
> > WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
> > WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
> > - call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
> > + kfree_rcu(fi, fi_rcu);
> > }
> > }
> >
> > diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
> > index 7c29f4afc23d..338c52168e61 100644
> > --- a/fs/tracefs/inode.c
> > +++ b/fs/tracefs/inode.c
> > @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
> > return &ti->vfs_inode;
> > }
> >
> > -static void tracefs_free_inode_rcu(struct rcu_head *rcu)
> > -{
> > - struct tracefs_inode *ti;
> > -
> > - ti = container_of(rcu, struct tracefs_inode, rcu);
> > - kmem_cache_free(tracefs_inode_cachep, ti);
> > -}
> > -
> > static void tracefs_free_inode(struct inode *inode)
> > {
> > struct tracefs_inode *ti = get_tracefs(inode);
> > @@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
> > list_del_rcu(&ti->list);
> > spin_unlock_irqrestore(&tracefs_inode_lock, flags);
> >
> > - call_rcu(&ti->rcu, tracefs_free_inode_rcu);
> > + kfree_rcu(ti, rcu);
> > }
> >
> > static ssize_t default_read_file(struct file *file, char __user *buf,
> > diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> > index b924f0f096fa..bad5db979664 100644
> > --- a/kernel/time/posix-timers.c
> > +++ b/kernel/time/posix-timers.c
> > @@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
> > return tmr;
> > }
> >
> > -static void k_itimer_rcu_free(struct rcu_head *head)
> > -{
> > - struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
> > -
> > - kmem_cache_free(posix_timers_cache, tmr);
> > -}
> > -
> > static void posix_timer_free(struct k_itimer *tmr)
> > {
> > put_pid(tmr->it_pid);
> > sigqueue_free(tmr->sigq);
> > - call_rcu(&tmr->rcu, k_itimer_rcu_free);
> > + kfree_rcu(tmr, rcu);
> > }
> >
> > static void posix_timer_unhash_and_free(struct k_itimer *tmr)
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 003474c9a77d..367fc459cbd2 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> > return NULL;
> > }
> >
> > -static void rcu_free_pwq(struct rcu_head *rcu)
> > -{
> > - kmem_cache_free(pwq_cache,
> > - container_of(rcu, struct pool_workqueue, rcu));
> > -}
> > -
> > /*
> > * Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
> > * refcnt and needs to be destroyed.
> > @@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
> > raw_spin_unlock_irq(&nna->lock);
> > }
> >
> > - call_rcu(&pwq->rcu, rcu_free_pwq);
> > + kfree_rcu(pwq, rcu);
> >
> > /*
> > * If we're the last pwq going away, @wq is already dead and no one
> > diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
> > index c77591e63841..6d04b48f7e2c 100644
> > --- a/net/bridge/br_fdb.c
> > +++ b/net/bridge/br_fdb.c
> > @@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
> > time_before_eq(fdb->updated + hold_time(br), jiffies);
> > }
> >
> > -static void fdb_rcu_free(struct rcu_head *head)
> > -{
> > - struct net_bridge_fdb_entry *ent
> > - = container_of(head, struct net_bridge_fdb_entry, rcu);
> > - kmem_cache_free(br_fdb_cache, ent);
> > -}
> > -
> > static int fdb_to_nud(const struct net_bridge *br,
> > const struct net_bridge_fdb_entry *fdb)
> > {
> > @@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
> > if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
> > atomic_dec(&br->fdb_n_learned);
> > fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
> > - call_rcu(&f->rcu, fdb_rcu_free);
> > + kfree_rcu(f, rcu);
> > }
> >
> > /* Delete a local entry if no other port had the same address.
> > diff --git a/net/can/gw.c b/net/can/gw.c
> > index 37528826935e..ffb9870e2d01 100644
> > --- a/net/can/gw.c
> > +++ b/net/can/gw.c
> > @@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
> > gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
> > }
> >
> > -static void cgw_job_free_rcu(struct rcu_head *rcu_head)
> > -{
> > - struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
> > -
> > - kmem_cache_free(cgw_cache, gwj);
> > -}
> > -
> > static int cgw_notifier(struct notifier_block *nb,
> > unsigned long msg, void *ptr)
> > {
> > @@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
> > if (gwj->src.dev == dev || gwj->dst.dev == dev) {
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > }
> > }
> > }
> > @@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
> > hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > }
> > }
> >
> > @@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
> >
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > err = 0;
> > break;
> > }
> > diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> > index f474106464d2..3ed92e583417 100644
> > --- a/net/ipv4/fib_trie.c
> > +++ b/net/ipv4/fib_trie.c
> > @@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
> > static const int halve_threshold_root = 15;
> > static const int inflate_threshold_root = 30;
> >
> > -static void __alias_free_mem(struct rcu_head *head)
> > -{
> > - struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
> > - kmem_cache_free(fn_alias_kmem, fa);
> > -}
> > -
> > static inline void alias_free_mem_rcu(struct fib_alias *fa)
> > {
> > - call_rcu(&fa->rcu, __alias_free_mem);
> > + kfree_rcu(fa, rcu);
> > }
> >
> > #define TNODE_VMALLOC_MAX \
> > diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
> > index 5bd759963451..5ab56f4cb529 100644
> > --- a/net/ipv4/inetpeer.c
> > +++ b/net/ipv4/inetpeer.c
> > @@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
> > return NULL;
> > }
> >
> > -static void inetpeer_free_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > -}
> > -
> > /* perform garbage collect on all items stacked during a lookup */
> > static void inet_peer_gc(struct inet_peer_base *base,
> > struct inet_peer *gc_stack[],
> > @@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
> > if (p) {
> > rb_erase(&p->rb_node, &base->rb_root);
> > base->total--;
> > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > }
> > }
> > @@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
> > WRITE_ONCE(p->dtime, (__u32)jiffies);
> >
> > if (refcount_dec_and_test(&p->refcnt))
> > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > EXPORT_SYMBOL_GPL(inet_putpeer);
> >
> > diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> > index 31d77885bcae..bafc49de270e 100644
> > --- a/net/ipv6/ip6_fib.c
> > +++ b/net/ipv6/ip6_fib.c
> > @@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
> > net->ipv6.rt6_stats->fib_nodes--;
> > }
> >
> > -static void node_free_rcu(struct rcu_head *head)
> > -{
> > - struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
> > -
> > - kmem_cache_free(fib6_node_kmem, fn);
> > -}
> > -
> > static void node_free(struct net *net, struct fib6_node *fn)
> > {
> > - call_rcu(&fn->rcu, node_free_rcu);
> > + kfree_rcu(fn, rcu);
> > net->ipv6.rt6_stats->fib_nodes--;
> > }
> >
> > diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
> > index bf140ef781c1..c3c893ddb6ee 100644
> > --- a/net/ipv6/xfrm6_tunnel.c
> > +++ b/net/ipv6/xfrm6_tunnel.c
> > @@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
> > }
> > EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> >
> > -static void x6spi_destroy_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > -}
> > -
> > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > {
> > struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
> > @@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > if (refcount_dec_and_test(&x6spi->refcnt)) {
> > hlist_del_rcu(&x6spi->list_byaddr);
> > hlist_del_rcu(&x6spi->list_byspi);
> > - call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
> > + kfree_rcu(x6spi, rcu_head);
> > break;
> > }
> > }
> > diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> > index 2f191e50d4fc..fbb730cd2d38 100644
> > --- a/net/kcm/kcmsock.c
> > +++ b/net/kcm/kcmsock.c
> > @@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> > return err;
> > }
> >
> > -static void free_mux(struct rcu_head *rcu)
> > -{
> > - struct kcm_mux *mux = container_of(rcu,
> > - struct kcm_mux, rcu);
> > -
> > - kmem_cache_free(kcm_muxp, mux);
> > -}
> > -
> > static void release_mux(struct kcm_mux *mux)
> > {
> > struct kcm_net *knet = mux->knet;
> > @@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
> > knet->count--;
> > mutex_unlock(&knet->mutex);
> >
> > - call_rcu(&mux->rcu, free_mux);
> > + kfree_rcu(mux, rcu);
> > }
> >
> > static void kcm_done(struct kcm_sock *kcm)
> > diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
> > index 8715617b02fe..587bfcb79723 100644
> > --- a/net/netfilter/nf_conncount.c
> > +++ b/net/netfilter/nf_conncount.c
> > @@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
> > }
> > EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
> >
> > -static void __tree_nodes_free(struct rcu_head *h)
> > -{
> > - struct nf_conncount_rb *rbconn;
> > -
> > - rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
> > - kmem_cache_free(conncount_rb_cachep, rbconn);
> > -}
> > -
> > /* caller must hold tree nf_conncount_locks[] lock */
> > static void tree_nodes_free(struct rb_root *root,
> > struct nf_conncount_rb *gc_nodes[],
> > @@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
> > spin_lock(&rbconn->list.list_lock);
> > if (!rbconn->list.count) {
> > rb_erase(&rbconn->node, root);
> > - call_rcu(&rbconn->rcu_head, __tree_nodes_free);
> > + kfree_rcu(rbconn, rcu_head);
> > }
> > spin_unlock(&rbconn->list.list_lock);
> > }
> > diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
> > index 21fa550966f0..9dcaef6f3663 100644
> > --- a/net/netfilter/nf_conntrack_expect.c
> > +++ b/net/netfilter/nf_conntrack_expect.c
> > @@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
> > }
> > EXPORT_SYMBOL_GPL(nf_ct_expect_init);
> >
> > -static void nf_ct_expect_free_rcu(struct rcu_head *head)
> > -{
> > - struct nf_conntrack_expect *exp;
> > -
> > - exp = container_of(head, struct nf_conntrack_expect, rcu);
> > - kmem_cache_free(nf_ct_expect_cachep, exp);
> > -}
> > -
> > void nf_ct_expect_put(struct nf_conntrack_expect *exp)
> > {
> > if (refcount_dec_and_test(&exp->use))
> > - call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
> > + kfree_rcu(exp, rcu);
> > }
> > EXPORT_SYMBOL_GPL(nf_ct_expect_put);
> >
> > diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> > index 0859b8f76764..c2b9b954eb53 100644
> > --- a/net/netfilter/xt_hashlimit.c
> > +++ b/net/netfilter/xt_hashlimit.c
> > @@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
> > return ent;
> > }
> >
> > -static void dsthash_free_rcu(struct rcu_head *head)
> > -{
> > - struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
> > -
> > - kmem_cache_free(hashlimit_cachep, ent);
> > -}
> > -
> > static inline void
> > dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> > {
> > hlist_del_rcu(&ent->node);
> > - call_rcu(&ent->rcu, dsthash_free_rcu);
> > + kfree_rcu(ent, rcu);
> > ht->count--;
> > }
> > static void htable_gc(struct work_struct *work);
>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-05-31 16:02 ` Julia Lawall
2024-06-03 17:25 ` Paul E. McKenney
2024-06-03 19:29 ` Vlastimil Babka
@ 2024-06-04 11:26 ` Uladzislau Rezki
2024-06-09 8:32 ` Julia Lawall
2024-06-09 10:00 ` Julia Lawall
2 siblings, 2 replies; 24+ messages in thread
From: Uladzislau Rezki @ 2024-06-04 11:26 UTC (permalink / raw)
To: Julia Lawall
Cc: Paul E. McKenney, Uladzislau Rezki, Vlastimil Babka, linux-mm,
RCU, cocci
On Fri, May 31, 2024 at 06:02:01PM +0200, Julia Lawall wrote:
> Here are the changes proposed by Coccinelle.
>
> I didn't succeed to compile the powerpc code, but the rest has been
> compile tested.
>
> julia
>
> commit 1881f31fe3ad693d07ecff45985dd0e87534923f
> Author: Julia Lawall <Julia.Lawall@inria.fr>
> Date: Wed May 29 18:54:41 2024 +0200
>
> misc
>
> diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
> index ce79ac33e8d3..d904e13e069b 100644
> --- a/arch/powerpc/kvm/book3s_mmu_hpte.c
> +++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
> @@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> spin_unlock(&vcpu3s->mmu_lock);
> }
>
> -static void free_pte_rcu(struct rcu_head *head)
> -{
> - struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
> - kmem_cache_free(hpte_cache, pte);
> -}
> -
> static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> {
> struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
> @@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
>
> spin_unlock(&vcpu3s->mmu_lock);
>
> - call_rcu(&pte->rcu_head, free_pte_rcu);
> + kfree_rcu(pte, rcu_head);
> }
>
> static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
> diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> index 25dd4db11121..ce82770c72ab 100644
> --- a/block/blk-ioc.c
> +++ b/block/blk-ioc.c
> @@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
> atomic_long_inc(&ioc->refcount);
> }
>
> -static void icq_free_icq_rcu(struct rcu_head *head)
> -{
> - struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
> -
> - kmem_cache_free(icq->__rcu_icq_cache, icq);
> -}
> -
> /*
> * Exit an icq. Called with ioc locked for blk-mq, and with both ioc
> * and queue locked for legacy.
> @@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
> */
> icq->__rcu_icq_cache = et->icq_cache;
> icq->flags |= ICQ_DESTROYED;
> - call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
> + kfree_rcu(icq, __rcu_head);
> }
>
> /*
> diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> index 0ba714ca5185..e4e1638fce1b 100644
> --- a/drivers/net/wireguard/allowedips.c
> +++ b/drivers/net/wireguard/allowedips.c
> @@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
> }
> }
>
> -static void node_free_rcu(struct rcu_head *rcu)
> -{
> - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> -}
> -
> static void root_free_rcu(struct rcu_head *rcu)
> {
> struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
> @@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
> child = rcu_dereference_protected(
> parent->bit[!(node->parent_bit_packed & 1)],
> lockdep_is_held(lock));
> - call_rcu(&node->rcu, node_free_rcu);
> + kfree_rcu(node, rcu);
> if (!free_parent)
> continue;
> if (child)
> child->parent_bit_packed = parent->parent_bit_packed;
> *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
> - call_rcu(&parent->rcu, node_free_rcu);
> + kfree_rcu(parent, rcu);
> }
> }
>
> diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
> index acaa0825e9bb..49d626ff33a9 100644
> --- a/fs/ecryptfs/dentry.c
> +++ b/fs/ecryptfs/dentry.c
> @@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
>
> struct kmem_cache *ecryptfs_dentry_info_cache;
>
> -static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(ecryptfs_dentry_info_cache,
> - container_of(head, struct ecryptfs_dentry_info, rcu));
> -}
> -
> /**
> * ecryptfs_d_release
> * @dentry: The ecryptfs dentry
> @@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
> struct ecryptfs_dentry_info *p = dentry->d_fsdata;
> if (p) {
> path_put(&p->lower_path);
> - call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
> + kfree_rcu(p, rcu);
> }
> }
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index a20c2c9d7d45..eba5083504c7 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
> return x;
> }
>
> -static void nfsd4_free_file_rcu(struct rcu_head *rcu)
> -{
> - struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
> -
> - kmem_cache_free(file_slab, fp);
> -}
> -
> void
> put_nfs4_file(struct nfs4_file *fi)
> {
> @@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
> nfsd4_file_hash_remove(fi);
> WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
> WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
> - call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
> + kfree_rcu(fi, fi_rcu);
> }
> }
>
> diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
> index 7c29f4afc23d..338c52168e61 100644
> --- a/fs/tracefs/inode.c
> +++ b/fs/tracefs/inode.c
> @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
> return &ti->vfs_inode;
> }
>
> -static void tracefs_free_inode_rcu(struct rcu_head *rcu)
> -{
> - struct tracefs_inode *ti;
> -
> - ti = container_of(rcu, struct tracefs_inode, rcu);
> - kmem_cache_free(tracefs_inode_cachep, ti);
> -}
> -
> static void tracefs_free_inode(struct inode *inode)
> {
> struct tracefs_inode *ti = get_tracefs(inode);
> @@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
> list_del_rcu(&ti->list);
> spin_unlock_irqrestore(&tracefs_inode_lock, flags);
>
> - call_rcu(&ti->rcu, tracefs_free_inode_rcu);
> + kfree_rcu(ti, rcu);
> }
>
> static ssize_t default_read_file(struct file *file, char __user *buf,
> diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> index b924f0f096fa..bad5db979664 100644
> --- a/kernel/time/posix-timers.c
> +++ b/kernel/time/posix-timers.c
> @@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
> return tmr;
> }
>
> -static void k_itimer_rcu_free(struct rcu_head *head)
> -{
> - struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
> -
> - kmem_cache_free(posix_timers_cache, tmr);
> -}
> -
> static void posix_timer_free(struct k_itimer *tmr)
> {
> put_pid(tmr->it_pid);
> sigqueue_free(tmr->sigq);
> - call_rcu(&tmr->rcu, k_itimer_rcu_free);
> + kfree_rcu(tmr, rcu);
> }
>
> static void posix_timer_unhash_and_free(struct k_itimer *tmr)
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 003474c9a77d..367fc459cbd2 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> return NULL;
> }
>
> -static void rcu_free_pwq(struct rcu_head *rcu)
> -{
> - kmem_cache_free(pwq_cache,
> - container_of(rcu, struct pool_workqueue, rcu));
> -}
> -
> /*
> * Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
> * refcnt and needs to be destroyed.
> @@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
> raw_spin_unlock_irq(&nna->lock);
> }
>
> - call_rcu(&pwq->rcu, rcu_free_pwq);
> + kfree_rcu(pwq, rcu);
>
> /*
> * If we're the last pwq going away, @wq is already dead and no one
> diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
> index c77591e63841..6d04b48f7e2c 100644
> --- a/net/bridge/br_fdb.c
> +++ b/net/bridge/br_fdb.c
> @@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
> time_before_eq(fdb->updated + hold_time(br), jiffies);
> }
>
> -static void fdb_rcu_free(struct rcu_head *head)
> -{
> - struct net_bridge_fdb_entry *ent
> - = container_of(head, struct net_bridge_fdb_entry, rcu);
> - kmem_cache_free(br_fdb_cache, ent);
> -}
> -
> static int fdb_to_nud(const struct net_bridge *br,
> const struct net_bridge_fdb_entry *fdb)
> {
> @@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
> if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
> atomic_dec(&br->fdb_n_learned);
> fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
> - call_rcu(&f->rcu, fdb_rcu_free);
> + kfree_rcu(f, rcu);
> }
>
> /* Delete a local entry if no other port had the same address.
> diff --git a/net/can/gw.c b/net/can/gw.c
> index 37528826935e..ffb9870e2d01 100644
> --- a/net/can/gw.c
> +++ b/net/can/gw.c
> @@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
> gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
> }
>
> -static void cgw_job_free_rcu(struct rcu_head *rcu_head)
> -{
> - struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
> -
> - kmem_cache_free(cgw_cache, gwj);
> -}
> -
> static int cgw_notifier(struct notifier_block *nb,
> unsigned long msg, void *ptr)
> {
> @@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
> if (gwj->src.dev == dev || gwj->dst.dev == dev) {
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> }
> }
> }
> @@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
> hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> }
> }
>
> @@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
>
> hlist_del(&gwj->list);
> cgw_unregister_filter(net, gwj);
> - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> + kfree_rcu(gwj, rcu);
> err = 0;
> break;
> }
> diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> index f474106464d2..3ed92e583417 100644
> --- a/net/ipv4/fib_trie.c
> +++ b/net/ipv4/fib_trie.c
> @@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
> static const int halve_threshold_root = 15;
> static const int inflate_threshold_root = 30;
>
> -static void __alias_free_mem(struct rcu_head *head)
> -{
> - struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
> - kmem_cache_free(fn_alias_kmem, fa);
> -}
> -
> static inline void alias_free_mem_rcu(struct fib_alias *fa)
> {
> - call_rcu(&fa->rcu, __alias_free_mem);
> + kfree_rcu(fa, rcu);
> }
>
> #define TNODE_VMALLOC_MAX \
> diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
> index 5bd759963451..5ab56f4cb529 100644
> --- a/net/ipv4/inetpeer.c
> +++ b/net/ipv4/inetpeer.c
> @@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
> return NULL;
> }
>
> -static void inetpeer_free_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> -}
> -
> /* perform garbage collect on all items stacked during a lookup */
> static void inet_peer_gc(struct inet_peer_base *base,
> struct inet_peer *gc_stack[],
> @@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
> if (p) {
> rb_erase(&p->rb_node, &base->rb_root);
> base->total--;
> - call_rcu(&p->rcu, inetpeer_free_rcu);
> + kfree_rcu(p, rcu);
> }
> }
> }
> @@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
> WRITE_ONCE(p->dtime, (__u32)jiffies);
>
> if (refcount_dec_and_test(&p->refcnt))
> - call_rcu(&p->rcu, inetpeer_free_rcu);
> + kfree_rcu(p, rcu);
> }
> EXPORT_SYMBOL_GPL(inet_putpeer);
>
> diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> index 31d77885bcae..bafc49de270e 100644
> --- a/net/ipv6/ip6_fib.c
> +++ b/net/ipv6/ip6_fib.c
> @@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
> net->ipv6.rt6_stats->fib_nodes--;
> }
>
> -static void node_free_rcu(struct rcu_head *head)
> -{
> - struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
> -
> - kmem_cache_free(fib6_node_kmem, fn);
> -}
> -
> static void node_free(struct net *net, struct fib6_node *fn)
> {
> - call_rcu(&fn->rcu, node_free_rcu);
> + kfree_rcu(fn, rcu);
> net->ipv6.rt6_stats->fib_nodes--;
> }
>
> diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
> index bf140ef781c1..c3c893ddb6ee 100644
> --- a/net/ipv6/xfrm6_tunnel.c
> +++ b/net/ipv6/xfrm6_tunnel.c
> @@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
> }
> EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
>
> -static void x6spi_destroy_rcu(struct rcu_head *head)
> -{
> - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> -}
> -
> static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> {
> struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
> @@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> if (refcount_dec_and_test(&x6spi->refcnt)) {
> hlist_del_rcu(&x6spi->list_byaddr);
> hlist_del_rcu(&x6spi->list_byspi);
> - call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
> + kfree_rcu(x6spi, rcu_head);
> break;
> }
> }
> diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> index 2f191e50d4fc..fbb730cd2d38 100644
> --- a/net/kcm/kcmsock.c
> +++ b/net/kcm/kcmsock.c
> @@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> return err;
> }
>
> -static void free_mux(struct rcu_head *rcu)
> -{
> - struct kcm_mux *mux = container_of(rcu,
> - struct kcm_mux, rcu);
> -
> - kmem_cache_free(kcm_muxp, mux);
> -}
> -
> static void release_mux(struct kcm_mux *mux)
> {
> struct kcm_net *knet = mux->knet;
> @@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
> knet->count--;
> mutex_unlock(&knet->mutex);
>
> - call_rcu(&mux->rcu, free_mux);
> + kfree_rcu(mux, rcu);
> }
>
> static void kcm_done(struct kcm_sock *kcm)
> diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
> index 8715617b02fe..587bfcb79723 100644
> --- a/net/netfilter/nf_conncount.c
> +++ b/net/netfilter/nf_conncount.c
> @@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
> }
> EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
>
> -static void __tree_nodes_free(struct rcu_head *h)
> -{
> - struct nf_conncount_rb *rbconn;
> -
> - rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
> - kmem_cache_free(conncount_rb_cachep, rbconn);
> -}
> -
> /* caller must hold tree nf_conncount_locks[] lock */
> static void tree_nodes_free(struct rb_root *root,
> struct nf_conncount_rb *gc_nodes[],
> @@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
> spin_lock(&rbconn->list.list_lock);
> if (!rbconn->list.count) {
> rb_erase(&rbconn->node, root);
> - call_rcu(&rbconn->rcu_head, __tree_nodes_free);
> + kfree_rcu(rbconn, rcu_head);
> }
> spin_unlock(&rbconn->list.list_lock);
> }
> diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
> index 21fa550966f0..9dcaef6f3663 100644
> --- a/net/netfilter/nf_conntrack_expect.c
> +++ b/net/netfilter/nf_conntrack_expect.c
> @@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
> }
> EXPORT_SYMBOL_GPL(nf_ct_expect_init);
>
> -static void nf_ct_expect_free_rcu(struct rcu_head *head)
> -{
> - struct nf_conntrack_expect *exp;
> -
> - exp = container_of(head, struct nf_conntrack_expect, rcu);
> - kmem_cache_free(nf_ct_expect_cachep, exp);
> -}
> -
> void nf_ct_expect_put(struct nf_conntrack_expect *exp)
> {
> if (refcount_dec_and_test(&exp->use))
> - call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
> + kfree_rcu(exp, rcu);
> }
> EXPORT_SYMBOL_GPL(nf_ct_expect_put);
>
> diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> index 0859b8f76764..c2b9b954eb53 100644
> --- a/net/netfilter/xt_hashlimit.c
> +++ b/net/netfilter/xt_hashlimit.c
> @@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
> return ent;
> }
>
> -static void dsthash_free_rcu(struct rcu_head *head)
> -{
> - struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
> -
> - kmem_cache_free(hashlimit_cachep, ent);
> -}
> -
> static inline void
> dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> {
> hlist_del_rcu(&ent->node);
> - call_rcu(&ent->rcu, dsthash_free_rcu);
> + kfree_rcu(ent, rcu);
> ht->count--;
> }
> static void htable_gc(struct work_struct *work);
Looks good. What about the extra users below:
lima_sched.c
ipmr.c
ip6mr.c
<snip>
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index bbf3f8feab94..21ec6fbe732e 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -64,19 +64,11 @@ static const char *lima_fence_get_timeline_name(struct dma_fence *fence)
return f->pipe->base.name;
}
-static void lima_fence_release_rcu(struct rcu_head *rcu)
-{
- struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
- struct lima_fence *fence = to_lima_fence(f);
-
- kmem_cache_free(lima_fence_slab, fence);
-}
-
static void lima_fence_release(struct dma_fence *fence)
{
struct lima_fence *f = to_lima_fence(fence);
- call_rcu(&f->base.rcu, lima_fence_release_rcu);
+ kfree_rcu(f, base.rcu);
}
static const struct dma_fence_ops lima_fence_ops = {
diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
index 6c750bd13dd8..7a9496e51dc8 100644
--- a/net/ipv4/ipmr.c
+++ b/net/ipv4/ipmr.c
@@ -711,7 +711,7 @@ static void ipmr_cache_free_rcu(struct rcu_head *head)
static void ipmr_cache_free(struct mfc_cache *c)
{
- call_rcu(&c->_c.rcu, ipmr_cache_free_rcu);
+ kfree_rcu(c, _c.rcu);
}
/* Destroy an unresolved cache entry, killing queued skbs
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
index dd342e6ecf3f..c236f2d0c88f 100644
--- a/net/ipv6/ip6mr.c
+++ b/net/ipv6/ip6mr.c
@@ -762,7 +762,7 @@ static inline void ip6mr_cache_free_rcu(struct rcu_head *head)
static inline void ip6mr_cache_free(struct mfc6_cache *c)
{
- call_rcu(&c->_c.rcu, ip6mr_cache_free_rcu);
+ kfree_rcu(c, _c.rcu);
}
/* Destroy an unresolved cache entry, killing queued skbs
<snip>
?
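
For what it's worth, the two-argument form should cope with these nested
members, since as far as I can tell the designator only ends up in
offsetof()/&p->member expressions. A minimal userspace sketch (the *_stub
structs below are simplified stand-ins, not the real mr_mfc/mfc_cache
definitions) just to show that a nested designator such as _c.rcu is
accepted there:

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-ins, not the real kernel definitions. */
struct rcu_head_stub { void *next; void (*func)(void *); };

struct mr_mfc_stub {
        int refcount;
        struct rcu_head_stub rcu;       /* embedded rcu head */
};

struct mfc_cache_stub {
        struct mr_mfc_stub _c;          /* common part comes first */
        int mfc_parent;
};

int main(void)
{
        struct mfc_cache_stub c;

        /* Both expressions accept the nested designator "_c.rcu". */
        printf("offsetof(_c.rcu) = %zu\n",
               offsetof(struct mfc_cache_stub, _c.rcu));
        printf("&c._c.rcu        = %p\n", (void *)&c._c.rcu);
        return 0;
}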
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-06-04 11:26 ` Uladzislau Rezki
@ 2024-06-09 8:32 ` Julia Lawall
2024-06-09 10:00 ` Julia Lawall
1 sibling, 0 replies; 24+ messages in thread
From: Julia Lawall @ 2024-06-09 8:32 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Paul E. McKenney, Vlastimil Babka, linux-mm, RCU, cocci
On Tue, 4 Jun 2024, Uladzislau Rezki wrote:
> On Fri, May 31, 2024 at 06:02:01PM +0200, Julia Lawall wrote:
> > Here are the changes proposed by Coccinelle.
> >
> > I didn't succeed to compile the powerpc code, but the rest has been
> > compile tested.
> >
> > julia
> >
> > commit 1881f31fe3ad693d07ecff45985dd0e87534923f
> > Author: Julia Lawall <Julia.Lawall@inria.fr>
> > Date: Wed May 29 18:54:41 2024 +0200
> >
> > misc
> >
> > diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > index ce79ac33e8d3..d904e13e069b 100644
> > --- a/arch/powerpc/kvm/book3s_mmu_hpte.c
> > +++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > @@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > spin_unlock(&vcpu3s->mmu_lock);
> > }
> >
> > -static void free_pte_rcu(struct rcu_head *head)
> > -{
> > - struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
> > - kmem_cache_free(hpte_cache, pte);
> > -}
> > -
> > static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > {
> > struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
> > @@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> >
> > spin_unlock(&vcpu3s->mmu_lock);
> >
> > - call_rcu(&pte->rcu_head, free_pte_rcu);
> > + kfree_rcu(pte, rcu_head);
> > }
> >
> > static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
> > diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> > index 25dd4db11121..ce82770c72ab 100644
> > --- a/block/blk-ioc.c
> > +++ b/block/blk-ioc.c
> > @@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
> > atomic_long_inc(&ioc->refcount);
> > }
> >
> > -static void icq_free_icq_rcu(struct rcu_head *head)
> > -{
> > - struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
> > -
> > - kmem_cache_free(icq->__rcu_icq_cache, icq);
> > -}
> > -
> > /*
> > * Exit an icq. Called with ioc locked for blk-mq, and with both ioc
> > * and queue locked for legacy.
> > @@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
> > */
> > icq->__rcu_icq_cache = et->icq_cache;
> > icq->flags |= ICQ_DESTROYED;
> > - call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
> > + kfree_rcu(icq, __rcu_head);
> > }
> >
> > /*
> > diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> > index 0ba714ca5185..e4e1638fce1b 100644
> > --- a/drivers/net/wireguard/allowedips.c
> > +++ b/drivers/net/wireguard/allowedips.c
> > @@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
> > }
> > }
> >
> > -static void node_free_rcu(struct rcu_head *rcu)
> > -{
> > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > -}
> > -
> > static void root_free_rcu(struct rcu_head *rcu)
> > {
> > struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
> > @@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
> > child = rcu_dereference_protected(
> > parent->bit[!(node->parent_bit_packed & 1)],
> > lockdep_is_held(lock));
> > - call_rcu(&node->rcu, node_free_rcu);
> > + kfree_rcu(node, rcu);
> > if (!free_parent)
> > continue;
> > if (child)
> > child->parent_bit_packed = parent->parent_bit_packed;
> > *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
> > - call_rcu(&parent->rcu, node_free_rcu);
> > + kfree_rcu(parent, rcu);
> > }
> > }
> >
> > diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
> > index acaa0825e9bb..49d626ff33a9 100644
> > --- a/fs/ecryptfs/dentry.c
> > +++ b/fs/ecryptfs/dentry.c
> > @@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
> >
> > struct kmem_cache *ecryptfs_dentry_info_cache;
> >
> > -static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > -}
> > -
> > /**
> > * ecryptfs_d_release
> > * @dentry: The ecryptfs dentry
> > @@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
> > struct ecryptfs_dentry_info *p = dentry->d_fsdata;
> > if (p) {
> > path_put(&p->lower_path);
> > - call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > }
> >
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index a20c2c9d7d45..eba5083504c7 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
> > return x;
> > }
> >
> > -static void nfsd4_free_file_rcu(struct rcu_head *rcu)
> > -{
> > - struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
> > -
> > - kmem_cache_free(file_slab, fp);
> > -}
> > -
> > void
> > put_nfs4_file(struct nfs4_file *fi)
> > {
> > @@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
> > nfsd4_file_hash_remove(fi);
> > WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
> > WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
> > - call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
> > + kfree_rcu(fi, fi_rcu);
> > }
> > }
> >
> > diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
> > index 7c29f4afc23d..338c52168e61 100644
> > --- a/fs/tracefs/inode.c
> > +++ b/fs/tracefs/inode.c
> > @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
> > return &ti->vfs_inode;
> > }
> >
> > -static void tracefs_free_inode_rcu(struct rcu_head *rcu)
> > -{
> > - struct tracefs_inode *ti;
> > -
> > - ti = container_of(rcu, struct tracefs_inode, rcu);
> > - kmem_cache_free(tracefs_inode_cachep, ti);
> > -}
> > -
> > static void tracefs_free_inode(struct inode *inode)
> > {
> > struct tracefs_inode *ti = get_tracefs(inode);
> > @@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
> > list_del_rcu(&ti->list);
> > spin_unlock_irqrestore(&tracefs_inode_lock, flags);
> >
> > - call_rcu(&ti->rcu, tracefs_free_inode_rcu);
> > + kfree_rcu(ti, rcu);
> > }
> >
> > static ssize_t default_read_file(struct file *file, char __user *buf,
> > diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> > index b924f0f096fa..bad5db979664 100644
> > --- a/kernel/time/posix-timers.c
> > +++ b/kernel/time/posix-timers.c
> > @@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
> > return tmr;
> > }
> >
> > -static void k_itimer_rcu_free(struct rcu_head *head)
> > -{
> > - struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
> > -
> > - kmem_cache_free(posix_timers_cache, tmr);
> > -}
> > -
> > static void posix_timer_free(struct k_itimer *tmr)
> > {
> > put_pid(tmr->it_pid);
> > sigqueue_free(tmr->sigq);
> > - call_rcu(&tmr->rcu, k_itimer_rcu_free);
> > + kfree_rcu(tmr, rcu);
> > }
> >
> > static void posix_timer_unhash_and_free(struct k_itimer *tmr)
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 003474c9a77d..367fc459cbd2 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> > return NULL;
> > }
> >
> > -static void rcu_free_pwq(struct rcu_head *rcu)
> > -{
> > - kmem_cache_free(pwq_cache,
> > - container_of(rcu, struct pool_workqueue, rcu));
> > -}
> > -
> > /*
> > * Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
> > * refcnt and needs to be destroyed.
> > @@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
> > raw_spin_unlock_irq(&nna->lock);
> > }
> >
> > - call_rcu(&pwq->rcu, rcu_free_pwq);
> > + kfree_rcu(pwq, rcu);
> >
> > /*
> > * If we're the last pwq going away, @wq is already dead and no one
> > diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
> > index c77591e63841..6d04b48f7e2c 100644
> > --- a/net/bridge/br_fdb.c
> > +++ b/net/bridge/br_fdb.c
> > @@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
> > time_before_eq(fdb->updated + hold_time(br), jiffies);
> > }
> >
> > -static void fdb_rcu_free(struct rcu_head *head)
> > -{
> > - struct net_bridge_fdb_entry *ent
> > - = container_of(head, struct net_bridge_fdb_entry, rcu);
> > - kmem_cache_free(br_fdb_cache, ent);
> > -}
> > -
> > static int fdb_to_nud(const struct net_bridge *br,
> > const struct net_bridge_fdb_entry *fdb)
> > {
> > @@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
> > if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
> > atomic_dec(&br->fdb_n_learned);
> > fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
> > - call_rcu(&f->rcu, fdb_rcu_free);
> > + kfree_rcu(f, rcu);
> > }
> >
> > /* Delete a local entry if no other port had the same address.
> > diff --git a/net/can/gw.c b/net/can/gw.c
> > index 37528826935e..ffb9870e2d01 100644
> > --- a/net/can/gw.c
> > +++ b/net/can/gw.c
> > @@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
> > gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
> > }
> >
> > -static void cgw_job_free_rcu(struct rcu_head *rcu_head)
> > -{
> > - struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
> > -
> > - kmem_cache_free(cgw_cache, gwj);
> > -}
> > -
> > static int cgw_notifier(struct notifier_block *nb,
> > unsigned long msg, void *ptr)
> > {
> > @@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
> > if (gwj->src.dev == dev || gwj->dst.dev == dev) {
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > }
> > }
> > }
> > @@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
> > hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > }
> > }
> >
> > @@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
> >
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > err = 0;
> > break;
> > }
> > diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> > index f474106464d2..3ed92e583417 100644
> > --- a/net/ipv4/fib_trie.c
> > +++ b/net/ipv4/fib_trie.c
> > @@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
> > static const int halve_threshold_root = 15;
> > static const int inflate_threshold_root = 30;
> >
> > -static void __alias_free_mem(struct rcu_head *head)
> > -{
> > - struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
> > - kmem_cache_free(fn_alias_kmem, fa);
> > -}
> > -
> > static inline void alias_free_mem_rcu(struct fib_alias *fa)
> > {
> > - call_rcu(&fa->rcu, __alias_free_mem);
> > + kfree_rcu(fa, rcu);
> > }
> >
> > #define TNODE_VMALLOC_MAX \
> > diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
> > index 5bd759963451..5ab56f4cb529 100644
> > --- a/net/ipv4/inetpeer.c
> > +++ b/net/ipv4/inetpeer.c
> > @@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
> > return NULL;
> > }
> >
> > -static void inetpeer_free_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > -}
> > -
> > /* perform garbage collect on all items stacked during a lookup */
> > static void inet_peer_gc(struct inet_peer_base *base,
> > struct inet_peer *gc_stack[],
> > @@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
> > if (p) {
> > rb_erase(&p->rb_node, &base->rb_root);
> > base->total--;
> > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > }
> > }
> > @@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
> > WRITE_ONCE(p->dtime, (__u32)jiffies);
> >
> > if (refcount_dec_and_test(&p->refcnt))
> > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > EXPORT_SYMBOL_GPL(inet_putpeer);
> >
> > diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> > index 31d77885bcae..bafc49de270e 100644
> > --- a/net/ipv6/ip6_fib.c
> > +++ b/net/ipv6/ip6_fib.c
> > @@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
> > net->ipv6.rt6_stats->fib_nodes--;
> > }
> >
> > -static void node_free_rcu(struct rcu_head *head)
> > -{
> > - struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
> > -
> > - kmem_cache_free(fib6_node_kmem, fn);
> > -}
> > -
> > static void node_free(struct net *net, struct fib6_node *fn)
> > {
> > - call_rcu(&fn->rcu, node_free_rcu);
> > + kfree_rcu(fn, rcu);
> > net->ipv6.rt6_stats->fib_nodes--;
> > }
> >
> > diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
> > index bf140ef781c1..c3c893ddb6ee 100644
> > --- a/net/ipv6/xfrm6_tunnel.c
> > +++ b/net/ipv6/xfrm6_tunnel.c
> > @@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
> > }
> > EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> >
> > -static void x6spi_destroy_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > -}
> > -
> > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > {
> > struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
> > @@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > if (refcount_dec_and_test(&x6spi->refcnt)) {
> > hlist_del_rcu(&x6spi->list_byaddr);
> > hlist_del_rcu(&x6spi->list_byspi);
> > - call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
> > + kfree_rcu(x6spi, rcu_head);
> > break;
> > }
> > }
> > diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> > index 2f191e50d4fc..fbb730cd2d38 100644
> > --- a/net/kcm/kcmsock.c
> > +++ b/net/kcm/kcmsock.c
> > @@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> > return err;
> > }
> >
> > -static void free_mux(struct rcu_head *rcu)
> > -{
> > - struct kcm_mux *mux = container_of(rcu,
> > - struct kcm_mux, rcu);
> > -
> > - kmem_cache_free(kcm_muxp, mux);
> > -}
> > -
> > static void release_mux(struct kcm_mux *mux)
> > {
> > struct kcm_net *knet = mux->knet;
> > @@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
> > knet->count--;
> > mutex_unlock(&knet->mutex);
> >
> > - call_rcu(&mux->rcu, free_mux);
> > + kfree_rcu(mux, rcu);
> > }
> >
> > static void kcm_done(struct kcm_sock *kcm)
> > diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
> > index 8715617b02fe..587bfcb79723 100644
> > --- a/net/netfilter/nf_conncount.c
> > +++ b/net/netfilter/nf_conncount.c
> > @@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
> > }
> > EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
> >
> > -static void __tree_nodes_free(struct rcu_head *h)
> > -{
> > - struct nf_conncount_rb *rbconn;
> > -
> > - rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
> > - kmem_cache_free(conncount_rb_cachep, rbconn);
> > -}
> > -
> > /* caller must hold tree nf_conncount_locks[] lock */
> > static void tree_nodes_free(struct rb_root *root,
> > struct nf_conncount_rb *gc_nodes[],
> > @@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
> > spin_lock(&rbconn->list.list_lock);
> > if (!rbconn->list.count) {
> > rb_erase(&rbconn->node, root);
> > - call_rcu(&rbconn->rcu_head, __tree_nodes_free);
> > + kfree_rcu(rbconn, rcu_head);
> > }
> > spin_unlock(&rbconn->list.list_lock);
> > }
> > diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
> > index 21fa550966f0..9dcaef6f3663 100644
> > --- a/net/netfilter/nf_conntrack_expect.c
> > +++ b/net/netfilter/nf_conntrack_expect.c
> > @@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
> > }
> > EXPORT_SYMBOL_GPL(nf_ct_expect_init);
> >
> > -static void nf_ct_expect_free_rcu(struct rcu_head *head)
> > -{
> > - struct nf_conntrack_expect *exp;
> > -
> > - exp = container_of(head, struct nf_conntrack_expect, rcu);
> > - kmem_cache_free(nf_ct_expect_cachep, exp);
> > -}
> > -
> > void nf_ct_expect_put(struct nf_conntrack_expect *exp)
> > {
> > if (refcount_dec_and_test(&exp->use))
> > - call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
> > + kfree_rcu(exp, rcu);
> > }
> > EXPORT_SYMBOL_GPL(nf_ct_expect_put);
> >
> > diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> > index 0859b8f76764..c2b9b954eb53 100644
> > --- a/net/netfilter/xt_hashlimit.c
> > +++ b/net/netfilter/xt_hashlimit.c
> > @@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
> > return ent;
> > }
> >
> > -static void dsthash_free_rcu(struct rcu_head *head)
> > -{
> > - struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
> > -
> > - kmem_cache_free(hashlimit_cachep, ent);
> > -}
> > -
> > static inline void
> > dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> > {
> > hlist_del_rcu(&ent->node);
> > - call_rcu(&ent->rcu, dsthash_free_rcu);
> > + kfree_rcu(ent, rcu);
> > ht->count--;
> > }
> > static void htable_gc(struct work_struct *work);
I have sent out patches for the above changes. There remain the ones
cited below, those where the release function is used in other ways, and
those where the first argument to call_rcu does not have the form &e->f.
I will check on these remaining cases.
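
For concreteness, a rough sketch of the two remaining shapes (struct foo,
foo_cachep, the put_net() call and the bar_* names are invented for
illustration, not taken from any file in this thread):

/* Shape 1: the callback does more than a single kmem_cache_free(),
 * so it cannot be folded into kfree_rcu().
 */
static void foo_free_rcu(struct rcu_head *head)
{
        struct foo *f = container_of(head, struct foo, rcu);

        put_net(f->net);                /* extra teardown work */
        kmem_cache_free(foo_cachep, f);
}

static void foo_put(struct foo *f)
{
        if (refcount_dec_and_test(&f->refcnt))
                call_rcu(&f->rcu, foo_free_rcu);  /* must stay call_rcu() */
}

/* Shape 2: the rcu_head sits in an embedded struct, so the call site
 * passes &c->_c.rcu rather than the &e->f shape the rule matches,
 * even though kfree_rcu(c, _c.rcu) would be a valid replacement.
 */
static void bar_cache_free(struct bar_cache *c)
{
        call_rcu(&c->_c.rcu, bar_cache_free_rcu);
}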
julia
> Looks good. What about the extra users below:
>
> lima_sched.c
> ipmr.c
> ip6mr.c
>
> <snip>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index bbf3f8feab94..21ec6fbe732e 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -64,19 +64,11 @@ static const char *lima_fence_get_timeline_name(struct dma_fence *fence)
> return f->pipe->base.name;
> }
>
> -static void lima_fence_release_rcu(struct rcu_head *rcu)
> -{
> - struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
> - struct lima_fence *fence = to_lima_fence(f);
> -
> - kmem_cache_free(lima_fence_slab, fence);
> -}
> -
> static void lima_fence_release(struct dma_fence *fence)
> {
> struct lima_fence *f = to_lima_fence(fence);
>
> - call_rcu(&f->base.rcu, lima_fence_release_rcu);
> + kfree_rcu(f, base.rcu);
> }
>
> static const struct dma_fence_ops lima_fence_ops = {
> diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
> index 6c750bd13dd8..7a9496e51dc8 100644
> --- a/net/ipv4/ipmr.c
> +++ b/net/ipv4/ipmr.c
> @@ -711,7 +711,7 @@ static void ipmr_cache_free_rcu(struct rcu_head *head)
>
> static void ipmr_cache_free(struct mfc_cache *c)
> {
> - call_rcu(&c->_c.rcu, ipmr_cache_free_rcu);
> + kfree_rcu(c, _c.rcu);
> }
>
> /* Destroy an unresolved cache entry, killing queued skbs
> diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
> index dd342e6ecf3f..c236f2d0c88f 100644
> --- a/net/ipv6/ip6mr.c
> +++ b/net/ipv6/ip6mr.c
> @@ -762,7 +762,7 @@ static inline void ip6mr_cache_free_rcu(struct rcu_head *head)
>
> static inline void ip6mr_cache_free(struct mfc6_cache *c)
> {
> - call_rcu(&c->_c.rcu, ip6mr_cache_free_rcu);
> + kfree_rcu(c, _c.rcu);
> }
>
> /* Destroy an unresolved cache entry, killing queued skbs
> <snip>
>
> ?
>
> --
> Uladzislau Rezki
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-06-04 11:26 ` Uladzislau Rezki
2024-06-09 8:32 ` Julia Lawall
@ 2024-06-09 10:00 ` Julia Lawall
2024-06-09 17:03 ` Paul E. McKenney
1 sibling, 1 reply; 24+ messages in thread
From: Julia Lawall @ 2024-06-09 10:00 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Paul E. McKenney, Vlastimil Babka, linux-mm, RCU, cocci
On Tue, 4 Jun 2024, Uladzislau Rezki wrote:
> On Fri, May 31, 2024 at 06:02:01PM +0200, Julia Lawall wrote:
> > Here are the changes proposed by Coccinelle.
> >
> > I didn't succeed to compile the powerpc code, but the rest has been
> > compile tested.
> >
> > julia
> >
> > commit 1881f31fe3ad693d07ecff45985dd0e87534923f
> > Author: Julia Lawall <Julia.Lawall@inria.fr>
> > Date: Wed May 29 18:54:41 2024 +0200
> >
> > misc
> >
> > diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > index ce79ac33e8d3..d904e13e069b 100644
> > --- a/arch/powerpc/kvm/book3s_mmu_hpte.c
> > +++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > @@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > spin_unlock(&vcpu3s->mmu_lock);
> > }
> >
> > -static void free_pte_rcu(struct rcu_head *head)
> > -{
> > - struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
> > - kmem_cache_free(hpte_cache, pte);
> > -}
> > -
> > static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > {
> > struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
> > @@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> >
> > spin_unlock(&vcpu3s->mmu_lock);
> >
> > - call_rcu(&pte->rcu_head, free_pte_rcu);
> > + kfree_rcu(pte, rcu_head);
> > }
> >
> > static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
> > diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> > index 25dd4db11121..ce82770c72ab 100644
> > --- a/block/blk-ioc.c
> > +++ b/block/blk-ioc.c
> > @@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
> > atomic_long_inc(&ioc->refcount);
> > }
> >
> > -static void icq_free_icq_rcu(struct rcu_head *head)
> > -{
> > - struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
> > -
> > - kmem_cache_free(icq->__rcu_icq_cache, icq);
> > -}
> > -
> > /*
> > * Exit an icq. Called with ioc locked for blk-mq, and with both ioc
> > * and queue locked for legacy.
> > @@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
> > */
> > icq->__rcu_icq_cache = et->icq_cache;
> > icq->flags |= ICQ_DESTROYED;
> > - call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
> > + kfree_rcu(icq, __rcu_head);
> > }
> >
> > /*
> > diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> > index 0ba714ca5185..e4e1638fce1b 100644
> > --- a/drivers/net/wireguard/allowedips.c
> > +++ b/drivers/net/wireguard/allowedips.c
> > @@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
> > }
> > }
> >
> > -static void node_free_rcu(struct rcu_head *rcu)
> > -{
> > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > -}
> > -
> > static void root_free_rcu(struct rcu_head *rcu)
> > {
> > struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
> > @@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
> > child = rcu_dereference_protected(
> > parent->bit[!(node->parent_bit_packed & 1)],
> > lockdep_is_held(lock));
> > - call_rcu(&node->rcu, node_free_rcu);
> > + kfree_rcu(node, rcu);
> > if (!free_parent)
> > continue;
> > if (child)
> > child->parent_bit_packed = parent->parent_bit_packed;
> > *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
> > - call_rcu(&parent->rcu, node_free_rcu);
> > + kfree_rcu(parent, rcu);
> > }
> > }
> >
> > diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
> > index acaa0825e9bb..49d626ff33a9 100644
> > --- a/fs/ecryptfs/dentry.c
> > +++ b/fs/ecryptfs/dentry.c
> > @@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
> >
> > struct kmem_cache *ecryptfs_dentry_info_cache;
> >
> > -static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > -}
> > -
> > /**
> > * ecryptfs_d_release
> > * @dentry: The ecryptfs dentry
> > @@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
> > struct ecryptfs_dentry_info *p = dentry->d_fsdata;
> > if (p) {
> > path_put(&p->lower_path);
> > - call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > }
> >
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index a20c2c9d7d45..eba5083504c7 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
> > return x;
> > }
> >
> > -static void nfsd4_free_file_rcu(struct rcu_head *rcu)
> > -{
> > - struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
> > -
> > - kmem_cache_free(file_slab, fp);
> > -}
> > -
> > void
> > put_nfs4_file(struct nfs4_file *fi)
> > {
> > @@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
> > nfsd4_file_hash_remove(fi);
> > WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
> > WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
> > - call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
> > + kfree_rcu(fi, fi_rcu);
> > }
> > }
> >
> > diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
> > index 7c29f4afc23d..338c52168e61 100644
> > --- a/fs/tracefs/inode.c
> > +++ b/fs/tracefs/inode.c
> > @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
> > return &ti->vfs_inode;
> > }
> >
> > -static void tracefs_free_inode_rcu(struct rcu_head *rcu)
> > -{
> > - struct tracefs_inode *ti;
> > -
> > - ti = container_of(rcu, struct tracefs_inode, rcu);
> > - kmem_cache_free(tracefs_inode_cachep, ti);
> > -}
> > -
> > static void tracefs_free_inode(struct inode *inode)
> > {
> > struct tracefs_inode *ti = get_tracefs(inode);
> > @@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
> > list_del_rcu(&ti->list);
> > spin_unlock_irqrestore(&tracefs_inode_lock, flags);
> >
> > - call_rcu(&ti->rcu, tracefs_free_inode_rcu);
> > + kfree_rcu(ti, rcu);
> > }
> >
> > static ssize_t default_read_file(struct file *file, char __user *buf,
> > diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> > index b924f0f096fa..bad5db979664 100644
> > --- a/kernel/time/posix-timers.c
> > +++ b/kernel/time/posix-timers.c
> > @@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
> > return tmr;
> > }
> >
> > -static void k_itimer_rcu_free(struct rcu_head *head)
> > -{
> > - struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
> > -
> > - kmem_cache_free(posix_timers_cache, tmr);
> > -}
> > -
> > static void posix_timer_free(struct k_itimer *tmr)
> > {
> > put_pid(tmr->it_pid);
> > sigqueue_free(tmr->sigq);
> > - call_rcu(&tmr->rcu, k_itimer_rcu_free);
> > + kfree_rcu(tmr, rcu);
> > }
> >
> > static void posix_timer_unhash_and_free(struct k_itimer *tmr)
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 003474c9a77d..367fc459cbd2 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> > return NULL;
> > }
> >
> > -static void rcu_free_pwq(struct rcu_head *rcu)
> > -{
> > - kmem_cache_free(pwq_cache,
> > - container_of(rcu, struct pool_workqueue, rcu));
> > -}
> > -
> > /*
> > * Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
> > * refcnt and needs to be destroyed.
> > @@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
> > raw_spin_unlock_irq(&nna->lock);
> > }
> >
> > - call_rcu(&pwq->rcu, rcu_free_pwq);
> > + kfree_rcu(pwq, rcu);
> >
> > /*
> > * If we're the last pwq going away, @wq is already dead and no one
> > diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
> > index c77591e63841..6d04b48f7e2c 100644
> > --- a/net/bridge/br_fdb.c
> > +++ b/net/bridge/br_fdb.c
> > @@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
> > time_before_eq(fdb->updated + hold_time(br), jiffies);
> > }
> >
> > -static void fdb_rcu_free(struct rcu_head *head)
> > -{
> > - struct net_bridge_fdb_entry *ent
> > - = container_of(head, struct net_bridge_fdb_entry, rcu);
> > - kmem_cache_free(br_fdb_cache, ent);
> > -}
> > -
> > static int fdb_to_nud(const struct net_bridge *br,
> > const struct net_bridge_fdb_entry *fdb)
> > {
> > @@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
> > if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
> > atomic_dec(&br->fdb_n_learned);
> > fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
> > - call_rcu(&f->rcu, fdb_rcu_free);
> > + kfree_rcu(f, rcu);
> > }
> >
> > /* Delete a local entry if no other port had the same address.
> > diff --git a/net/can/gw.c b/net/can/gw.c
> > index 37528826935e..ffb9870e2d01 100644
> > --- a/net/can/gw.c
> > +++ b/net/can/gw.c
> > @@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
> > gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
> > }
> >
> > -static void cgw_job_free_rcu(struct rcu_head *rcu_head)
> > -{
> > - struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
> > -
> > - kmem_cache_free(cgw_cache, gwj);
> > -}
> > -
> > static int cgw_notifier(struct notifier_block *nb,
> > unsigned long msg, void *ptr)
> > {
> > @@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
> > if (gwj->src.dev == dev || gwj->dst.dev == dev) {
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > }
> > }
> > }
> > @@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
> > hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > }
> > }
> >
> > @@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
> >
> > hlist_del(&gwj->list);
> > cgw_unregister_filter(net, gwj);
> > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > + kfree_rcu(gwj, rcu);
> > err = 0;
> > break;
> > }
> > diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> > index f474106464d2..3ed92e583417 100644
> > --- a/net/ipv4/fib_trie.c
> > +++ b/net/ipv4/fib_trie.c
> > @@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
> > static const int halve_threshold_root = 15;
> > static const int inflate_threshold_root = 30;
> >
> > -static void __alias_free_mem(struct rcu_head *head)
> > -{
> > - struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
> > - kmem_cache_free(fn_alias_kmem, fa);
> > -}
> > -
> > static inline void alias_free_mem_rcu(struct fib_alias *fa)
> > {
> > - call_rcu(&fa->rcu, __alias_free_mem);
> > + kfree_rcu(fa, rcu);
> > }
> >
> > #define TNODE_VMALLOC_MAX \
> > diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
> > index 5bd759963451..5ab56f4cb529 100644
> > --- a/net/ipv4/inetpeer.c
> > +++ b/net/ipv4/inetpeer.c
> > @@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
> > return NULL;
> > }
> >
> > -static void inetpeer_free_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > -}
> > -
> > /* perform garbage collect on all items stacked during a lookup */
> > static void inet_peer_gc(struct inet_peer_base *base,
> > struct inet_peer *gc_stack[],
> > @@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
> > if (p) {
> > rb_erase(&p->rb_node, &base->rb_root);
> > base->total--;
> > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > }
> > }
> > @@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
> > WRITE_ONCE(p->dtime, (__u32)jiffies);
> >
> > if (refcount_dec_and_test(&p->refcnt))
> > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > + kfree_rcu(p, rcu);
> > }
> > EXPORT_SYMBOL_GPL(inet_putpeer);
> >
> > diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> > index 31d77885bcae..bafc49de270e 100644
> > --- a/net/ipv6/ip6_fib.c
> > +++ b/net/ipv6/ip6_fib.c
> > @@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
> > net->ipv6.rt6_stats->fib_nodes--;
> > }
> >
> > -static void node_free_rcu(struct rcu_head *head)
> > -{
> > - struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
> > -
> > - kmem_cache_free(fib6_node_kmem, fn);
> > -}
> > -
> > static void node_free(struct net *net, struct fib6_node *fn)
> > {
> > - call_rcu(&fn->rcu, node_free_rcu);
> > + kfree_rcu(fn, rcu);
> > net->ipv6.rt6_stats->fib_nodes--;
> > }
> >
> > diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
> > index bf140ef781c1..c3c893ddb6ee 100644
> > --- a/net/ipv6/xfrm6_tunnel.c
> > +++ b/net/ipv6/xfrm6_tunnel.c
> > @@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
> > }
> > EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> >
> > -static void x6spi_destroy_rcu(struct rcu_head *head)
> > -{
> > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > -}
> > -
> > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > {
> > struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
> > @@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > if (refcount_dec_and_test(&x6spi->refcnt)) {
> > hlist_del_rcu(&x6spi->list_byaddr);
> > hlist_del_rcu(&x6spi->list_byspi);
> > - call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
> > + kfree_rcu(x6spi, rcu_head);
> > break;
> > }
> > }
> > diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> > index 2f191e50d4fc..fbb730cd2d38 100644
> > --- a/net/kcm/kcmsock.c
> > +++ b/net/kcm/kcmsock.c
> > @@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> > return err;
> > }
> >
> > -static void free_mux(struct rcu_head *rcu)
> > -{
> > - struct kcm_mux *mux = container_of(rcu,
> > - struct kcm_mux, rcu);
> > -
> > - kmem_cache_free(kcm_muxp, mux);
> > -}
> > -
> > static void release_mux(struct kcm_mux *mux)
> > {
> > struct kcm_net *knet = mux->knet;
> > @@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
> > knet->count--;
> > mutex_unlock(&knet->mutex);
> >
> > - call_rcu(&mux->rcu, free_mux);
> > + kfree_rcu(mux, rcu);
> > }
> >
> > static void kcm_done(struct kcm_sock *kcm)
> > diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
> > index 8715617b02fe..587bfcb79723 100644
> > --- a/net/netfilter/nf_conncount.c
> > +++ b/net/netfilter/nf_conncount.c
> > @@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
> > }
> > EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
> >
> > -static void __tree_nodes_free(struct rcu_head *h)
> > -{
> > - struct nf_conncount_rb *rbconn;
> > -
> > - rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
> > - kmem_cache_free(conncount_rb_cachep, rbconn);
> > -}
> > -
> > /* caller must hold tree nf_conncount_locks[] lock */
> > static void tree_nodes_free(struct rb_root *root,
> > struct nf_conncount_rb *gc_nodes[],
> > @@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
> > spin_lock(&rbconn->list.list_lock);
> > if (!rbconn->list.count) {
> > rb_erase(&rbconn->node, root);
> > - call_rcu(&rbconn->rcu_head, __tree_nodes_free);
> > + kfree_rcu(rbconn, rcu_head);
> > }
> > spin_unlock(&rbconn->list.list_lock);
> > }
> > diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
> > index 21fa550966f0..9dcaef6f3663 100644
> > --- a/net/netfilter/nf_conntrack_expect.c
> > +++ b/net/netfilter/nf_conntrack_expect.c
> > @@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
> > }
> > EXPORT_SYMBOL_GPL(nf_ct_expect_init);
> >
> > -static void nf_ct_expect_free_rcu(struct rcu_head *head)
> > -{
> > - struct nf_conntrack_expect *exp;
> > -
> > - exp = container_of(head, struct nf_conntrack_expect, rcu);
> > - kmem_cache_free(nf_ct_expect_cachep, exp);
> > -}
> > -
> > void nf_ct_expect_put(struct nf_conntrack_expect *exp)
> > {
> > if (refcount_dec_and_test(&exp->use))
> > - call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
> > + kfree_rcu(exp, rcu);
> > }
> > EXPORT_SYMBOL_GPL(nf_ct_expect_put);
> >
> > diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> > index 0859b8f76764..c2b9b954eb53 100644
> > --- a/net/netfilter/xt_hashlimit.c
> > +++ b/net/netfilter/xt_hashlimit.c
> > @@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
> > return ent;
> > }
> >
> > -static void dsthash_free_rcu(struct rcu_head *head)
> > -{
> > - struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
> > -
> > - kmem_cache_free(hashlimit_cachep, ent);
> > -}
> > -
> > static inline void
> > dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> > {
> > hlist_del_rcu(&ent->node);
> > - call_rcu(&ent->rcu, dsthash_free_rcu);
> > + kfree_rcu(ent, rcu);
> > ht->count--;
> > }
> > static void htable_gc(struct work_struct *work);
> Looks good. What about the extra users below:
>
> lima_sched.c
> ipmr.c
> ip6mr.c
>
> <snip>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index bbf3f8feab94..21ec6fbe732e 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -64,19 +64,11 @@ static const char *lima_fence_get_timeline_name(struct dma_fence *fence)
> return f->pipe->base.name;
> }
>
> -static void lima_fence_release_rcu(struct rcu_head *rcu)
> -{
> - struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
> - struct lima_fence *fence = to_lima_fence(f);
> -
> - kmem_cache_free(lima_fence_slab, fence);
> -}
> -
> static void lima_fence_release(struct dma_fence *fence)
> {
> struct lima_fence *f = to_lima_fence(fence);
>
> - call_rcu(&f->base.rcu, lima_fence_release_rcu);
> + kfree_rcu(f, base.rcu);
> }
>
> static const struct dma_fence_ops lima_fence_ops = {
> diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
> index 6c750bd13dd8..7a9496e51dc8 100644
> --- a/net/ipv4/ipmr.c
> +++ b/net/ipv4/ipmr.c
> @@ -711,7 +711,7 @@ static void ipmr_cache_free_rcu(struct rcu_head *head)
>
> static void ipmr_cache_free(struct mfc_cache *c)
> {
> - call_rcu(&c->_c.rcu, ipmr_cache_free_rcu);
> + kfree_rcu(c, _c.rcu);
> }
>
> /* Destroy an unresolved cache entry, killing queued skbs
> diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
> index dd342e6ecf3f..c236f2d0c88f 100644
> --- a/net/ipv6/ip6mr.c
> +++ b/net/ipv6/ip6mr.c
> @@ -762,7 +762,7 @@ static inline void ip6mr_cache_free_rcu(struct rcu_head *head)
>
> static inline void ip6mr_cache_free(struct mfc6_cache *c)
> {
> - call_rcu(&c->_c.rcu, ip6mr_cache_free_rcu);
> + kfree_rcu(c, _c.rcu);
> }
>
> /* Destroy an unresolved cache entry, killing queued skbs
> <snip>
>
> ?
These look ok to me. In the last two cases, the callback function is also
stored in a data structure, eg:
static struct mfc6_cache *ip6mr_cache_alloc(void)
{
	struct mfc6_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);

	if (!c)
		return NULL;
	c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
	c->_c.mfc_un.res.minvif = MAXMIFS;
	c->_c.free = ip6mr_cache_free_rcu;
	refcount_set(&c->_c.mfc_un.res.refcount, 1);
	return c;
}
Should that be left as it is?
julia
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-06-09 10:00 ` Julia Lawall
@ 2024-06-09 17:03 ` Paul E. McKenney
2024-06-11 16:40 ` Uladzislau Rezki
0 siblings, 1 reply; 24+ messages in thread
From: Paul E. McKenney @ 2024-06-09 17:03 UTC (permalink / raw)
To: Julia Lawall; +Cc: Uladzislau Rezki, Vlastimil Babka, linux-mm, RCU, cocci
On Sun, Jun 09, 2024 at 12:00:58PM +0200, Julia Lawall wrote:
>
>
> On Tue, 4 Jun 2024, Uladzislau Rezki wrote:
>
> > On Fri, May 31, 2024 at 06:02:01PM +0200, Julia Lawall wrote:
> > > Here are the changes proposed by Coccinelle.
> > >
> > > I didn't succeed to compile the powerpc code, but the rest has been
> > > compile tested.
> > >
> > > julia
> > >
> > > commit 1881f31fe3ad693d07ecff45985dd0e87534923f
> > > Author: Julia Lawall <Julia.Lawall@inria.fr>
> > > Date: Wed May 29 18:54:41 2024 +0200
> > >
> > > misc
> > >
> > > diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > > index ce79ac33e8d3..d904e13e069b 100644
> > > --- a/arch/powerpc/kvm/book3s_mmu_hpte.c
> > > +++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
> > > @@ -92,12 +92,6 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > > spin_unlock(&vcpu3s->mmu_lock);
> > > }
> > >
> > > -static void free_pte_rcu(struct rcu_head *head)
> > > -{
> > > - struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
> > > - kmem_cache_free(hpte_cache, pte);
> > > -}
> > > -
> > > static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > > {
> > > struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
> > > @@ -126,7 +120,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> > >
> > > spin_unlock(&vcpu3s->mmu_lock);
> > >
> > > - call_rcu(&pte->rcu_head, free_pte_rcu);
> > > + kfree_rcu(pte, rcu_head);
> > > }
> > >
> > > static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
> > > diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> > > index 25dd4db11121..ce82770c72ab 100644
> > > --- a/block/blk-ioc.c
> > > +++ b/block/blk-ioc.c
> > > @@ -32,13 +32,6 @@ static void get_io_context(struct io_context *ioc)
> > > atomic_long_inc(&ioc->refcount);
> > > }
> > >
> > > -static void icq_free_icq_rcu(struct rcu_head *head)
> > > -{
> > > - struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);
> > > -
> > > - kmem_cache_free(icq->__rcu_icq_cache, icq);
> > > -}
> > > -
> > > /*
> > > * Exit an icq. Called with ioc locked for blk-mq, and with both ioc
> > > * and queue locked for legacy.
> > > @@ -102,7 +95,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
> > > */
> > > icq->__rcu_icq_cache = et->icq_cache;
> > > icq->flags |= ICQ_DESTROYED;
> > > - call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
> > > + kfree_rcu(icq, __rcu_head);
> > > }
> > >
> > > /*
> > > diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> > > index 0ba714ca5185..e4e1638fce1b 100644
> > > --- a/drivers/net/wireguard/allowedips.c
> > > +++ b/drivers/net/wireguard/allowedips.c
> > > @@ -48,11 +48,6 @@ static void push_rcu(struct allowedips_node **stack,
> > > }
> > > }
> > >
> > > -static void node_free_rcu(struct rcu_head *rcu)
> > > -{
> > > - kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu));
> > > -}
> > > -
> > > static void root_free_rcu(struct rcu_head *rcu)
> > > {
> > > struct allowedips_node *node, *stack[MAX_ALLOWEDIPS_DEPTH] = {
> > > @@ -330,13 +325,13 @@ void wg_allowedips_remove_by_peer(struct allowedips *table,
> > > child = rcu_dereference_protected(
> > > parent->bit[!(node->parent_bit_packed & 1)],
> > > lockdep_is_held(lock));
> > > - call_rcu(&node->rcu, node_free_rcu);
> > > + kfree_rcu(node, rcu);
> > > if (!free_parent)
> > > continue;
> > > if (child)
> > > child->parent_bit_packed = parent->parent_bit_packed;
> > > *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child;
> > > - call_rcu(&parent->rcu, node_free_rcu);
> > > + kfree_rcu(parent, rcu);
> > > }
> > > }
> > >
> > > diff --git a/fs/ecryptfs/dentry.c b/fs/ecryptfs/dentry.c
> > > index acaa0825e9bb..49d626ff33a9 100644
> > > --- a/fs/ecryptfs/dentry.c
> > > +++ b/fs/ecryptfs/dentry.c
> > > @@ -51,12 +51,6 @@ static int ecryptfs_d_revalidate(struct dentry *dentry, unsigned int flags)
> > >
> > > struct kmem_cache *ecryptfs_dentry_info_cache;
> > >
> > > -static void ecryptfs_dentry_free_rcu(struct rcu_head *head)
> > > -{
> > > - kmem_cache_free(ecryptfs_dentry_info_cache,
> > > - container_of(head, struct ecryptfs_dentry_info, rcu));
> > > -}
> > > -
> > > /**
> > > * ecryptfs_d_release
> > > * @dentry: The ecryptfs dentry
> > > @@ -68,7 +62,7 @@ static void ecryptfs_d_release(struct dentry *dentry)
> > > struct ecryptfs_dentry_info *p = dentry->d_fsdata;
> > > if (p) {
> > > path_put(&p->lower_path);
> > > - call_rcu(&p->rcu, ecryptfs_dentry_free_rcu);
> > > + kfree_rcu(p, rcu);
> > > }
> > > }
> > >
> > > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > > index a20c2c9d7d45..eba5083504c7 100644
> > > --- a/fs/nfsd/nfs4state.c
> > > +++ b/fs/nfsd/nfs4state.c
> > > @@ -571,13 +571,6 @@ opaque_hashval(const void *ptr, int nbytes)
> > > return x;
> > > }
> > >
> > > -static void nfsd4_free_file_rcu(struct rcu_head *rcu)
> > > -{
> > > - struct nfs4_file *fp = container_of(rcu, struct nfs4_file, fi_rcu);
> > > -
> > > - kmem_cache_free(file_slab, fp);
> > > -}
> > > -
> > > void
> > > put_nfs4_file(struct nfs4_file *fi)
> > > {
> > > @@ -585,7 +578,7 @@ put_nfs4_file(struct nfs4_file *fi)
> > > nfsd4_file_hash_remove(fi);
> > > WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
> > > WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
> > > - call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
> > > + kfree_rcu(fi, fi_rcu);
> > > }
> > > }
> > >
> > > diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
> > > index 7c29f4afc23d..338c52168e61 100644
> > > --- a/fs/tracefs/inode.c
> > > +++ b/fs/tracefs/inode.c
> > > @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct super_block *sb)
> > > return &ti->vfs_inode;
> > > }
> > >
> > > -static void tracefs_free_inode_rcu(struct rcu_head *rcu)
> > > -{
> > > - struct tracefs_inode *ti;
> > > -
> > > - ti = container_of(rcu, struct tracefs_inode, rcu);
> > > - kmem_cache_free(tracefs_inode_cachep, ti);
> > > -}
> > > -
> > > static void tracefs_free_inode(struct inode *inode)
> > > {
> > > struct tracefs_inode *ti = get_tracefs(inode);
> > > @@ -70,7 +62,7 @@ static void tracefs_free_inode(struct inode *inode)
> > > list_del_rcu(&ti->list);
> > > spin_unlock_irqrestore(&tracefs_inode_lock, flags);
> > >
> > > - call_rcu(&ti->rcu, tracefs_free_inode_rcu);
> > > + kfree_rcu(ti, rcu);
> > > }
> > >
> > > static ssize_t default_read_file(struct file *file, char __user *buf,
> > > diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
> > > index b924f0f096fa..bad5db979664 100644
> > > --- a/kernel/time/posix-timers.c
> > > +++ b/kernel/time/posix-timers.c
> > > @@ -412,18 +412,11 @@ static struct k_itimer * alloc_posix_timer(void)
> > > return tmr;
> > > }
> > >
> > > -static void k_itimer_rcu_free(struct rcu_head *head)
> > > -{
> > > - struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
> > > -
> > > - kmem_cache_free(posix_timers_cache, tmr);
> > > -}
> > > -
> > > static void posix_timer_free(struct k_itimer *tmr)
> > > {
> > > put_pid(tmr->it_pid);
> > > sigqueue_free(tmr->sigq);
> > > - call_rcu(&tmr->rcu, k_itimer_rcu_free);
> > > + kfree_rcu(tmr, rcu);
> > > }
> > >
> > > static void posix_timer_unhash_and_free(struct k_itimer *tmr)
> > > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > > index 003474c9a77d..367fc459cbd2 100644
> > > --- a/kernel/workqueue.c
> > > +++ b/kernel/workqueue.c
> > > @@ -5022,12 +5022,6 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> > > return NULL;
> > > }
> > >
> > > -static void rcu_free_pwq(struct rcu_head *rcu)
> > > -{
> > > - kmem_cache_free(pwq_cache,
> > > - container_of(rcu, struct pool_workqueue, rcu));
> > > -}
> > > -
> > > /*
> > > * Scheduled on pwq_release_worker by put_pwq() when an unbound pwq hits zero
> > > * refcnt and needs to be destroyed.
> > > @@ -5073,7 +5067,7 @@ static void pwq_release_workfn(struct kthread_work *work)
> > > raw_spin_unlock_irq(&nna->lock);
> > > }
> > >
> > > - call_rcu(&pwq->rcu, rcu_free_pwq);
> > > + kfree_rcu(pwq, rcu);
> > >
> > > /*
> > > * If we're the last pwq going away, @wq is already dead and no one
> > > diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
> > > index c77591e63841..6d04b48f7e2c 100644
> > > --- a/net/bridge/br_fdb.c
> > > +++ b/net/bridge/br_fdb.c
> > > @@ -73,13 +73,6 @@ static inline int has_expired(const struct net_bridge *br,
> > > time_before_eq(fdb->updated + hold_time(br), jiffies);
> > > }
> > >
> > > -static void fdb_rcu_free(struct rcu_head *head)
> > > -{
> > > - struct net_bridge_fdb_entry *ent
> > > - = container_of(head, struct net_bridge_fdb_entry, rcu);
> > > - kmem_cache_free(br_fdb_cache, ent);
> > > -}
> > > -
> > > static int fdb_to_nud(const struct net_bridge *br,
> > > const struct net_bridge_fdb_entry *fdb)
> > > {
> > > @@ -329,7 +322,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
> > > if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
> > > atomic_dec(&br->fdb_n_learned);
> > > fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
> > > - call_rcu(&f->rcu, fdb_rcu_free);
> > > + kfree_rcu(f, rcu);
> > > }
> > >
> > > /* Delete a local entry if no other port had the same address.
> > > diff --git a/net/can/gw.c b/net/can/gw.c
> > > index 37528826935e..ffb9870e2d01 100644
> > > --- a/net/can/gw.c
> > > +++ b/net/can/gw.c
> > > @@ -577,13 +577,6 @@ static inline void cgw_unregister_filter(struct net *net, struct cgw_job *gwj)
> > > gwj->ccgw.filter.can_mask, can_can_gw_rcv, gwj);
> > > }
> > >
> > > -static void cgw_job_free_rcu(struct rcu_head *rcu_head)
> > > -{
> > > - struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
> > > -
> > > - kmem_cache_free(cgw_cache, gwj);
> > > -}
> > > -
> > > static int cgw_notifier(struct notifier_block *nb,
> > > unsigned long msg, void *ptr)
> > > {
> > > @@ -603,7 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
> > > if (gwj->src.dev == dev || gwj->dst.dev == dev) {
> > > hlist_del(&gwj->list);
> > > cgw_unregister_filter(net, gwj);
> > > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > > + kfree_rcu(gwj, rcu);
> > > }
> > > }
> > > }
> > > @@ -1168,7 +1161,7 @@ static void cgw_remove_all_jobs(struct net *net)
> > > hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
> > > hlist_del(&gwj->list);
> > > cgw_unregister_filter(net, gwj);
> > > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > > + kfree_rcu(gwj, rcu);
> > > }
> > > }
> > >
> > > @@ -1236,7 +1229,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
> > >
> > > hlist_del(&gwj->list);
> > > cgw_unregister_filter(net, gwj);
> > > - call_rcu(&gwj->rcu, cgw_job_free_rcu);
> > > + kfree_rcu(gwj, rcu);
> > > err = 0;
> > > break;
> > > }
> > > diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> > > index f474106464d2..3ed92e583417 100644
> > > --- a/net/ipv4/fib_trie.c
> > > +++ b/net/ipv4/fib_trie.c
> > > @@ -292,15 +292,9 @@ static const int inflate_threshold = 50;
> > > static const int halve_threshold_root = 15;
> > > static const int inflate_threshold_root = 30;
> > >
> > > -static void __alias_free_mem(struct rcu_head *head)
> > > -{
> > > - struct fib_alias *fa = container_of(head, struct fib_alias, rcu);
> > > - kmem_cache_free(fn_alias_kmem, fa);
> > > -}
> > > -
> > > static inline void alias_free_mem_rcu(struct fib_alias *fa)
> > > {
> > > - call_rcu(&fa->rcu, __alias_free_mem);
> > > + kfree_rcu(fa, rcu);
> > > }
> > >
> > > #define TNODE_VMALLOC_MAX \
> > > diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
> > > index 5bd759963451..5ab56f4cb529 100644
> > > --- a/net/ipv4/inetpeer.c
> > > +++ b/net/ipv4/inetpeer.c
> > > @@ -128,11 +128,6 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
> > > return NULL;
> > > }
> > >
> > > -static void inetpeer_free_rcu(struct rcu_head *head)
> > > -{
> > > - kmem_cache_free(peer_cachep, container_of(head, struct inet_peer, rcu));
> > > -}
> > > -
> > > /* perform garbage collect on all items stacked during a lookup */
> > > static void inet_peer_gc(struct inet_peer_base *base,
> > > struct inet_peer *gc_stack[],
> > > @@ -168,7 +163,7 @@ static void inet_peer_gc(struct inet_peer_base *base,
> > > if (p) {
> > > rb_erase(&p->rb_node, &base->rb_root);
> > > base->total--;
> > > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > > + kfree_rcu(p, rcu);
> > > }
> > > }
> > > }
> > > @@ -242,7 +237,7 @@ void inet_putpeer(struct inet_peer *p)
> > > WRITE_ONCE(p->dtime, (__u32)jiffies);
> > >
> > > if (refcount_dec_and_test(&p->refcnt))
> > > - call_rcu(&p->rcu, inetpeer_free_rcu);
> > > + kfree_rcu(p, rcu);
> > > }
> > > EXPORT_SYMBOL_GPL(inet_putpeer);
> > >
> > > diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
> > > index 31d77885bcae..bafc49de270e 100644
> > > --- a/net/ipv6/ip6_fib.c
> > > +++ b/net/ipv6/ip6_fib.c
> > > @@ -198,16 +198,9 @@ static void node_free_immediate(struct net *net, struct fib6_node *fn)
> > > net->ipv6.rt6_stats->fib_nodes--;
> > > }
> > >
> > > -static void node_free_rcu(struct rcu_head *head)
> > > -{
> > > - struct fib6_node *fn = container_of(head, struct fib6_node, rcu);
> > > -
> > > - kmem_cache_free(fib6_node_kmem, fn);
> > > -}
> > > -
> > > static void node_free(struct net *net, struct fib6_node *fn)
> > > {
> > > - call_rcu(&fn->rcu, node_free_rcu);
> > > + kfree_rcu(fn, rcu);
> > > net->ipv6.rt6_stats->fib_nodes--;
> > > }
> > >
> > > diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
> > > index bf140ef781c1..c3c893ddb6ee 100644
> > > --- a/net/ipv6/xfrm6_tunnel.c
> > > +++ b/net/ipv6/xfrm6_tunnel.c
> > > @@ -178,12 +178,6 @@ __be32 xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr)
> > > }
> > > EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
> > >
> > > -static void x6spi_destroy_rcu(struct rcu_head *head)
> > > -{
> > > - kmem_cache_free(xfrm6_tunnel_spi_kmem,
> > > - container_of(head, struct xfrm6_tunnel_spi, rcu_head));
> > > -}
> > > -
> > > static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > > {
> > > struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
> > > @@ -200,7 +194,7 @@ static void xfrm6_tunnel_free_spi(struct net *net, xfrm_address_t *saddr)
> > > if (refcount_dec_and_test(&x6spi->refcnt)) {
> > > hlist_del_rcu(&x6spi->list_byaddr);
> > > hlist_del_rcu(&x6spi->list_byspi);
> > > - call_rcu(&x6spi->rcu_head, x6spi_destroy_rcu);
> > > + kfree_rcu(x6spi, rcu_head);
> > > break;
> > > }
> > > }
> > > diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> > > index 2f191e50d4fc..fbb730cd2d38 100644
> > > --- a/net/kcm/kcmsock.c
> > > +++ b/net/kcm/kcmsock.c
> > > @@ -1580,14 +1580,6 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> > > return err;
> > > }
> > >
> > > -static void free_mux(struct rcu_head *rcu)
> > > -{
> > > - struct kcm_mux *mux = container_of(rcu,
> > > - struct kcm_mux, rcu);
> > > -
> > > - kmem_cache_free(kcm_muxp, mux);
> > > -}
> > > -
> > > static void release_mux(struct kcm_mux *mux)
> > > {
> > > struct kcm_net *knet = mux->knet;
> > > @@ -1615,7 +1607,7 @@ static void release_mux(struct kcm_mux *mux)
> > > knet->count--;
> > > mutex_unlock(&knet->mutex);
> > >
> > > - call_rcu(&mux->rcu, free_mux);
> > > + kfree_rcu(mux, rcu);
> > > }
> > >
> > > static void kcm_done(struct kcm_sock *kcm)
> > > diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
> > > index 8715617b02fe..587bfcb79723 100644
> > > --- a/net/netfilter/nf_conncount.c
> > > +++ b/net/netfilter/nf_conncount.c
> > > @@ -275,14 +275,6 @@ bool nf_conncount_gc_list(struct net *net,
> > > }
> > > EXPORT_SYMBOL_GPL(nf_conncount_gc_list);
> > >
> > > -static void __tree_nodes_free(struct rcu_head *h)
> > > -{
> > > - struct nf_conncount_rb *rbconn;
> > > -
> > > - rbconn = container_of(h, struct nf_conncount_rb, rcu_head);
> > > - kmem_cache_free(conncount_rb_cachep, rbconn);
> > > -}
> > > -
> > > /* caller must hold tree nf_conncount_locks[] lock */
> > > static void tree_nodes_free(struct rb_root *root,
> > > struct nf_conncount_rb *gc_nodes[],
> > > @@ -295,7 +287,7 @@ static void tree_nodes_free(struct rb_root *root,
> > > spin_lock(&rbconn->list.list_lock);
> > > if (!rbconn->list.count) {
> > > rb_erase(&rbconn->node, root);
> > > - call_rcu(&rbconn->rcu_head, __tree_nodes_free);
> > > + kfree_rcu(rbconn, rcu_head);
> > > }
> > > spin_unlock(&rbconn->list.list_lock);
> > > }
> > > diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
> > > index 21fa550966f0..9dcaef6f3663 100644
> > > --- a/net/netfilter/nf_conntrack_expect.c
> > > +++ b/net/netfilter/nf_conntrack_expect.c
> > > @@ -367,18 +367,10 @@ void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class,
> > > }
> > > EXPORT_SYMBOL_GPL(nf_ct_expect_init);
> > >
> > > -static void nf_ct_expect_free_rcu(struct rcu_head *head)
> > > -{
> > > - struct nf_conntrack_expect *exp;
> > > -
> > > - exp = container_of(head, struct nf_conntrack_expect, rcu);
> > > - kmem_cache_free(nf_ct_expect_cachep, exp);
> > > -}
> > > -
> > > void nf_ct_expect_put(struct nf_conntrack_expect *exp)
> > > {
> > > if (refcount_dec_and_test(&exp->use))
> > > - call_rcu(&exp->rcu, nf_ct_expect_free_rcu);
> > > + kfree_rcu(exp, rcu);
> > > }
> > > EXPORT_SYMBOL_GPL(nf_ct_expect_put);
> > >
> > > diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> > > index 0859b8f76764..c2b9b954eb53 100644
> > > --- a/net/netfilter/xt_hashlimit.c
> > > +++ b/net/netfilter/xt_hashlimit.c
> > > @@ -256,18 +256,11 @@ dsthash_alloc_init(struct xt_hashlimit_htable *ht,
> > > return ent;
> > > }
> > >
> > > -static void dsthash_free_rcu(struct rcu_head *head)
> > > -{
> > > - struct dsthash_ent *ent = container_of(head, struct dsthash_ent, rcu);
> > > -
> > > - kmem_cache_free(hashlimit_cachep, ent);
> > > -}
> > > -
> > > static inline void
> > > dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> > > {
> > > hlist_del_rcu(&ent->node);
> > > - call_rcu(&ent->rcu, dsthash_free_rcu);
> > > + kfree_rcu(ent, rcu);
> > > ht->count--;
> > > }
> > > static void htable_gc(struct work_struct *work);
> > Looks good. What about the extra users below:
> >
> > lima_sched.c
> > ipmr.c
> > ip6mr.c
> >
> > <snip>
> > diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> > index bbf3f8feab94..21ec6fbe732e 100644
> > --- a/drivers/gpu/drm/lima/lima_sched.c
> > +++ b/drivers/gpu/drm/lima/lima_sched.c
> > @@ -64,19 +64,11 @@ static const char *lima_fence_get_timeline_name(struct dma_fence *fence)
> > return f->pipe->base.name;
> > }
> >
> > -static void lima_fence_release_rcu(struct rcu_head *rcu)
> > -{
> > - struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
> > - struct lima_fence *fence = to_lima_fence(f);
> > -
> > - kmem_cache_free(lima_fence_slab, fence);
> > -}
> > -
> > static void lima_fence_release(struct dma_fence *fence)
> > {
> > struct lima_fence *f = to_lima_fence(fence);
> >
> > - call_rcu(&f->base.rcu, lima_fence_release_rcu);
> > + kfree_rcu(f, base.rcu);
> > }
> >
> > static const struct dma_fence_ops lima_fence_ops = {
> > diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
> > index 6c750bd13dd8..7a9496e51dc8 100644
> > --- a/net/ipv4/ipmr.c
> > +++ b/net/ipv4/ipmr.c
> > @@ -711,7 +711,7 @@ static void ipmr_cache_free_rcu(struct rcu_head *head)
> >
> > static void ipmr_cache_free(struct mfc_cache *c)
> > {
> > - call_rcu(&c->_c.rcu, ipmr_cache_free_rcu);
> > + kfree_rcu(c, _c.rcu);
> > }
> >
> > /* Destroy an unresolved cache entry, killing queued skbs
> > diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
> > index dd342e6ecf3f..c236f2d0c88f 100644
> > --- a/net/ipv6/ip6mr.c
> > +++ b/net/ipv6/ip6mr.c
> > @@ -762,7 +762,7 @@ static inline void ip6mr_cache_free_rcu(struct rcu_head *head)
> >
> > static inline void ip6mr_cache_free(struct mfc6_cache *c)
> > {
> > - call_rcu(&c->_c.rcu, ip6mr_cache_free_rcu);
> > + kfree_rcu(c, _c.rcu);
> > }
> >
> > /* Destroy an unresolved cache entry, killing queued skbs
> > <snip>
> >
> > ?
>
> These look ok to me. In the last two cases, the callback function is also
> stored in a data structure, eg:
>
> static struct mfc6_cache *ip6mr_cache_alloc(void)
> {
> struct mfc6_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);
> if (!c)
> return NULL;
> c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
> c->_c.mfc_un.res.minvif = MAXMIFS;
> c->_c.free = ip6mr_cache_free_rcu;
> refcount_set(&c->_c.mfc_un.res.refcount, 1);
> return c;
> }
>
> Should that be left as it is?
Given that ->_c.free isn't used in the RCU callback, I am guessing that
this is intended for debugging purposes, so that you can see from a crash
dump how this will be freed. But I could be completely off-base here.
One approach would be to remove the ->_c.free field and call attention
to this in the patches' commit logs.
Another would be to instead put the address of the allocation function
in ->_c.free, and again call attention to this in the commit logs.
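For concreteness, an untested sketch of that second option, reusing ip6mr_cache_alloc() as the example, might look something like the following. The cast is only there because ->_c.free is currently typed as an RCU callback and nothing would ever call through it, so this would be a debugging breadcrumb rather than a functional change:

	static struct mfc6_cache *ip6mr_cache_alloc(void)
	{
		struct mfc6_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);

		if (!c)
			return NULL;
		c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
		c->_c.mfc_un.res.minvif = MAXMIFS;
		/* Debugging breadcrumb only; never called through this field. */
		c->_c.free = (void (*)(struct rcu_head *))ip6mr_cache_alloc;
		refcount_set(&c->_c.mfc_un.res.refcount, 1);
		return c;
	}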
Is there a better approach than these three? ;-)
Thanx, Paul
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-06-09 17:03 ` Paul E. McKenney
@ 2024-06-11 16:40 ` Uladzislau Rezki
2024-06-11 17:25 ` Paul E. McKenney
0 siblings, 1 reply; 24+ messages in thread
From: Uladzislau Rezki @ 2024-06-11 16:40 UTC (permalink / raw)
To: Paul E. McKenney, Julia Lawall
Cc: Julia Lawall, Uladzislau Rezki, Vlastimil Babka, linux-mm, RCU, cocci
> >
> > These look ok to me. In the last two cases, the callback function is also
> > stored in a data structure, eg:
> >
> > static struct mfc6_cache *ip6mr_cache_alloc(void)
> > {
> > struct mfc6_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);
> > if (!c)
> > return NULL;
> > c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
> > c->_c.mfc_un.res.minvif = MAXMIFS;
> > c->_c.free = ip6mr_cache_free_rcu;
> > refcount_set(&c->_c.mfc_un.res.refcount, 1);
> > return c;
> > }
> >
> > Should that be left as it is?
>
> Given that ->_c.free isn't used in the RCU callback, I am guessing that
> this is intended for debugging purposes, so that you can see from a crash
> dump how this will be freed. But I could be completely off-base here.
>
> One approach would be to remove the ->_c.free field and call attention
> to this in the patches' commit logs.
>
> Another would be to instead put the address of the allocation function
> in ->_c.free, and again call attention to this in the commit logs.
>
> Is there a better approach than these three? ;-)
>
IMO, "_c.free" should be removed:
diff --git a/include/linux/mroute_base.h b/include/linux/mroute_base.h
index 9dd4bf157255..45f220f761df 100644
--- a/include/linux/mroute_base.h
+++ b/include/linux/mroute_base.h
@@ -130,7 +130,6 @@ enum {
* @refcount: reference count for this entry
* @list: global entry list
* @rcu: used for entry destruction
- * @free: Operation used for freeing an entry under RCU
*/
struct mr_mfc {
struct rhlist_head mnode;
@@ -156,13 +155,12 @@ struct mr_mfc {
} mfc_un;
struct list_head list;
struct rcu_head rcu;
- void (*free)(struct rcu_head *head);
};
static inline void mr_cache_put(struct mr_mfc *c)
{
if (refcount_dec_and_test(&c->mfc_un.res.refcount))
- call_rcu(&c->rcu, c->free);
+ kfree_rcu(c, rcu);
}
static inline void mr_cache_hold(struct mr_mfc *c)
diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
index 6c750bd13dd8..5d2e339f09cc 100644
--- a/net/ipv4/ipmr.c
+++ b/net/ipv4/ipmr.c
@@ -702,16 +702,9 @@ static int vif_delete(struct mr_table *mrt, int vifi, int notify,
return 0;
}
-static void ipmr_cache_free_rcu(struct rcu_head *head)
-{
- struct mr_mfc *c = container_of(head, struct mr_mfc, rcu);
-
- kmem_cache_free(mrt_cachep, (struct mfc_cache *)c);
-}
-
static void ipmr_cache_free(struct mfc_cache *c)
{
- call_rcu(&c->_c.rcu, ipmr_cache_free_rcu);
+ kfree_rcu(c, _c.rcu);
}
/* Destroy an unresolved cache entry, killing queued skbs
@@ -959,7 +952,6 @@ static struct mfc_cache *ipmr_cache_alloc(void)
if (c) {
c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
c->_c.mfc_un.res.minvif = MAXVIFS;
- c->_c.free = ipmr_cache_free_rcu;
refcount_set(&c->_c.mfc_un.res.refcount, 1);
}
return c;
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
index dd342e6ecf3f..1634fa794ea2 100644
--- a/net/ipv6/ip6mr.c
+++ b/net/ipv6/ip6mr.c
@@ -753,16 +753,9 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
return 0;
}
-static inline void ip6mr_cache_free_rcu(struct rcu_head *head)
-{
- struct mr_mfc *c = container_of(head, struct mr_mfc, rcu);
-
- kmem_cache_free(mrt_cachep, (struct mfc6_cache *)c);
-}
-
static inline void ip6mr_cache_free(struct mfc6_cache *c)
{
- call_rcu(&c->_c.rcu, ip6mr_cache_free_rcu);
+ kfree_rcu(c, _c.rcu);
}
/* Destroy an unresolved cache entry, killing queued skbs
@@ -985,7 +978,6 @@ static struct mfc6_cache *ip6mr_cache_alloc(void)
return NULL;
c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
c->_c.mfc_un.res.minvif = MAXMIFS;
- c->_c.free = ip6mr_cache_free_rcu;
refcount_set(&c->_c.mfc_un.res.refcount, 1);
return c;
}
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-06-11 16:40 ` Uladzislau Rezki
@ 2024-06-11 17:25 ` Paul E. McKenney
2024-06-11 17:27 ` Uladzislau Rezki
0 siblings, 1 reply; 24+ messages in thread
From: Paul E. McKenney @ 2024-06-11 17:25 UTC (permalink / raw)
To: Uladzislau Rezki; +Cc: Julia Lawall, Vlastimil Babka, linux-mm, RCU, cocci
On Tue, Jun 11, 2024 at 06:40:49PM +0200, Uladzislau Rezki wrote:
> > >
> > > These look ok to me. In the last two cases, the callback function is also
> > > stored in a data structure, eg:
> > >
> > > static struct mfc6_cache *ip6mr_cache_alloc(void)
> > > {
> > > struct mfc6_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);
> > > if (!c)
> > > return NULL;
> > > c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
> > > c->_c.mfc_un.res.minvif = MAXMIFS;
> > > c->_c.free = ip6mr_cache_free_rcu;
> > > refcount_set(&c->_c.mfc_un.res.refcount, 1);
> > > return c;
> > > }
> > >
> > > Should that be left as it is?
> >
> > Given that ->_c.free isn't used in the RCU callback, I am guessing that
> > this is intended for debugging purposes, so that you can see from a crash
> > dump how this will be freed. But I could be completely off-base here.
> >
> > One approach would be to remove the ->_c.free field and call attention
> > to this in the patches' commit logs.
> >
> > Another would be to instead put the address of the allocation function
> > in ->_c.free, and again call attention to this in the commit logs.
> >
> > Is there a better approach than these three? ;-)
> >
> IMO, "_c.free" should be removed:
Why not send the patch and see what the maintainers say? If they object,
you can always fall back to one of the other two methods, depending on
the nature of their objection.
Thanx, Paul
> diff --git a/include/linux/mroute_base.h b/include/linux/mroute_base.h
> index 9dd4bf157255..45f220f761df 100644
> --- a/include/linux/mroute_base.h
> +++ b/include/linux/mroute_base.h
> @@ -130,7 +130,6 @@ enum {
> * @refcount: reference count for this entry
> * @list: global entry list
> * @rcu: used for entry destruction
> - * @free: Operation used for freeing an entry under RCU
> */
> struct mr_mfc {
> struct rhlist_head mnode;
> @@ -156,13 +155,12 @@ struct mr_mfc {
> } mfc_un;
> struct list_head list;
> struct rcu_head rcu;
> - void (*free)(struct rcu_head *head);
> };
>
> static inline void mr_cache_put(struct mr_mfc *c)
> {
> if (refcount_dec_and_test(&c->mfc_un.res.refcount))
> - call_rcu(&c->rcu, c->free);
> + kfree_rcu(c, rcu);
> }
>
> static inline void mr_cache_hold(struct mr_mfc *c)
> diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
> index 6c750bd13dd8..5d2e339f09cc 100644
> --- a/net/ipv4/ipmr.c
> +++ b/net/ipv4/ipmr.c
> @@ -702,16 +702,9 @@ static int vif_delete(struct mr_table *mrt, int vifi, int notify,
> return 0;
> }
>
> -static void ipmr_cache_free_rcu(struct rcu_head *head)
> -{
> - struct mr_mfc *c = container_of(head, struct mr_mfc, rcu);
> -
> - kmem_cache_free(mrt_cachep, (struct mfc_cache *)c);
> -}
> -
> static void ipmr_cache_free(struct mfc_cache *c)
> {
> - call_rcu(&c->_c.rcu, ipmr_cache_free_rcu);
> + kfree_rcu(c, _c.rcu);
> }
>
> /* Destroy an unresolved cache entry, killing queued skbs
> @@ -959,7 +952,6 @@ static struct mfc_cache *ipmr_cache_alloc(void)
> if (c) {
> c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
> c->_c.mfc_un.res.minvif = MAXVIFS;
> - c->_c.free = ipmr_cache_free_rcu;
> refcount_set(&c->_c.mfc_un.res.refcount, 1);
> }
> return c;
> diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
> index dd342e6ecf3f..1634fa794ea2 100644
> --- a/net/ipv6/ip6mr.c
> +++ b/net/ipv6/ip6mr.c
> @@ -753,16 +753,9 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
> return 0;
> }
>
> -static inline void ip6mr_cache_free_rcu(struct rcu_head *head)
> -{
> - struct mr_mfc *c = container_of(head, struct mr_mfc, rcu);
> -
> - kmem_cache_free(mrt_cachep, (struct mfc6_cache *)c);
> -}
> -
> static inline void ip6mr_cache_free(struct mfc6_cache *c)
> {
> - call_rcu(&c->_c.rcu, ip6mr_cache_free_rcu);
> + kfree_rcu(c, _c.rcu);
> }
>
> /* Destroy an unresolved cache entry, killing queued skbs
> @@ -985,7 +978,6 @@ static struct mfc6_cache *ip6mr_cache_alloc(void)
> return NULL;
> c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
> c->_c.mfc_un.res.minvif = MAXMIFS;
> - c->_c.free = ip6mr_cache_free_rcu;
> refcount_set(&c->_c.mfc_un.res.refcount, 1);
> return c;
> }
>
> --
> Uladzislau Rezki
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [cocci] patch idea: convert trivial call_rcu users to kfree_rcu
2024-06-11 17:25 ` Paul E. McKenney
@ 2024-06-11 17:27 ` Uladzislau Rezki
0 siblings, 0 replies; 24+ messages in thread
From: Uladzislau Rezki @ 2024-06-11 17:27 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Uladzislau Rezki, Julia Lawall, Vlastimil Babka, linux-mm, RCU, cocci
On Tue, Jun 11, 2024 at 10:25:52AM -0700, Paul E. McKenney wrote:
> On Tue, Jun 11, 2024 at 06:40:49PM +0200, Uladzislau Rezki wrote:
> > > >
> > > > These look ok to me. In the last two cases, the callback function is also
> > > > stored in a data structure, eg:
> > > >
> > > > static struct mfc6_cache *ip6mr_cache_alloc(void)
> > > > {
> > > > struct mfc6_cache *c = kmem_cache_zalloc(mrt_cachep, GFP_KERNEL);
> > > > if (!c)
> > > > return NULL;
> > > > c->_c.mfc_un.res.last_assert = jiffies - MFC_ASSERT_THRESH - 1;
> > > > c->_c.mfc_un.res.minvif = MAXMIFS;
> > > > c->_c.free = ip6mr_cache_free_rcu;
> > > > refcount_set(&c->_c.mfc_un.res.refcount, 1);
> > > > return c;
> > > > }
> > > >
> > > > Should that be left as it is?
> > >
> > > Given that ->_c.free isn't used in the RCU callback, I am guessing that
> > > this is intended for debugging purposes, so that you can see from a crash
> > > dump how this will be freed. But I could be completely off-base here.
> > >
> > > One approach would be to remove the ->_c.free field and call attention
> > > to this in the patches' commit logs.
> > >
> > > Another would be to instead put the address of the allocation function
> > > in ->_c.free, and again call attention to this in the commit logs.
> > >
> > > Is there a better approach than these three? ;-)
> > >
> > IMO, "_c.free" should be removed:
>
> Why not send the patch and see what the maintainers say? If they object,
> you can always fall back to one of the other two methods, depending on
> the nature of their objection.
>
I can send the patch :)
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 24+ messages in thread
Thread overview: 24+ messages
2024-05-27 7:46 patch idea: convert trivial call_rcu users to kfree_rcu Vlastimil Babka
2024-05-27 8:13 ` [cocci] " Julia Lawall
2024-05-27 19:27 ` Paul E. McKenney
2024-05-27 19:46 ` Uladzislau Rezki
2024-05-27 19:51 ` Julia Lawall
2024-05-27 20:36 ` Uladzislau Rezki
2024-05-27 20:43 ` Julia Lawall
2024-05-27 21:48 ` Paul E. McKenney
2024-05-28 5:09 ` Julia Lawall
2024-05-28 12:03 ` Uladzislau Rezki
2024-05-28 12:08 ` Julia Lawall
2024-05-28 13:21 ` Uladzislau Rezki
2024-05-28 14:29 ` Paul E. McKenney
2024-05-31 16:02 ` Julia Lawall
2024-06-03 17:25 ` Paul E. McKenney
2024-06-03 19:29 ` Vlastimil Babka
2024-06-03 19:51 ` Julia Lawall
2024-06-04 11:26 ` Uladzislau Rezki
2024-06-09 8:32 ` Julia Lawall
2024-06-09 10:00 ` Julia Lawall
2024-06-09 17:03 ` Paul E. McKenney
2024-06-11 16:40 ` Uladzislau Rezki
2024-06-11 17:25 ` Paul E. McKenney
2024-06-11 17:27 ` Uladzislau Rezki