* [PATCH] percpu: fix data race in pcpu_alloc_noprof() and extend spinlock protection area
From: Jeongjun Park @ 2024-09-08 13:52 UTC
To: dennis, tj, cl, akpm; +Cc: linux-mm, linux-kernel, syzbot, Jeongjun Park
I got the following KCSAN report during syzbot testing:
==================================================================
BUG: KCSAN: data-race in pcpu_alloc_noprof / pcpu_free_area
read-write to 0xffffffff883f872c of 4 bytes by task 3378 on cpu 0:
pcpu_update_empty_pages mm/percpu.c:602 [inline]
pcpu_block_update_hint_free mm/percpu.c:1044 [inline]
pcpu_free_area+0x4dc/0x570 mm/percpu.c:1302
free_percpu+0x1c6/0xb30 mm/percpu.c:2277
xt_percpu_counter_free+0x63/0x80 net/netfilter/x_tables.c:1951
cleanup_entry+0x195/0x1c0 net/ipv6/netfilter/ip6_tables.c:671
__do_replace+0x470/0x580 net/ipv6/netfilter/ip6_tables.c:1099
do_replace net/ipv6/netfilter/ip6_tables.c:1158 [inline]
do_ip6t_set_ctl+0x820/0x8c0 net/ipv6/netfilter/ip6_tables.c:1644
nf_setsockopt+0x195/0x1b0 net/netfilter/nf_sockopt.c:101
ipv6_setsockopt+0x126/0x140 net/ipv6/ipv6_sockglue.c:998
tcp_setsockopt+0x93/0xb0 net/ipv4/tcp.c:3768
sock_common_setsockopt+0x64/0x80 net/core/sock.c:3735
do_sock_setsockopt net/socket.c:2324 [inline]
__sys_setsockopt+0x1d8/0x250 net/socket.c:2347
__do_sys_setsockopt net/socket.c:2356 [inline]
__se_sys_setsockopt net/socket.c:2353 [inline]
__x64_sys_setsockopt+0x66/0x80 net/socket.c:2353
x64_sys_call+0x278d/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:55
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x76/0x7e
read to 0xffffffff883f872c of 4 bytes by task 3374 on cpu 1:
pcpu_alloc_noprof+0x9a5/0x10c0 mm/percpu.c:1894
xt_percpu_counter_alloc+0x79/0x110 net/netfilter/x_tables.c:1931
find_check_entry net/ipv4/netfilter/ip_tables.c:526 [inline]
translate_table+0x921/0xf70 net/ipv4/netfilter/ip_tables.c:716
do_replace net/ipv4/netfilter/ip_tables.c:1137 [inline]
do_ipt_set_ctl+0x7bd/0x8b0 net/ipv4/netfilter/ip_tables.c:1635
nf_setsockopt+0x195/0x1b0 net/netfilter/nf_sockopt.c:101
ip_setsockopt+0xea/0x100 net/ipv4/ip_sockglue.c:1424
tcp_setsockopt+0x93/0xb0 net/ipv4/tcp.c:3768
sock_common_setsockopt+0x64/0x80 net/core/sock.c:3735
do_sock_setsockopt net/socket.c:2324 [inline]
__sys_setsockopt+0x1d8/0x250 net/socket.c:2347
__do_sys_setsockopt net/socket.c:2356 [inline]
__se_sys_setsockopt net/socket.c:2353 [inline]
__x64_sys_setsockopt+0x66/0x80 net/socket.c:2353
x64_sys_call+0x278d/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:55
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x76/0x7e
value changed: 0x00000005 -> 0x00000006
Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 3374 Comm: syz-executor.3 Not tainted 6.11.0-rc6-syzkaller-00326-gd1f2d51b711a-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
==================================================================
The global variable pcpu_nr_empty_pop_pages is supposed to be protected
by pcpu_lock, but pcpu_alloc_noprof() reads it outside the lock, so the
read can race with a concurrent update in pcpu_free_area() and the
conditional check may take the wrong branch. The read of
pcpu_nr_empty_pop_pages should therefore be performed while holding
pcpu_lock.
In addition, the for_each_clear_bitrange_from() loop already depends on
pcpu_lock, but it acquires and releases the lock on every iteration,
which is unnecessary.
Therefore, I think it is appropriate to drop the repeated spin_lock and
spin_unlock inside the for_each_clear_bitrange_from() loop, keep the
lock held until pcpu_nr_empty_pop_pages has been read, and only release
it after that point.
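As a rough illustration of the race (a minimal userspace sketch, not the
kernel code; the counter, lock and threshold names below are made up),
one thread updates a shared counter under a lock, the way pcpu_free_area()
updates pcpu_nr_empty_pop_pages under pcpu_lock, while another thread
checks the counter, the way pcpu_alloc_noprof() does. Taking the lock
around the check, as this patch does for the real counter, is what keeps
the read and the update from racing:

/* Userspace sketch only; illustrative names, not mm/percpu.c code. */
#include <pthread.h>
#include <stdio.h>

#define EMPTY_POP_PAGES_LOW 2		/* plays the role of PCPU_EMPTY_POP_PAGES_LOW */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_empty_pop_pages = 4;	/* plays the role of pcpu_nr_empty_pop_pages */

/* "free" path: updates the counter under the lock, like pcpu_free_area(). */
static void *freer(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&lock);
		nr_empty_pop_pages++;
		pthread_mutex_unlock(&lock);
		pthread_mutex_lock(&lock);
		nr_empty_pop_pages--;
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

/* "alloc" path: without the lock this read races with the updates above;
 * taking the lock around it, as the patch does, removes the race. */
static void *allocator(void *arg)
{
	int low = 0;

	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&lock);
		if (nr_empty_pop_pages < EMPTY_POP_PAGES_LOW)
			low++;		/* would correspond to scheduling balance work */
		pthread_mutex_unlock(&lock);
	}
	printf("low-watermark condition seen %d times\n", low);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, freer, NULL);
	pthread_create(&b, NULL, allocator, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Built with -pthread, ThreadSanitizer no longer flags the check once the
read happens under the lock.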
Reported-by: syzbot <syzkaller@googlegroups.com>
Fixes: e04d320838f5 ("percpu: indent the population block in pcpu_alloc()")
Signed-off-by: Jeongjun Park <aha310510@gmail.com>
---
mm/percpu.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index 20d91af8c033..5c958a54da51 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1864,7 +1864,6 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
area_found:
pcpu_stats_area_alloc(chunk, size);
- spin_unlock_irqrestore(&pcpu_lock, flags);
/* populate if not all pages are already there */
if (!is_atomic) {
@@ -1878,14 +1877,12 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
ret = pcpu_populate_chunk(chunk, rs, re, pcpu_gfp);
- spin_lock_irqsave(&pcpu_lock, flags);
if (ret) {
pcpu_free_area(chunk, off);
err = "failed to populate";
goto fail_unlock;
}
pcpu_chunk_populated(chunk, rs, re);
- spin_unlock_irqrestore(&pcpu_lock, flags);
}
mutex_unlock(&pcpu_alloc_mutex);
@@ -1894,6 +1891,8 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
pcpu_schedule_balance_work();
+ spin_unlock_irqrestore(&pcpu_lock, flags);
+
/* clear the areas and return address relative to base address */
for_each_possible_cpu(cpu)
memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
--
* Re: [PATCH] percpu: fix data race in pcpu_alloc_noprof() and extend spinlock protection area
From: Dennis Zhou @ 2024-09-08 18:53 UTC
To: Jeongjun Park; +Cc: dennis, tj, cl, akpm, linux-mm, linux-kernel, syzbot
Hello,
On Sun, Sep 08, 2024 at 10:52:10PM +0900, Jeongjun Park wrote:
> I got the following KCSAN report during syzbot testing:
>
> ==================================================================
> BUG: KCSAN: data-race in pcpu_alloc_noprof / pcpu_free_area
>
> read-write to 0xffffffff883f872c of 4 bytes by task 3378 on cpu 0:
> pcpu_update_empty_pages mm/percpu.c:602 [inline]
> pcpu_block_update_hint_free mm/percpu.c:1044 [inline]
> pcpu_free_area+0x4dc/0x570 mm/percpu.c:1302
> free_percpu+0x1c6/0xb30 mm/percpu.c:2277
> xt_percpu_counter_free+0x63/0x80 net/netfilter/x_tables.c:1951
> cleanup_entry+0x195/0x1c0 net/ipv6/netfilter/ip6_tables.c:671
> __do_replace+0x470/0x580 net/ipv6/netfilter/ip6_tables.c:1099
> do_replace net/ipv6/netfilter/ip6_tables.c:1158 [inline]
> do_ip6t_set_ctl+0x820/0x8c0 net/ipv6/netfilter/ip6_tables.c:1644
> nf_setsockopt+0x195/0x1b0 net/netfilter/nf_sockopt.c:101
> ipv6_setsockopt+0x126/0x140 net/ipv6/ipv6_sockglue.c:998
> tcp_setsockopt+0x93/0xb0 net/ipv4/tcp.c:3768
> sock_common_setsockopt+0x64/0x80 net/core/sock.c:3735
> do_sock_setsockopt net/socket.c:2324 [inline]
> __sys_setsockopt+0x1d8/0x250 net/socket.c:2347
> __do_sys_setsockopt net/socket.c:2356 [inline]
> __se_sys_setsockopt net/socket.c:2353 [inline]
> __x64_sys_setsockopt+0x66/0x80 net/socket.c:2353
> x64_sys_call+0x278d/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:55
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> read to 0xffffffff883f872c of 4 bytes by task 3374 on cpu 1:
> pcpu_alloc_noprof+0x9a5/0x10c0 mm/percpu.c:1894
> xt_percpu_counter_alloc+0x79/0x110 net/netfilter/x_tables.c:1931
> find_check_entry net/ipv4/netfilter/ip_tables.c:526 [inline]
> translate_table+0x921/0xf70 net/ipv4/netfilter/ip_tables.c:716
> do_replace net/ipv4/netfilter/ip_tables.c:1137 [inline]
> do_ipt_set_ctl+0x7bd/0x8b0 net/ipv4/netfilter/ip_tables.c:1635
> nf_setsockopt+0x195/0x1b0 net/netfilter/nf_sockopt.c:101
> ip_setsockopt+0xea/0x100 net/ipv4/ip_sockglue.c:1424
> tcp_setsockopt+0x93/0xb0 net/ipv4/tcp.c:3768
> sock_common_setsockopt+0x64/0x80 net/core/sock.c:3735
> do_sock_setsockopt net/socket.c:2324 [inline]
> __sys_setsockopt+0x1d8/0x250 net/socket.c:2347
> __do_sys_setsockopt net/socket.c:2356 [inline]
> __se_sys_setsockopt net/socket.c:2353 [inline]
> __x64_sys_setsockopt+0x66/0x80 net/socket.c:2353
> x64_sys_call+0x278d/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:55
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> value changed: 0x00000005 -> 0x00000006
>
> Reported by Kernel Concurrency Sanitizer on:
> CPU: 1 UID: 0 PID: 3374 Comm: syz-executor.3 Not tainted 6.11.0-rc6-syzkaller-00326-gd1f2d51b711a-dirty #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
> ==================================================================
>
> The global variable pcpu_nr_empty_pop_pages is supposed to be protected
> by pcpu_lock, but pcpu_alloc_noprof() reads it outside the lock, so the
> read can race with a concurrent update in pcpu_free_area() and the
> conditional check may take the wrong branch. The read of
> pcpu_nr_empty_pop_pages should therefore be performed while holding
> pcpu_lock.
>
> In addition, the for_each_clear_bitrange_from() loop already depends on
> pcpu_lock, but it acquires and releases the lock on every iteration,
> which is unnecessary.
>
> Therefore, I think it is appropriate to drop the repeated spin_lock and
> spin_unlock inside the for_each_clear_bitrange_from() loop, keep the
> lock held until pcpu_nr_empty_pop_pages has been read, and only release
> it after that point.
>
> Reported-by: syzbot <syzkaller@googlegroups.com>
> Fixes: e04d320838f5 ("percpu: indent the population block in pcpu_alloc()")
> Signed-off-by: Jeongjun Park <aha310510@gmail.com>
> ---
> mm/percpu.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 20d91af8c033..5c958a54da51 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1864,7 +1864,6 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
>
> area_found:
> pcpu_stats_area_alloc(chunk, size);
> - spin_unlock_irqrestore(&pcpu_lock, flags);
>
> /* populate if not all pages are already there */
> if (!is_atomic) {
> @@ -1878,14 +1877,12 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
>
> ret = pcpu_populate_chunk(chunk, rs, re, pcpu_gfp);
>
> - spin_lock_irqsave(&pcpu_lock, flags);
> if (ret) {
> pcpu_free_area(chunk, off);
> err = "failed to populate";
> goto fail_unlock;
> }
> pcpu_chunk_populated(chunk, rs, re);
> - spin_unlock_irqrestore(&pcpu_lock, flags);
> }
We don't want to do this because pcpu_populate_chunk() calls
alloc_pages_node(), which can block.
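For illustration only, a userspace C sketch of the pattern the existing
unlock/lock pair inside the loop implements (the names populate_pages()
and chunk_state are made up, this is not the percpu code): drop the lock
before the call that can sleep, then re-take it afterwards to publish the
result.

/* Userspace sketch only; illustrative names. Build with -pthread. */
#include <pthread.h>
#include <unistd.h>

static pthread_spinlock_t lock;
static int chunk_state;			/* stands in for the chunk metadata */

/* Stand-in for pcpu_populate_chunk(): may block (here it just sleeps). */
static int populate_pages(void)
{
	usleep(1000);
	return 0;
}

static int alloc_path(void)
{
	int ret;

	pthread_spin_lock(&lock);
	/* ... pick an area using the metadata ... */
	pthread_spin_unlock(&lock);	/* drop the lock: the next call may block */

	ret = populate_pages();		/* nobody spins on the lock while this sleeps */

	pthread_spin_lock(&lock);	/* re-take the lock to publish the result */
	if (ret == 0)
		chunk_state++;		/* stands in for pcpu_chunk_populated() */
	pthread_spin_unlock(&lock);

	return ret;
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	return alloc_path();
}

In the kernel case the constraint is stronger: pcpu_lock is taken with
spin_lock_irqsave(), i.e. with interrupts disabled, so sleeping while
holding it is not allowed at all.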
>
> mutex_unlock(&pcpu_alloc_mutex);
> @@ -1894,6 +1891,8 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
> pcpu_schedule_balance_work();
>
> + spin_unlock_irqrestore(&pcpu_lock, flags);
> +
> /* clear the areas and return address relative to base address */
> for_each_possible_cpu(cpu)
> memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
> --
I sent out [1], which is a more appropriate fix. I'll merge it later
today.
Thanks,
Dennis
[1] https://lore.kernel.org/lkml/20240906031151.80719-1-dennis@kernel.org/
* Re: [PATCH] percpu: fix data race in pcpu_alloc_noprof() and extend spinlock protection area
From: Jeongjun Park @ 2024-09-09 10:40 UTC
To: Dennis Zhou; +Cc: tj, cl, akpm, linux-mm, linux-kernel, syzbot
On Mon, Sep 9, 2024 at 3:53 AM, Dennis Zhou <dennis@kernel.org> wrote:
>
> Hello,
>
> On Sun, Sep 08, 2024 at 10:52:10PM +0900, Jeongjun Park wrote:
> > I got the following KCSAN report during syzbot testing:
> >
> > ==================================================================
> > BUG: KCSAN: data-race in pcpu_alloc_noprof / pcpu_free_area
> >
> > read-write to 0xffffffff883f872c of 4 bytes by task 3378 on cpu 0:
> > pcpu_update_empty_pages mm/percpu.c:602 [inline]
> > pcpu_block_update_hint_free mm/percpu.c:1044 [inline]
> > pcpu_free_area+0x4dc/0x570 mm/percpu.c:1302
> > free_percpu+0x1c6/0xb30 mm/percpu.c:2277
> > xt_percpu_counter_free+0x63/0x80 net/netfilter/x_tables.c:1951
> > cleanup_entry+0x195/0x1c0 net/ipv6/netfilter/ip6_tables.c:671
> > __do_replace+0x470/0x580 net/ipv6/netfilter/ip6_tables.c:1099
> > do_replace net/ipv6/netfilter/ip6_tables.c:1158 [inline]
> > do_ip6t_set_ctl+0x820/0x8c0 net/ipv6/netfilter/ip6_tables.c:1644
> > nf_setsockopt+0x195/0x1b0 net/netfilter/nf_sockopt.c:101
> > ipv6_setsockopt+0x126/0x140 net/ipv6/ipv6_sockglue.c:998
> > tcp_setsockopt+0x93/0xb0 net/ipv4/tcp.c:3768
> > sock_common_setsockopt+0x64/0x80 net/core/sock.c:3735
> > do_sock_setsockopt net/socket.c:2324 [inline]
> > __sys_setsockopt+0x1d8/0x250 net/socket.c:2347
> > __do_sys_setsockopt net/socket.c:2356 [inline]
> > __se_sys_setsockopt net/socket.c:2353 [inline]
> > __x64_sys_setsockopt+0x66/0x80 net/socket.c:2353
> > x64_sys_call+0x278d/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:55
> > do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> > do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
> > entry_SYSCALL_64_after_hwframe+0x76/0x7e
> >
> > read to 0xffffffff883f872c of 4 bytes by task 3374 on cpu 1:
> > pcpu_alloc_noprof+0x9a5/0x10c0 mm/percpu.c:1894
> > xt_percpu_counter_alloc+0x79/0x110 net/netfilter/x_tables.c:1931
> > find_check_entry net/ipv4/netfilter/ip_tables.c:526 [inline]
> > translate_table+0x921/0xf70 net/ipv4/netfilter/ip_tables.c:716
> > do_replace net/ipv4/netfilter/ip_tables.c:1137 [inline]
> > do_ipt_set_ctl+0x7bd/0x8b0 net/ipv4/netfilter/ip_tables.c:1635
> > nf_setsockopt+0x195/0x1b0 net/netfilter/nf_sockopt.c:101
> > ip_setsockopt+0xea/0x100 net/ipv4/ip_sockglue.c:1424
> > tcp_setsockopt+0x93/0xb0 net/ipv4/tcp.c:3768
> > sock_common_setsockopt+0x64/0x80 net/core/sock.c:3735
> > do_sock_setsockopt net/socket.c:2324 [inline]
> > __sys_setsockopt+0x1d8/0x250 net/socket.c:2347
> > __do_sys_setsockopt net/socket.c:2356 [inline]
> > __se_sys_setsockopt net/socket.c:2353 [inline]
> > __x64_sys_setsockopt+0x66/0x80 net/socket.c:2353
> > x64_sys_call+0x278d/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:55
> > do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> > do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
> > entry_SYSCALL_64_after_hwframe+0x76/0x7e
> >
> > value changed: 0x00000005 -> 0x00000006
> >
> > Reported by Kernel Concurrency Sanitizer on:
> > CPU: 1 UID: 0 PID: 3374 Comm: syz-executor.3 Not tainted 6.11.0-rc6-syzkaller-00326-gd1f2d51b711a-dirty #0
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
> > ==================================================================
> >
> > The global variable pcpu_nr_empty_pop_pages is supposed to be protected
> > by pcpu_lock, but pcpu_alloc_noprof() reads it outside the lock, so the
> > read can race with a concurrent update in pcpu_free_area() and the
> > conditional check may take the wrong branch. The read of
> > pcpu_nr_empty_pop_pages should therefore be performed while holding
> > pcpu_lock.
> >
> > In addition, the for_each_clear_bitrange_from() loop already depends on
> > pcpu_lock, but it acquires and releases the lock on every iteration,
> > which is unnecessary.
> >
> > Therefore, I think it is appropriate to drop the repeated spin_lock and
> > spin_unlock inside the for_each_clear_bitrange_from() loop, keep the
> > lock held until pcpu_nr_empty_pop_pages has been read, and only release
> > it after that point.
> >
> > Reported-by: syzbot <syzkaller@googlegroups.com>
> > Fixes: e04d320838f5 ("percpu: indent the population block in pcpu_alloc()")
> > Signed-off-by: Jeongjun Park <aha310510@gmail.com>
> > ---
> > mm/percpu.c | 5 ++---
> > 1 file changed, 2 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 20d91af8c033..5c958a54da51 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -1864,7 +1864,6 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> >
> > area_found:
> > pcpu_stats_area_alloc(chunk, size);
> > - spin_unlock_irqrestore(&pcpu_lock, flags);
> >
> > /* populate if not all pages are already there */
> > if (!is_atomic) {
> > @@ -1878,14 +1877,12 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> >
> > ret = pcpu_populate_chunk(chunk, rs, re, pcpu_gfp);
> >
> > - spin_lock_irqsave(&pcpu_lock, flags);
> > if (ret) {
> > pcpu_free_area(chunk, off);
> > err = "failed to populate";
> > goto fail_unlock;
> > }
> > pcpu_chunk_populated(chunk, rs, re);
> > - spin_unlock_irqrestore(&pcpu_lock, flags);
> > }
>
> We don't want to do this because pcpu_populate_chunk() calls
> alloc_pages_node(), which can block.
> >
> > mutex_unlock(&pcpu_alloc_mutex);
> > @@ -1894,6 +1891,8 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
> > if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
> > pcpu_schedule_balance_work();
> >
> > + spin_unlock_irqrestore(&pcpu_lock, flags);
> > +
> > /* clear the areas and return address relative to base address */
> > for_each_possible_cpu(cpu)
> > memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
> > --
>
> I sent out [1], which is a more appropriate fix. I'll merge it later
> today.
>
> Thanks,
> Dennis
>
> [1] https://lore.kernel.org/lkml/20240906031151.80719-1-dennis@kernel.org/
>
Oh, you already patched that bug a few days ago.
Sorry for taking up your time.
Regards,
Jeongjun Park