From: patchwork-bot+netdevbpf@kernel.org
To: Peilin Ye <yepeilin@google.com>
Cc: bpf@vger.kernel.org, ast@kernel.org, shakeel.butt@linux.dev,
hannes@cmpxchg.org, tj@kernel.org, roman.gushchin@linux.dev,
daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
haoluo@google.com, jolsa@kernel.org, memxor@gmail.com,
joshdon@google.com, brho@google.com, linux-mm@kvack.org,
leon.hwang@linux.dev
Subject: Re: [PATCH bpf v2] bpf/helpers: Use __GFP_HIGH instead of GFP_ATOMIC in __bpf_async_init()
Date: Tue, 09 Sep 2025 22:40:06 +0000
Message-ID: <175745760626.837268.10273222616654949199.git-patchwork-notify@kernel.org>
In-Reply-To: <20250909095222.2121438-1-yepeilin@google.com>
Hello:
This patch was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Tue, 9 Sep 2025 09:52:20 +0000 you wrote:
> Currently, calling bpf_map_kmalloc_node() from __bpf_async_init() can
> cause various locking issues; see the following stack trace (edited for
> style) as one example:
>
> ...
> [10.011566] do_raw_spin_lock.cold
> [10.011570] try_to_wake_up                (5) double-acquiring the same
> [10.011575] kick_pool                         rq_lock, causing a hardlockup
> [10.011579] __queue_work
> [10.011582] queue_work_on
> [10.011585] kernfs_notify
> [10.011589] cgroup_file_notify
> [10.011593] try_charge_memcg              (4) memcg accounting raises an
> [10.011597] obj_cgroup_charge_pages           MEMCG_MAX event
> [10.011599] obj_cgroup_charge_account
> [10.011600] __memcg_slab_post_alloc_hook
> [10.011603] __kmalloc_node_noprof
> ...
> [10.011611] bpf_map_kmalloc_node
> [10.011612] __bpf_async_init
> [10.011615] bpf_timer_init                (3) BPF calls bpf_timer_init()
> [10.011617] bpf_prog_xxxxxxxxxxxxxxxx_fcg_runnable
> [10.011619] bpf__sched_ext_ops_runnable
> [10.011620] enqueue_task_scx              (2) BPF runs with rq_lock held
> [10.011622] enqueue_task
> [10.011626] ttwu_do_activate
> [10.011629] sched_ttwu_pending            (1) grabs rq_lock
> ...
>
> [...]
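For context, the subject line describes a one-flag change at the allocation
site in __bpf_async_init(). The snippet below is a minimal sketch rather than
the applied diff: the call shape and the "cb" variable name are illustrative
assumptions, although bpf_map_kmalloc_node() is the helper named in the trace
above. GFP_ATOMIC expands to __GFP_HIGH | __GFP_KSWAPD_RECLAIM, so passing
bare __GFP_HIGH keeps the non-sleeping, high-priority allocation semantics
while dropping the kswapd-reclaim hint.

    /*
     * Sketch of the flag change described in the subject; the exact
     * context in kernel/bpf/helpers.c may differ.  "cb" is a placeholder
     * for the async callback object being allocated.
     */

    /* Before: GFP_ATOMIC == __GFP_HIGH | __GFP_KSWAPD_RECLAIM */
    cb = bpf_map_kmalloc_node(map, size, GFP_ATOMIC, map->numa_node);

    /* After: same high-priority, non-sleeping allocation, no reclaim hint */
    cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node);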
Here is the summary with links:
- [bpf,v2] bpf/helpers: Use __GFP_HIGH instead of GFP_ATOMIC in __bpf_async_init()
https://git.kernel.org/bpf/bpf/c/6d78b4473cdb
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html