From: 姜智伟 <qq282012236@gmail.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
akpm@linux-foundation.org, peterx@redhat.com,
asml.silence@gmail.com, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
io-uring@vger.kernel.org
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault scenarios
Date: Wed, 23 Apr 2025 11:11:03 +0800
Message-ID: <CANHzP_u3zN2a_t2O+BLwgV=KJZaXtANwXVq6VVD26TvF2hFL8Q@mail.gmail.com>
In-Reply-To: <CANHzP_tLV29_uk2gcRAjT9sJNVPH3rMyVuQP07q+c_TWWgsfDg@mail.gmail.com>

Sorry, I may have misunderstood. I thought your test case
was working correctly. My point is that in io-worker context,
schedule() goes through io_wq_worker_running(), which makes the flow
different from a normal process context, and in my trace it returns
early. I hope the call graph in my previous mail (quoted below) helps
make this clearer.
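
To spell it out in code, here is a much-simplified paraphrase of the
scheduler hooks (not the literal source; field names and flag handling
differ between kernel versions, so treat it only as an illustration):

    /* around the context switch inside schedule(), roughly: */
    if (tsk->flags & PF_IO_WORKER)
            io_wq_worker_sleeping(tsk);   /* io_wq_dec_running(): drop the
                                           * "running" accounting, possibly
                                           * queue creation of a new worker
                                           * for pending work */

    /* pick the next task; a normal task would stay blocked here until
     * the userfault is resolved, but with a signal pending (as you
     * suspect) the interruptible sleep falls through immediately */

    if (tsk->flags & PF_IO_WORKER)
            io_wq_worker_running(tsk);    /* mark the worker running again */

So the worker comes straight back out of schedule(), the fault is
retried, and it spins exactly as in the graph below.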
On Wed, Apr 23, 2025 at 10:49 AM 姜智伟 <qq282012236@gmail.com> wrote:
>
> On Wed, Apr 23, 2025 at 1:33 AM Jens Axboe <axboe@kernel.dk> wrote:
> >
> > > On 4/22/25 11:04 AM, 姜智伟 wrote:
> > > On Wed, Apr 23, 2025 at 12:32 AM Jens Axboe <axboe@kernel.dk> wrote:
> > >>
> > >> On 4/22/25 10:29 AM, Zhiwei Jiang wrote:
> > >>> diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
> > >>> index d4fb2940e435..8567a9c819db 100644
> > >>> --- a/io_uring/io-wq.h
> > >>> +++ b/io_uring/io-wq.h
> > >>> @@ -70,8 +70,10 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
> > >>> void *data, bool cancel_all);
> > >>>
> > >>> #if defined(CONFIG_IO_WQ)
> > >>> -extern void io_wq_worker_sleeping(struct task_struct *);
> > >>> -extern void io_wq_worker_running(struct task_struct *);
> > >>> +extern void io_wq_worker_sleeping(struct task_struct *tsk);
> > >>> +extern void io_wq_worker_running(struct task_struct *tsk);
> > >>> +extern void set_userfault_flag_for_ioworker(void);
> > >>> +extern void clear_userfault_flag_for_ioworker(void);
> > >>> #else
> > >>> static inline void io_wq_worker_sleeping(struct task_struct *tsk)
> > >>> {
> > >>> @@ -79,6 +81,12 @@ static inline void io_wq_worker_sleeping(struct task_struct *tsk)
> > >>> static inline void io_wq_worker_running(struct task_struct *tsk)
> > >>> {
> > >>> }
> > >>> +static inline void set_userfault_flag_for_ioworker(void)
> > >>> +{
> > >>> +}
> > >>> +static inline void clear_userfault_flag_for_ioworker(void)
> > >>> +{
> > >>> +}
> > >>> #endif
> > >>>
> > >>> static inline bool io_wq_current_is_worker(void)
> > >>
> > >> This should go in include/linux/io_uring.h and then userfaultfd would
> > >> not have to include io_uring private headers.
> > >>
> > >> But that's beside the point, like I said we still need to get to the
> > >> bottom of what is going on here first, rather than try and paper around
> > >> it. So please don't post more versions of this before we have that
> > >> understanding.
> > >>
> > >> See previous emails on 6.8 and other kernel versions.
> > >>
> > >> --
> > >> Jens Axboe
> > > The issue did not involve creating new worker processes. Instead, the
> > > existing IOU worker kernel threads (about a dozen) associated with the VM
> > > process were using 100% CPU without writing any data, because they kept
> > > faulting in fault_in_iov_iter_readable() while trying to pull the user
> > > data pages into kernel space.
> >
> > OK that makes more sense, I can certainly reproduce a loop in this path:
> >
> > iou-wrk-726 729 36.910071: 9737 cycles:P:
> > ffff800080456c44 handle_userfault+0x47c
> > ffff800080381fc0 hugetlb_fault+0xb68
> > ffff80008031fee4 handle_mm_fault+0x2fc
> > ffff8000812ada6c do_page_fault+0x1e4
> > ffff8000812ae024 do_translation_fault+0x9c
> > ffff800080049a9c do_mem_abort+0x44
> > ffff80008129bd78 el1_abort+0x38
> > ffff80008129ceb4 el1h_64_sync_handler+0xd4
> > ffff8000800112b4 el1h_64_sync+0x6c
> > ffff80008030984c fault_in_readable+0x74
> > ffff800080476f3c iomap_file_buffered_write+0x14c
> > ffff8000809b1230 blkdev_write_iter+0x1a8
> > ffff800080a1f378 io_write+0x188
> > ffff800080a14f30 io_issue_sqe+0x68
> > ffff800080a155d0 io_wq_submit_work+0xa8
> > ffff800080a32afc io_worker_handle_work+0x1f4
> > ffff800080a332b8 io_wq_worker+0x110
> > ffff80008002dd38 ret_from_fork+0x10
> >
> > which seems to be expected, we'd continually try and fault in the
> > ranges, if the userfaultfd handler isn't filling them.
> >
> > I guess this is where I'm still confused, because I don't see how this
> > is different from if you have a normal write(2) syscall doing the same
> > thing - you'd get the same looping.
> >
> > ??
> >
> > > This issue occurs during VM snapshot loading (which uses
> > > userfaultfd for on-demand memory loading), while a task in the guest is
> > > writing data to disk.
> > >
> > > Normally, the VM first triggers a user fault to fill the page table,
> > > so by the time the IOU worker thread runs, the page tables are already
> > > populated and no fault happens when faulting in memory pages
> > > in fault_in_iov_iter_readable().
> > >
> > > I suspect that during snapshot loading, a memory access in the
> > > VM triggers an async page fault handled by the kernel thread,
> > > while the IOU worker's async kernel thread is also running.
> > > Maybe the problem shows up if the IOU worker's thread is scheduled first.
> > > I'm going to bed now.
> >
> > Ah ok, so what you're saying is that because we end up not sleeping
> > (because a signal is pending, it seems), then the fault will never get
> > filled and hence progress not made? And the signal is pending because
> > someone tried to create a new worker, and this work is not getting
> > processed.
> >
> > --
> > Jens Axboe
> handle_userfault() {
> hugetlb_vma_lock_read();
> _raw_spin_lock_irq() {
> __pv_queued_spin_lock_slowpath();
> }
> vma_mmu_pagesize() {
> hugetlb_vm_op_pagesize();
> }
> huge_pte_offset();
> hugetlb_vma_unlock_read();
> up_read();
> __wake_up() {
> _raw_spin_lock_irqsave() {
> __pv_queued_spin_lock_slowpath();
> }
> __wake_up_common();
> _raw_spin_unlock_irqrestore();
> }
> schedule() {
> io_wq_worker_sleeping() {
> io_wq_dec_running();
> }
> rcu_note_context_switch();
> raw_spin_rq_lock_nested() {
> _raw_spin_lock();
> }
> update_rq_clock();
> pick_next_task() {
> pick_next_task_fair() {
> update_curr() {
> update_curr_se();
> __calc_delta.constprop.0();
> update_min_vruntime();
> }
> check_cfs_rq_runtime();
> pick_next_entity() {
> pick_eevdf();
> }
> update_curr() {
> update_curr_se();
> __calc_delta.constprop.0();
> update_min_vruntime();
> }
> check_cfs_rq_runtime();
> pick_next_entity() {
> pick_eevdf();
> }
> update_curr() {
> update_curr_se();
> update_min_vruntime();
> cpuacct_charge();
> __cgroup_account_cputime() {
> cgroup_rstat_updated();
> }
> }
> check_cfs_rq_runtime();
> pick_next_entity() {
> pick_eevdf();
> }
> }
> }
> raw_spin_rq_unlock();
> io_wq_worker_running();
> }
> _raw_spin_lock_irq() {
> __pv_queued_spin_lock_slowpath();
> }
> userfaultfd_ctx_put();
> }
> }
> The execution flow above is the one that kept faulting
> repeatedly in the IOU worker during the issue. The entire fault path,
> including this final userfault handling code you're seeing, would be
> triggered in an infinite loop. That's why I traced and found that the
> io_wq_worker_running() function returns early, causing the flow to
> differ from a normal user fault, where it should be sleeping.
>
> However, your call stack appears to behave normally,
> which makes me curious about what's different in the execution flow.
> Would you be able to share your test case code so I can study it
> and try to reproduce the behavior on my side?
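
In the meantime, here is a rough, untested sketch of what I imagine
such a reproducer could look like, based on your stack trace (hugetlb
memory registered with userfaultfd whose faults are never served, and
an io_uring buffered write sourced from that memory). The file path,
size and flags are placeholders, not your actual test case:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>
#include <liburing.h>

#define LEN (2UL * 1024 * 1024)             /* one 2 MB hugetlb page */

int main(void)
{
        /* hugetlb buffer that we never touch, so it is never faulted in */
        void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED)
                return 1;

        /* register the range with userfaultfd in MISSING mode and then
         * deliberately never read or serve the fault messages */
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API };
        struct uffdio_register reg = {
                .range = { .start = (unsigned long)buf, .len = LEN },
                .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
            ioctl(uffd, UFFDIO_REGISTER, &reg))
                return 1;

        /* buffered write whose source is the unfaulted range; IOSQE_ASYNC
         * should push it out to an iou-wrk thread instead of submitting
         * it inline */
        int fd = open("/tmp/uffd-iou-test", O_WRONLY | O_CREAT, 0644);
        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_write(sqe, fd, buf, LEN, 0);
        sqe->flags |= IOSQE_ASYNC;
        io_uring_submit(&ring);

        sleep(60);      /* watch the iou-wrk-* threads in top */
        return 0;
}

This needs hugetlb pages reserved (vm.nr_hugepages) and liburing, and
it writes to a plain file rather than a block device as in your trace,
so it may well not hit exactly the same path.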