From: Jens Axboe <axboe@kernel.dk>
To: 姜智伟 <qq282012236@gmail.com>
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	akpm@linux-foundation.org, peterx@redhat.com,
	asml.silence@gmail.com, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	io-uring@vger.kernel.org
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault scenarios
Date: Tue, 22 Apr 2025 11:33:41 -0600	[thread overview]
Message-ID: <b61ac651-fafe-449a-82ed-7239123844e1@kernel.dk> (raw)
In-Reply-To: <CANHzP_uW4+-M1yTg-GPdPzYWAmvqP5vh6+s1uBhrMZ3eBusLug@mail.gmail.com>

On 4/22/25 11:04 AM, 姜智伟 wrote:
> On Wed, Apr 23, 2025 at 12:32 AM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> On 4/22/25 10:29 AM, Zhiwei Jiang wrote:
>>> diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
>>> index d4fb2940e435..8567a9c819db 100644
>>> --- a/io_uring/io-wq.h
>>> +++ b/io_uring/io-wq.h
>>> @@ -70,8 +70,10 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
>>>                                       void *data, bool cancel_all);
>>>
>>>  #if defined(CONFIG_IO_WQ)
>>> -extern void io_wq_worker_sleeping(struct task_struct *);
>>> -extern void io_wq_worker_running(struct task_struct *);
>>> +extern void io_wq_worker_sleeping(struct task_struct *tsk);
>>> +extern void io_wq_worker_running(struct task_struct *tsk);
>>> +extern void set_userfault_flag_for_ioworker(void);
>>> +extern void clear_userfault_flag_for_ioworker(void);
>>>  #else
>>>  static inline void io_wq_worker_sleeping(struct task_struct *tsk)
>>>  {
>>> @@ -79,6 +81,12 @@ static inline void io_wq_worker_sleeping(struct task_struct *tsk)
>>>  static inline void io_wq_worker_running(struct task_struct *tsk)
>>>  {
>>>  }
>>> +static inline void set_userfault_flag_for_ioworker(void)
>>> +{
>>> +}
>>> +static inline void clear_userfault_flag_for_ioworker(void)
>>> +{
>>> +}
>>>  #endif
>>>
>>>  static inline bool io_wq_current_is_worker(void)
>>
>> This should go in include/linux/io_uring.h and then userfaultfd would
>> not have to include io_uring private headers.
>>
>> But that's beside the point, like I said we still need to get to the
>> bottom of what is going on here first, rather than try and paper around
>> it. So please don't post more versions of this before we have that
>> understanding.
>>
>> See previous emails on 6.8 and other kernel versions.
>>
>> --
>> Jens Axboe
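
As a concrete illustration of the include/linux/io_uring.h suggestion
quoted above, the declarations could live in the public header roughly
like this (a sketch only: the CONFIG_IO_URING guard is an assumption,
the function names come from the patch itself):

/* Sketch for include/linux/io_uring.h; guard choice is an assumption */
#if defined(CONFIG_IO_URING)
void set_userfault_flag_for_ioworker(void);
void clear_userfault_flag_for_ioworker(void);
#else
static inline void set_userfault_flag_for_ioworker(void)
{
}
static inline void clear_userfault_flag_for_ioworker(void)
{
}
#endif

That way fs/userfaultfd.c would only need <linux/io_uring.h> and never
touch io_uring's private headers.
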
> The issue did not involve creating new worker threads. Instead, the
> existing IOU worker kernel threads (about a dozen) associated with the
> VM process were spinning at 100% CPU without writing any data, because
> fault_in_iov_iter_readable() kept faulting while pulling the user data
> pages into kernel space.

OK that makes more sense, I can certainly reproduce a loop in this path:

iou-wrk-726     729    36.910071:       9737 cycles:P: 
        ffff800080456c44 handle_userfault+0x47c
        ffff800080381fc0 hugetlb_fault+0xb68
        ffff80008031fee4 handle_mm_fault+0x2fc
        ffff8000812ada6c do_page_fault+0x1e4
        ffff8000812ae024 do_translation_fault+0x9c
        ffff800080049a9c do_mem_abort+0x44
        ffff80008129bd78 el1_abort+0x38
        ffff80008129ceb4 el1h_64_sync_handler+0xd4
        ffff8000800112b4 el1h_64_sync+0x6c
        ffff80008030984c fault_in_readable+0x74
        ffff800080476f3c iomap_file_buffered_write+0x14c
        ffff8000809b1230 blkdev_write_iter+0x1a8
        ffff800080a1f378 io_write+0x188
        ffff800080a14f30 io_issue_sqe+0x68
        ffff800080a155d0 io_wq_submit_work+0xa8
        ffff800080a32afc io_worker_handle_work+0x1f4
        ffff800080a332b8 io_wq_worker+0x110
        ffff80008002dd38 ret_from_fork+0x10

which seems to be expected - we'd continually try to fault in the
ranges if the userfaultfd handler isn't filling them.

I guess this is where I'm still confused, because I don't see how this
is different from a normal write(2) syscall doing the same thing -
you'd get the same looping.

??
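
In case it helps to pin the scenario down, here is a rough, untested
userspace sketch of the setup being described (liburing assumed; the
file name, buffer size, and IOSQE_ASYNC forcing are assumptions of the
sketch, not taken from the report). Whether the worker then spins as in
the trace above or simply sleeps in handle_userfault() depends on the
signal/worker-creation conditions discussed further down:

/*
 * Rough, untested repro sketch: an anonymous buffer is registered with
 * userfaultfd in MISSING mode, nobody ever services the faults, and an
 * io_uring buffered write from that buffer is forced to an io-wq
 * worker with IOSQE_ASYNC.  Build with -luring.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	int uffd, fd, ret;

	/* Stand-in for guest memory the snapshot load hasn't filled yet */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0) {
		perror("userfaultfd");
		return 1;
	}

	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)buf, .len = len },
		.mode  = UFFDIO_REGISTER_MODE_MISSING,
	};
	if (ioctl(uffd, UFFDIO_API, &api) ||
	    ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		perror("uffd ioctl");
		return 1;
	}

	fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return 1;
	}

	/* Force the write to an io-wq worker so the page fault on the
	 * registered range happens in iou-wrk context, as in the report */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, len, 0);
	sqe->flags |= IOSQE_ASYNC;
	io_uring_submit(&ring);

	/* No thread reads the uffd or issues UFFDIO_COPY, so the fault in
	 * the worker is never resolved; watch iou-wrk-* in top/perf */
	pause();
	return 0;
}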

> This issue occurs during VM snapshot loading (which uses
> userfaultfd for on-demand memory loading), while a task in the guest
> is writing data to disk.
> 
> Normally, the VM triggers a user fault first, which fills the page
> tables. So by the time the IOU worker thread faults in memory pages
> in fault_in_iov_iter_readable, the page tables are already populated
> and no fault occurs.
> 
> I suspect that during snapshot loading, a memory access in the
> VM triggers an async page fault handled by a kernel thread, while the
> IOU worker's async kernel thread is also running. The problem may show
> up if the IOU worker's thread happens to be scheduled first.
> I'm going to bed now.

Ah ok, so what you're saying is that because we end up not sleeping
(because a signal is pending, it seems), the fault will never get
filled and hence no progress is made? And the signal is pending because
someone tried to create a new worker, and that work is not getting
processed.
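
Restating that as a much-simplified control-flow sketch (illustrative
names only, e.g. page_is_resident() is made up; this is not the actual
fs/userfaultfd.c or fault-in code):

/* Illustration of the suspected interaction, not real kernel code */
while (!page_is_resident(addr)) {
	/* fault_in_readable() faults on the missing page and ends up
	 * in handle_userfault() */
	if (signal_pending(current)) {
		/* io-wq worker with a pending (worker-creation)
		 * signal: the userfault wait bails out instead of
		 * sleeping and the fault returns VM_FAULT_RETRY */
		continue;	/* caller immediately faults in again */
	}
	/* normal case: sleep until UFFDIO_COPY fills the page and
	 * wakes this waiter */
	schedule();
}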

-- 
Jens Axboe



Thread overview: 25+ messages
2025-04-22 16:29 [PATCH v2 0/2] Fix 100% CPU usage issue in IOU worker threads Zhiwei Jiang
2025-04-22 16:29 ` [PATCH v2 1/2] io_uring: Add new functions to handle user fault scenarios Zhiwei Jiang
2025-04-22 16:32   ` Jens Axboe
2025-04-22 17:04     ` 姜智伟
2025-04-22 17:33       ` Jens Axboe [this message]
2025-04-23  2:49         ` 姜智伟
2025-04-23  3:11           ` 姜智伟
2025-04-23  6:22             ` 姜智伟
2025-04-23 13:34           ` Jens Axboe
2025-04-23 14:29             ` 姜智伟
2025-04-23 15:10               ` Jens Axboe
2025-04-23 18:55                 ` Jens Axboe
2025-04-23 15:55             ` Jens Axboe
2025-04-23 16:07               ` 姜智伟
2025-04-23 16:17               ` Pavel Begunkov
2025-04-23 16:23                 ` Jens Axboe
2025-04-23 22:57               ` Jens Axboe
2025-04-24 14:08                 ` 姜智伟
2025-04-24 14:13                   ` Jens Axboe
2025-04-24 14:45                     ` 姜智伟
2025-04-24 14:52                       ` Jens Axboe
2025-04-24 15:12                         ` 姜智伟
2025-04-24 15:21                           ` Jens Axboe
2025-04-24 15:51                             ` 姜智伟
2025-04-22 16:29 ` [PATCH v2 2/2] userfaultfd: Set the corresponding flag in IOU worker context Zhiwei Jiang
