From: Yongji Xie <xieyongji@bytedance.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
Eugenio Perez Martin <eperezma@redhat.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
virtualization@lists.linux.dev,
linux-kernel <linux-kernel@vger.kernel.org>,
21cnbao@gmail.com, penguin-kernel@i-love.sakura.ne.jp,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Subject: Re: Re: Re: [PATCH] vduse: avoid using __GFP_NOFAIL
Date: Mon, 12 Aug 2024 15:21:29 +0800
Message-ID: <CACycT3tE8qb0+X6O27e0KAevA230f8bzqeXh45fHNQOB6a5agA@mail.gmail.com>
In-Reply-To: <CACGkMEsYRrzSzNHgN490TDCWFm3EG1ic_-f4F+mu9CNz4uY=iw@mail.gmail.com>
On Mon, Aug 12, 2024 at 3:00 PM Jason Wang <jasowang@redhat.com> wrote:
>
> On Thu, Aug 8, 2024 at 6:52 PM Yongji Xie <xieyongji@bytedance.com> wrote:
> >
> > On Thu, Aug 8, 2024 at 10:58 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Wed, Aug 7, 2024 at 2:52 PM Yongji Xie <xieyongji@bytedance.com> wrote:
> > > >
> > > > On Mon, Aug 5, 2024 at 4:21 PM Jason Wang <jasowang@redhat.com> wrote:
> > > > >
> > > > > Barry said [1]:
> > > > >
> > > > > """
> > > > > mm doesn't support non-blockable __GFP_NOFAIL allocation. Because
> > > > > __GFP_NOFAIL without direct reclamation may just result in a busy
> > > > > loop within non-sleepable contexts.
> > > > > ""“
> > > > >
> > > > > Unfortunately, we do that under a read lock. A possible way to fix that
> > > > > is to move the page allocation out of the lock into the caller, but
> > > > > having to allocate a huge number of pages and an auxiliary page array
> > > > > seems to be problematic as well, per Tetsuo [2]:
> > > > >
> > > > > """
> > > > > You should implement proper error handling instead of using
> > > > > __GFP_NOFAIL if count can become large.
> > > > > """
> > > > >
> > > > > So I chose another way, which does not release the kernel bounce pages
> > > > > when the user tries to register userspace bounce pages. Then we don't
> > > > > need to allocate in the paths that are not expected to fail (e.g. in
> > > > > the release path). We pay for this with higher memory usage, but
> > > > > further optimizations could be done on top.
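> > > > >
> > > > > A rough sketch of the resulting flow (field and helper names here are
> > > > > illustrative assumptions, not necessarily the actual patch):
> > > > >
> > > > >     /* register: copy the current contents into the user page, but keep
> > > > >      * the kernel bounce page allocated instead of dropping it */
> > > > >     memcpy_to_page(user_page, 0, page_address(map->bounce_page), PAGE_SIZE);
> > > > >     map->user_bounce_page = user_page;
> > > > >
> > > > >     /* de-register: copy back into the kernel page we kept, so nothing
> > > > >      * needs to be allocated in this path */
> > > > >     memcpy_from_page(page_address(map->bounce_page), map->user_bounce_page,
> > > > >                      0, PAGE_SIZE);
> > > > >     put_page(map->user_bounce_page);
> > > > >     map->user_bounce_page = NULL;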
> > > > >
> > > > > [1] https://lore.kernel.org/all/CACGkMEtcOJAA96SF9B8m-nZ1X04-XZr+nq8ZQ2saLnUdfOGOLg@mail.gmail.com/T/#m3caef86a66ea6318ef94f9976ddb3a0ccfe6fcf8
> > > > > [2] https://lore.kernel.org/all/CACGkMEtcOJAA96SF9B8m-nZ1X04-XZr+nq8ZQ2saLnUdfOGOLg@mail.gmail.com/T/#m7ad10eaba48ade5abf2d572f24e185d9fb146480
> > > > >
> > > > > Fixes: 6c77ed22880d ("vduse: Support using userspace pages as bounce buffer")
> > > > > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > > > ---
> > > >
> > > > Reviewed-by: Xie Yongji <xieyongji@bytedance.com>
> > > > Tested-by: Xie Yongji <xieyongji@bytedance.com>
> > >
> > > Thanks.
> > >
> > > >
> > > > I have tested it with qemu-storage-daemon [1]:
> > > >
> > > > $ qemu-storage-daemon \
> > > > --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server=on,wait=off \
> > > > --monitor chardev=charmonitor \
> > > > --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
> > > > --export type=vduse-blk,id=vduse-test,name=vduse-test,node-name=disk0,writable=on
> > > >
> > > > [1] https://github.com/bytedance/qemu/tree/vduse-umem
> > >
> > > Great, would you want to post them to the Qemu?
> > >
> >
> > It looks like qemu-storage-daemon would not benefit from this feature,
> > which is designed for hugepage users such as SPDK/DPDK.
>
> Yes, but maybe for testing purposes like here?
>
OK for me.
Thanks,
Yongji
Thread overview: 11+ messages
2024-08-05 8:21 Jason Wang
2024-08-05 8:23 ` Jason Wang
2024-08-05 10:42 ` Yongji Xie
2024-08-06 2:28 ` Jason Wang
2024-08-06 3:10 ` Yongji Xie
[not found] ` <CACGkMEue9RU+MMgOC0t4Yuk5wRHfTdnJeZZs38g2h+gyZv+3VQ@mail.gmail.com>
[not found] ` <CACycT3sHT-izwAKzxAWPbqGFgyf82WxkHHOrp1SjWa+HE01mCg@mail.gmail.com>
[not found] ` <CACGkMEvsMQS-5Oy7rTyA5a2u1xYRf0beBHbZ16geHJCZTE0jLw@mail.gmail.com>
[not found] ` <CACycT3sfUhz1PjK3Q=pA7GEm7=fsL0XT16ccwCQ2m2LF+TTD7Q@mail.gmail.com>
[not found] ` <CACGkMEu+RrD2JdO=F9BySwhVY5uPr6kKWWdkcdG4XX6GN5b=Bg@mail.gmail.com>
[not found] ` <CACycT3u-v+XkWzSPq39Mk9sdQftuNZvZqZyzDvhTecH3uyuk8w@mail.gmail.com>
2024-08-12 6:59 ` Jason Wang
2024-08-05 8:25 ` Michael S. Tsirkin
2024-08-06 2:26 ` Jason Wang
2024-08-06 2:30 ` Barry Song
[not found] ` <CACycT3uM1jSdqFT0LGqy1zXZkWF8BNQN=8EMKYMoyP_wjRtsng@mail.gmail.com>
[not found] ` <CACGkMEtYE1OY+okxHAj=cVfW-Qz45an28oO=Wv15yOtpD6UqdQ@mail.gmail.com>
[not found] ` <CACycT3vAv1K0yBKgc_8GBLpEPwASTCCPZYAxMyUROQsyntQdOw@mail.gmail.com>
2024-08-12 7:00 ` Jason Wang
2024-08-12 7:21 ` Yongji Xie [this message]