* Re: [HELP] FUSE writeback performance bottleneck
[not found] ` <ffca9534-cb75-4dc6-9830-fe8e84db2413@linux.alibaba.com>
@ 2024-06-04 9:32 ` Bernd Schubert
2024-06-04 10:02 ` Miklos Szeredi
2024-06-04 12:24 ` Jingbo Xu
0 siblings, 2 replies; 14+ messages in thread
From: Bernd Schubert @ 2024-06-04 9:32 UTC (permalink / raw)
To: Jingbo Xu, Miklos Szeredi
Cc: linux-fsdevel, linux-kernel, lege.wang, Matthew Wilcox (Oracle),
linux-mm
On 6/4/24 09:36, Jingbo Xu wrote:
>
>
> On 6/4/24 3:27 PM, Miklos Szeredi wrote:
>> On Tue, 4 Jun 2024 at 03:57, Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>>
>>> IIUC, there are two sources that may cause deadlock:
>>> 1) the fuse server needs memory allocation when processing FUSE_WRITE
>>> requests, which in turn triggers direct memory reclaim, and FUSE
>>> writeback then - deadlock here
>>
>> Yep, see the folio_wait_writeback() call deep in the guts of direct
>> reclaim, which sleeps until the PG_writeback flag is cleared. If that
>> happens to be triggered by the writeback in question, then that's a
>> deadlock.
>>
>>> 2) a process that triggers direct memory reclaim or calls sync(2) may
>>> hang there forever, if the fuse server is buggy or malicious and thus
>>> hangs there when processing FUSE_WRITE requests
>>
>> Ah, yes, sync(2) is also an interesting case. We don't want unpriv
>> fuse servers to be able to block sync(2), which means that sync(2)
>> won't actually guarantee a synchronization of fuse's dirty pages. I
>> don't think there's even a theoretical solution to that, but
>> apparently nobody cares...
>
> Okay, if the temp page design is unavoidable, then I don't know if there
> is any approach (in the FUSE or VFS layer) that helps offload the page copy.
> At least we don't want the writeback performance to be limited by the
> single writeback kworker. This was also the initial motivation of this thread.
>
Offloading it to another thread is just a workaround, though maybe an
acceptable temporary solution.
Back to the background for the copy: it copies pages to avoid blocking
on memory reclaim. With that allocation it in fact increases memory
pressure even more. Isn't the right solution to mark those pages as not
reclaimable and to avoid blocking on them? Which is what the tmp pages
do, just not in a beautiful way.
Thanks,
Bernd
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 9:32 ` [HELP] FUSE writeback performance bottleneck Bernd Schubert
@ 2024-06-04 10:02 ` Miklos Szeredi
2024-06-04 14:13 ` Bernd Schubert
` (2 more replies)
2024-06-04 12:24 ` Jingbo Xu
1 sibling, 3 replies; 14+ messages in thread
From: Miklos Szeredi @ 2024-06-04 10:02 UTC (permalink / raw)
To: Bernd Schubert
Cc: Jingbo Xu, linux-fsdevel, linux-kernel, lege.wang,
Matthew Wilcox (Oracle),
linux-mm
On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
> Back to the background for the copy, so it copies pages to avoid
> blocking on memory reclaim. With that allocation it in fact increases
> memory pressure even more. Isn't the right solution to mark those pages
> as not reclaimable and to avoid blocking on it? Which is what the tmp
> pages do, just not in beautiful way.
Copying to the tmp page is the same as marking the pages as
non-reclaimable and non-syncable.
Conceptually it would be nice to only copy when there's something
actually waiting for writeback on the page.
Note: normally the WRITE request would be copied to userspace along
with the contents of the pages very soon after starting writeback.
After this the contents of the page no longer matter, and we can just
clear writeback without doing the copy.
But if the request gets stuck in the input queue before being copied
to userspace, then deadlock can still happen if the server blocks on
direct reclaim and won't continue with processing the queue. And
sync(2) will also block in that case.
So we'd somehow need to handle stuck WRITE requests. I don't see an
easy way to do this "on demand", when something actually starts
waiting on PG_writeback. Alternatively the page copy could be done
after a timeout, which is ugly, but much easier to implement.
Also splice from the fuse dev would need to copy those pages, but that
shouldn't be a problem, since it's just moving the copy from one place
to another.
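To make the timeout variant a bit more concrete, here is a rough sketch
of the idea (not the in-tree fuse code; the request layout and the
helpers req_copied_to_userspace()/fuse_end_writeback() are invented for
illustration): writeback is started without any copy, and a delayed work
only allocates tmp pages if the server has not read the request within
some deadline.

    /* Sketch only: defer the tmp-page copy until a timeout expires. */
    #include <linux/workqueue.h>
    #include <linux/jiffies.h>
    #include <linux/highmem.h>
    #include <linux/gfp.h>

    #define WB_COPY_TIMEOUT_MS  10000

    struct wb_req {
        struct page **pages;      /* page cache pages under writeback */
        struct page **tmp_pages;  /* filled only if we hit the timeout */
        unsigned int num_pages;
        struct delayed_work copy_work;
    };

    /* Timer fired: the server still hasn't read the request, so detach
     * the page cache pages by copying them, which allows PG_writeback
     * to be cleared on the originals. */
    static void wb_req_copy_after_timeout(struct work_struct *work)
    {
        struct wb_req *req = container_of(to_delayed_work(work),
                                          struct wb_req, copy_work);
        unsigned int i;

        if (req_copied_to_userspace(req))  /* hypothetical check */
            return;

        for (i = 0; i < req->num_pages; i++) {
            struct page *tmp = alloc_page(GFP_NOFS | __GFP_NOFAIL);

            copy_highpage(tmp, req->pages[i]);
            req->tmp_pages[i] = tmp;
        }
        fuse_end_writeback(req);  /* hypothetical: clears PG_writeback */
    }

    /* Called when writeback is started: no copy, just arm the timer. */
    static void wb_req_submit(struct wb_req *req)
    {
        INIT_DELAYED_WORK(&req->copy_work, wb_req_copy_after_timeout);
        schedule_delayed_work(&req->copy_work,
                              msecs_to_jiffies(WB_COPY_TIMEOUT_MS));
    }

The obvious catch is that the timeout path itself has to allocate pages
while the system is already under memory pressure.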
Thanks,
Miklos
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 9:32 ` [HELP] FUSE writeback performance bottleneck Bernd Schubert
2024-06-04 10:02 ` Miklos Szeredi
@ 2024-06-04 12:24 ` Jingbo Xu
1 sibling, 0 replies; 14+ messages in thread
From: Jingbo Xu @ 2024-06-04 12:24 UTC (permalink / raw)
To: Bernd Schubert, Miklos Szeredi
Cc: linux-fsdevel, linux-kernel, lege.wang, Matthew Wilcox (Oracle),
linux-mm
On 6/4/24 5:32 PM, Bernd Schubert wrote:
>
>
> On 6/4/24 09:36, Jingbo Xu wrote:
>>
>>
>> On 6/4/24 3:27 PM, Miklos Szeredi wrote:
>>> On Tue, 4 Jun 2024 at 03:57, Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>>>
>>>> IIUC, there are two sources that may cause deadlock:
>>>> 1) the fuse server needs memory allocation when processing FUSE_WRITE
>>>> requests, which in turn triggers direct memory reclaim, and FUSE
>>>> writeback then - deadlock here
>>>
>>> Yep, see the folio_wait_writeback() call deep in the guts of direct
>>> reclaim, which sleeps until the PG_writeback flag is cleared. If that
>>> happens to be triggered by the writeback in question, then that's a
>>> deadlock.
>>>
>>>> 2) a process that triggers direct memory reclaim or calls sync(2) may
>>>> hang there forever, if the fuse server is buggy or malicious and thus
>>>> hangs there when processing FUSE_WRITE requests
>>>
>>> Ah, yes, sync(2) is also an interesting case. We don't want unpriv
>>> fuse servers to be able to block sync(2), which means that sync(2)
>>> won't actually guarantee a synchronization of fuse's dirty pages. I
>>> don't think there's even a theoretical solution to that, but
>>> apparently nobody cares...
>>
>> Okay if the temp page design is unavoidable, then I don't know if there
>> is any approach (in FUSE or VFS layer) helps page copy offloading. At
>> least we don't want the writeback performance to be limited by the
>> single writeback kworker. This is also the initial attempt of this thread.
>>
>
> Offloading it to another thread is just a workaround, though maybe a
> temporary solution.
If we could break the limit of only a single (writeback) kworker per
bdi... Apparently it's much more complicated. Just a brainstorming
idea...
I agree it's a tough thing.
--
Thanks,
Jingbo
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 10:02 ` Miklos Szeredi
@ 2024-06-04 14:13 ` Bernd Schubert
2024-06-04 16:53 ` Josef Bacik
2024-08-22 17:00 ` Joanne Koong
2024-08-23 3:34 ` Jingbo Xu
2 siblings, 1 reply; 14+ messages in thread
From: Bernd Schubert @ 2024-06-04 14:13 UTC (permalink / raw)
To: Miklos Szeredi
Cc: Jingbo Xu, linux-fsdevel, linux-kernel, lege.wang,
Matthew Wilcox (Oracle),
linux-mm
On 6/4/24 12:02, Miklos Szeredi wrote:
> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
>
>> Back to the background for the copy, so it copies pages to avoid
>> blocking on memory reclaim. With that allocation it in fact increases
>> memory pressure even more. Isn't the right solution to mark those pages
>> as not reclaimable and to avoid blocking on it? Which is what the tmp
>> pages do, just not in beautiful way.
>
> Copying to the tmp page is the same as marking the pages as
> non-reclaimable and non-syncable.
>
> Conceptually it would be nice to only copy when there's something
> actually waiting for writeback on the page.
>
> Note: normally the WRITE request would be copied to userspace along
> with the contents of the pages very soon after starting writeback.
> After this the contents of the page no longer matter, and we can just
> clear writeback without doing the copy.
>
> But if the request gets stuck in the input queue before being copied
> to userspace, then deadlock can still happen if the server blocks on
> direct reclaim and won't continue with processing the queue. And
> sync(2) will also block in that case.
>
> So we'd somehow need to handle stuck WRITE requests. I don't see an
> easy way to do this "on demand", when something actually starts
> waiting on PG_writeback. Alternatively the page copy could be done
> after a timeout, which is ugly, but much easier to implement.
I think the timeout method would only work if we have already allocated
the pages; under memory pressure page allocation might not work well.
But then this still seems to be a workaround, because we don't take any
less memory with these copied pages.
I'm going to look into mm/ to see if there isn't a better solution.
>
> Also splice from the fuse dev would need to copy those pages, but that
> shouldn't be a problem, since it's just moving the copy from one place
> to another.
Ok, at least I need to keep an eye on it so that it doesn't break when
I write a patch.
Thanks,
Bernd
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 14:13 ` Bernd Schubert
@ 2024-06-04 16:53 ` Josef Bacik
2024-06-04 21:39 ` Bernd Schubert
0 siblings, 1 reply; 14+ messages in thread
From: Josef Bacik @ 2024-06-04 16:53 UTC (permalink / raw)
To: Bernd Schubert
Cc: Miklos Szeredi, Jingbo Xu, linux-fsdevel, linux-kernel,
lege.wang, Matthew Wilcox (Oracle),
linux-mm
On Tue, Jun 04, 2024 at 04:13:25PM +0200, Bernd Schubert wrote:
>
>
> On 6/4/24 12:02, Miklos Szeredi wrote:
> > On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
> >
> >> Back to the background for the copy, so it copies pages to avoid
> >> blocking on memory reclaim. With that allocation it in fact increases
> >> memory pressure even more. Isn't the right solution to mark those pages
> >> as not reclaimable and to avoid blocking on it? Which is what the tmp
> >> pages do, just not in beautiful way.
> >
> > Copying to the tmp page is the same as marking the pages as
> > non-reclaimable and non-syncable.
> >
> > Conceptually it would be nice to only copy when there's something
> > actually waiting for writeback on the page.
> >
> > Note: normally the WRITE request would be copied to userspace along
> > with the contents of the pages very soon after starting writeback.
> > After this the contents of the page no longer matter, and we can just
> > clear writeback without doing the copy.
> >
> > But if the request gets stuck in the input queue before being copied
> > to userspace, then deadlock can still happen if the server blocks on
> > direct reclaim and won't continue with processing the queue. And
> > sync(2) will also block in that case.
> >
> > So we'd somehow need to handle stuck WRITE requests. I don't see an
> > easy way to do this "on demand", when something actually starts
> > waiting on PG_writeback. Alternatively the page copy could be done
> > after a timeout, which is ugly, but much easier to implement.
>
> I think the timeout method would only work if we have already allocated
> the pages, under memory pressure page allocation might not work well.
> But then this still seems to be a workaround, because we don't take any
> less memory with these copied pages.
> I'm going to look into mm/ if there isn't a better solution.
I've thought a bit about this, and I still don't have a good solution, so I'm
going to throw out my random thoughts and see if it helps us get to a good spot.
1. Generally we are moving away from GFP_NOFS/GFP_NOIO to instead use
memalloc_*_save/memalloc_*_restore, so instead the process is marked as being in
these contexts. We could do something similar for FUSE, tho this gets hairy
with things that async off request handling to other threads (which is all of
the FUSE file systems we have internally). We'd need to have some way to
apply this to an entire process group, but this could be a workable solution.
2. Per-request timeouts. This is something we're planning on tackling for other
reasons, but it could fit nicely here to say "if this fuse fs has a
per-request timeout, skip the copy". That way we at least have an upper
bound on how long we would be "deadlocked". I don't love this approach
because it's still a deadlock until the timeout elapsed, but it's an idea.
3. Since we're limiting writeout per the BDI, we could just say FUSE is special,
only one memory reclaim related writeout at a time. We flag when we're doing
a write via memory reclaim, and then if we try to trigger writeout via memory
reclaim again we simply reject it to avoid the deadlock. This has the
downside that non-fuse things which happen to trigger direct reclaim
through FUSE will end up reclaiming something else, and if the dirty
pages from FUSE are the ones causing the problem we could spin for a
while evicting pages that we don't care about and thrashing a bit.
As I said all of these have downsides, I think #1 is probably the most workable,
but I haven't thought about it super thoroughly. Thanks,
Josef
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 16:53 ` Josef Bacik
@ 2024-06-04 21:39 ` Bernd Schubert
2024-06-04 22:16 ` Josef Bacik
0 siblings, 1 reply; 14+ messages in thread
From: Bernd Schubert @ 2024-06-04 21:39 UTC (permalink / raw)
To: Josef Bacik
Cc: Miklos Szeredi, Jingbo Xu, linux-fsdevel, linux-kernel,
lege.wang, Matthew Wilcox (Oracle),
linux-mm
On 6/4/24 18:53, Josef Bacik wrote:
> On Tue, Jun 04, 2024 at 04:13:25PM +0200, Bernd Schubert wrote:
>>
>>
>> On 6/4/24 12:02, Miklos Szeredi wrote:
>>> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
>>>
>>>> Back to the background for the copy, so it copies pages to avoid
>>>> blocking on memory reclaim. With that allocation it in fact increases
>>>> memory pressure even more. Isn't the right solution to mark those pages
>>>> as not reclaimable and to avoid blocking on it? Which is what the tmp
>>>> pages do, just not in beautiful way.
>>>
>>> Copying to the tmp page is the same as marking the pages as
>>> non-reclaimable and non-syncable.
>>>
>>> Conceptually it would be nice to only copy when there's something
>>> actually waiting for writeback on the page.
>>>
>>> Note: normally the WRITE request would be copied to userspace along
>>> with the contents of the pages very soon after starting writeback.
>>> After this the contents of the page no longer matter, and we can just
>>> clear writeback without doing the copy.
>>>
>>> But if the request gets stuck in the input queue before being copied
>>> to userspace, then deadlock can still happen if the server blocks on
>>> direct reclaim and won't continue with processing the queue. And
>>> sync(2) will also block in that case.
>>>
>>> So we'd somehow need to handle stuck WRITE requests. I don't see an
>>> easy way to do this "on demand", when something actually starts
>>> waiting on PG_writeback. Alternatively the page copy could be done
>>> after a timeout, which is ugly, but much easier to implement.
>>
>> I think the timeout method would only work if we have already allocated
>> the pages, under memory pressure page allocation might not work well.
>> But then this still seems to be a workaround, because we don't take any
>> less memory with these copied pages.
>> I'm going to look into mm/ if there isn't a better solution.
>
> I've thought a bit about this, and I still don't have a good solution, so I'm
> going to throw out my random thoughts and see if it helps us get to a good spot.
>
> 1. Generally we are moving away from GFP_NOFS/GFP_NOIO to instead use
> memalloc_*_save/memalloc_*_restore, so instead the process is marked being in
> these contexts. We could do something similar for FUSE, tho this gets hairy
> with things that async off request handling to other threads (which is all of
> the FUSE file systems we have internally). We'd need to have some way to
> apply this to an entire process group, but this could be a workable solution.
>
I'm not sure how either of those (GFP_ and memalloc_) would work for
userspace allocations.
Wouldn't we basically need to have a feature to disable memory
allocations for fuse userspace tasks? Hmm, maybe through mem_cgroup.
Although even then, the file system might depend on other kernel
resources (backend file system or block device or even network) that
might do allocations on their own without the knowledge of the fuse server.
> 2. Per-request timeouts. This is something we're planning on tackling for other
> reasons, but it could fit nicely here to say "if this fuse fs has a
> per-request timeout, skip the copy". That way we at least know we're upper
> bound on how long we would be "deadlocked". I don't love this approach
> because it's still a deadlock until the timeout elapsed, but it's an idea.
Hmm, how do we know "this fuse fs has a per-request timeout"? I don't
think we could trust initialization flags set by userspace.
>
> 3. Since we're limiting writeout per the BDI, we could just say FUSE is special,
> only one memory reclaim related writeout at a time. We flag when we're doing
> a write via memory reclaim, and then if we try to trigger writeout via memory
> reclaim again we simply reject it to avoid the deadlock. This has the
> downside of making it so non-fuse related things that may be triggering
> direct reclaim through FUSE means they'll reclaim something else, and if the
> dirty pages from FUSE are the ones causing the problem we could spin a bunch
> evicting pages that we don't care about and thrashing a bit.
Isn't that what we have right now? Reclaim basically ignores fuse tmp pages.
Thanks,
Bernd
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 21:39 ` Bernd Schubert
@ 2024-06-04 22:16 ` Josef Bacik
2024-06-05 5:49 ` Amir Goldstein
0 siblings, 1 reply; 14+ messages in thread
From: Josef Bacik @ 2024-06-04 22:16 UTC (permalink / raw)
To: Bernd Schubert
Cc: Miklos Szeredi, Jingbo Xu, linux-fsdevel, linux-kernel,
lege.wang, Matthew Wilcox (Oracle),
linux-mm
On Tue, Jun 04, 2024 at 11:39:17PM +0200, Bernd Schubert wrote:
>
>
> On 6/4/24 18:53, Josef Bacik wrote:
> > On Tue, Jun 04, 2024 at 04:13:25PM +0200, Bernd Schubert wrote:
> >>
> >>
> >> On 6/4/24 12:02, Miklos Szeredi wrote:
> >>> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
> >>>
> >>>> Back to the background for the copy, so it copies pages to avoid
> >>>> blocking on memory reclaim. With that allocation it in fact increases
> >>>> memory pressure even more. Isn't the right solution to mark those pages
> >>>> as not reclaimable and to avoid blocking on it? Which is what the tmp
> >>>> pages do, just not in beautiful way.
> >>>
> >>> Copying to the tmp page is the same as marking the pages as
> >>> non-reclaimable and non-syncable.
> >>>
> >>> Conceptually it would be nice to only copy when there's something
> >>> actually waiting for writeback on the page.
> >>>
> >>> Note: normally the WRITE request would be copied to userspace along
> >>> with the contents of the pages very soon after starting writeback.
> >>> After this the contents of the page no longer matter, and we can just
> >>> clear writeback without doing the copy.
> >>>
> >>> But if the request gets stuck in the input queue before being copied
> >>> to userspace, then deadlock can still happen if the server blocks on
> >>> direct reclaim and won't continue with processing the queue. And
> >>> sync(2) will also block in that case.
> >>>
> >>> So we'd somehow need to handle stuck WRITE requests. I don't see an
> >>> easy way to do this "on demand", when something actually starts
> >>> waiting on PG_writeback. Alternatively the page copy could be done
> >>> after a timeout, which is ugly, but much easier to implement.
> >>
> >> I think the timeout method would only work if we have already allocated
> >> the pages, under memory pressure page allocation might not work well.
> >> But then this still seems to be a workaround, because we don't take any
> >> less memory with these copied pages.
> >> I'm going to look into mm/ if there isn't a better solution.
> >
> > I've thought a bit about this, and I still don't have a good solution, so I'm
> > going to throw out my random thoughts and see if it helps us get to a good spot.
> >
> > 1. Generally we are moving away from GFP_NOFS/GFP_NOIO to instead use
> > memalloc_*_save/memalloc_*_restore, so instead the process is marked being in
> > these contexts. We could do something similar for FUSE, tho this gets hairy
> > with things that async off request handling to other threads (which is all of
> > the FUSE file systems we have internally). We'd need to have some way to
> > apply this to an entire process group, but this could be a workable solution.
> >
>
> I'm not sure how either of of both (GFP_ and memalloc_) would work for
> userspace allocations.
> Wouldn't we basically need to have a feature to disable memory
> allocations for fuse userspace tasks? Hmm, maybe through mem_cgroup.
> Although even then, the file system might depend on other kernel
> resources (backend file system or block device or even network) that
> might do allocations on their own without the knowledge of the fuse server.
>
Basically, only in the case that we're handling a request from memory
pressure would we invoke this, and then any allocation would automatically
have gfp_nofs protection because it's flagged at the task level.
Again there are a lot of problems with this, like how we set it for the
task, how it works for threads, etc.
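For reference, the kernel-side form of that task-level flagging already
exists as the memalloc scope API; roughly like the sketch below, where
the handler name and the call it wraps are invented, and where extending
this to cover the userspace server's own allocations is exactly the open
question:

    #include <linux/sched/mm.h>

    static int handle_write_under_reclaim(void *req)  /* hypothetical */
    {
        unsigned int nofs_flags;
        int err;

        /* Every allocation inside this scope implicitly behaves as if
         * GFP_NOFS had been passed, so it cannot recurse into
         * filesystem writeback. */
        nofs_flags = memalloc_nofs_save();
        err = process_write_request(req);  /* hypothetical */
        memalloc_nofs_restore(nofs_flags);

        return err;
    }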
> > 2. Per-request timeouts. This is something we're planning on tackling for other
> > reasons, but it could fit nicely here to say "if this fuse fs has a
> > per-request timeout, skip the copy". That way we at least know we're upper
> > bound on how long we would be "deadlocked". I don't love this approach
> > because it's still a deadlock until the timeout elapsed, but it's an idea.
>
> Hmm, how do we know "this fuse fs has a per-request timeout"? I don't
> think we could trust initialization flags set by userspace.
>
It would be controlled by the kernel. So at init time the fuse file system says
"my command timeout is 30 minutes." Then the kernel enforces this by having a
per-request timeout, and once that 30 minutes elapses we cancel the request and
EIO it. User space doesn't do anything beyond telling the kernel what its
timeout is, so this would be safe.
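A very rough sketch of what the kernel-side enforcement could look like
(all names here are hypothetical and the real thing would hang off the
existing fuse request lifecycle; the point is only that userspace
supplies a number and the kernel does the rest):

    #include <linux/timer.h>
    #include <linux/jiffies.h>
    #include <linux/errno.h>

    struct fuse_req_stub {          /* stand-in for the real request */
        struct timer_list timeout;
        int error;
    };

    static void fuse_req_timeout_fn(struct timer_list *t)
    {
        struct fuse_req_stub *req = from_timer(req, t, timeout);

        req->error = -EIO;
        /* hypothetical: dequeue the request and complete it with the error */
        fuse_req_abort(req);
    }

    /* Armed when the request is queued; 'secs' is whatever the server
     * declared at FUSE_INIT time. */
    static void fuse_req_arm_timeout(struct fuse_req_stub *req,
                                     unsigned long secs)
    {
        timer_setup(&req->timeout, fuse_req_timeout_fn, 0);
        mod_timer(&req->timeout, jiffies + secs * HZ);
    }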
> >
> > 3. Since we're limiting writeout per the BDI, we could just say FUSE is special,
> > only one memory reclaim related writeout at a time. We flag when we're doing
> > a write via memory reclaim, and then if we try to trigger writeout via memory
> > reclaim again we simply reject it to avoid the deadlock. This has the
> > downside of making it so non-fuse related things that may be triggering
> > direct reclaim through FUSE means they'll reclaim something else, and if the
> > dirty pages from FUSE are the ones causing the problem we could spin a bunch
> > evicting pages that we don't care about and thrashing a bit.
>
>
> Isn't that what we have right now? Reclaim basically ignores fuse tmp pages.
Yes, but I'd extend it to no longer have tmp pages and tie it to the BDI
instead; my goal is to get rid of all the excess copying. Thanks,
Josef
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 22:16 ` Josef Bacik
@ 2024-06-05 5:49 ` Amir Goldstein
2024-06-05 15:35 ` Josef Bacik
0 siblings, 1 reply; 14+ messages in thread
From: Amir Goldstein @ 2024-06-05 5:49 UTC (permalink / raw)
To: Josef Bacik
Cc: Bernd Schubert, Miklos Szeredi, Jingbo Xu, linux-fsdevel,
linux-kernel, lege.wang, Matthew Wilcox (Oracle),
linux-mm
On Wed, Jun 5, 2024 at 1:17 AM Josef Bacik <josef@toxicpanda.com> wrote:
>
> On Tue, Jun 04, 2024 at 11:39:17PM +0200, Bernd Schubert wrote:
> >
> >
> > On 6/4/24 18:53, Josef Bacik wrote:
> > > On Tue, Jun 04, 2024 at 04:13:25PM +0200, Bernd Schubert wrote:
> > >>
> > >>
> > >> On 6/4/24 12:02, Miklos Szeredi wrote:
> > >>> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
> > >>>
> > >>>> Back to the background for the copy, so it copies pages to avoid
> > >>>> blocking on memory reclaim. With that allocation it in fact increases
> > >>>> memory pressure even more. Isn't the right solution to mark those pages
> > >>>> as not reclaimable and to avoid blocking on it? Which is what the tmp
> > >>>> pages do, just not in beautiful way.
> > >>>
> > >>> Copying to the tmp page is the same as marking the pages as
> > >>> non-reclaimable and non-syncable.
> > >>>
> > >>> Conceptually it would be nice to only copy when there's something
> > >>> actually waiting for writeback on the page.
> > >>>
> > >>> Note: normally the WRITE request would be copied to userspace along
> > >>> with the contents of the pages very soon after starting writeback.
> > >>> After this the contents of the page no longer matter, and we can just
> > >>> clear writeback without doing the copy.
> > >>>
> > >>> But if the request gets stuck in the input queue before being copied
> > >>> to userspace, then deadlock can still happen if the server blocks on
> > >>> direct reclaim and won't continue with processing the queue. And
> > >>> sync(2) will also block in that case.
> > >>>
> > >>> So we'd somehow need to handle stuck WRITE requests. I don't see an
> > >>> easy way to do this "on demand", when something actually starts
> > >>> waiting on PG_writeback. Alternatively the page copy could be done
> > >>> after a timeout, which is ugly, but much easier to implement.
> > >>
> > >> I think the timeout method would only work if we have already allocated
> > >> the pages, under memory pressure page allocation might not work well.
> > >> But then this still seems to be a workaround, because we don't take any
> > >> less memory with these copied pages.
> > >> I'm going to look into mm/ if there isn't a better solution.
> > >
> > > I've thought a bit about this, and I still don't have a good solution, so I'm
> > > going to throw out my random thoughts and see if it helps us get to a good spot.
> > >
> > > 1. Generally we are moving away from GFP_NOFS/GFP_NOIO to instead use
> > > memalloc_*_save/memalloc_*_restore, so instead the process is marked being in
> > > these contexts. We could do something similar for FUSE, tho this gets hairy
> > > with things that async off request handling to other threads (which is all of
> > > the FUSE file systems we have internally). We'd need to have some way to
> > > apply this to an entire process group, but this could be a workable solution.
> > >
> >
> > I'm not sure how either of of both (GFP_ and memalloc_) would work for
> > userspace allocations.
> > Wouldn't we basically need to have a feature to disable memory
> > allocations for fuse userspace tasks? Hmm, maybe through mem_cgroup.
> > Although even then, the file system might depend on other kernel
> > resources (backend file system or block device or even network) that
> > might do allocations on their own without the knowledge of the fuse server.
> >
>
> Basically that only in the case that we're handling a request from memory
> pressure we would invoke this, and then any allocation would automatically have
> gfp_nofs protection because it's flagged at the task level.
>
> Again there's a lot of problems with this, like how do we set it for the task,
> how does it work for threads etc.
>
> > > 2. Per-request timeouts. This is something we're planning on tackling for other
> > > reasons, but it could fit nicely here to say "if this fuse fs has a
> > > per-request timeout, skip the copy". That way we at least know we're upper
> > > bound on how long we would be "deadlocked". I don't love this approach
> > > because it's still a deadlock until the timeout elapsed, but it's an idea.
> >
> > Hmm, how do we know "this fuse fs has a per-request timeout"? I don't
> > think we could trust initialization flags set by userspace.
> >
>
> It would be controlled by the kernel. So at init time the fuse file system says
> "my command timeout is 30 minutes." Then the kernel enforces this by having a
> per-request timeout, and once that 30 minutes elapses we cancel the request and
> EIO it. User space doesn't do anything beyond telling the kernel what it's
> timeout is, so this would be safe.
>
Maybe that would be better configured by the mounter, similar to nfs -o timeo,
and maybe consider opting in to returning ETIMEDOUT in this case.
At least nfsd will pass that error to the nfs client and the nfs client will retry.
Different applications (or network protocols) handle timeouts differently,
so the timeout and error seem like a decision for the admin/mounter, not
for the fuse server, although there may be a fuse fs that would want to
set the default timeout, as if to request the kernel to be its watchdog
(i.e. do not expect me to take more than 30 min to handle any request).
Thanks,
Amir.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-05 5:49 ` Amir Goldstein
@ 2024-06-05 15:35 ` Josef Bacik
0 siblings, 0 replies; 14+ messages in thread
From: Josef Bacik @ 2024-06-05 15:35 UTC (permalink / raw)
To: Amir Goldstein
Cc: Bernd Schubert, Miklos Szeredi, Jingbo Xu, linux-fsdevel,
linux-kernel, lege.wang, Matthew Wilcox (Oracle),
linux-mm
On Wed, Jun 05, 2024 at 08:49:48AM +0300, Amir Goldstein wrote:
> On Wed, Jun 5, 2024 at 1:17 AM Josef Bacik <josef@toxicpanda.com> wrote:
> >
> > On Tue, Jun 04, 2024 at 11:39:17PM +0200, Bernd Schubert wrote:
> > >
> > >
> > > On 6/4/24 18:53, Josef Bacik wrote:
> > > > On Tue, Jun 04, 2024 at 04:13:25PM +0200, Bernd Schubert wrote:
> > > >>
> > > >>
> > > >> On 6/4/24 12:02, Miklos Szeredi wrote:
> > > >>> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
> > > >>>
> > > >>>> Back to the background for the copy, so it copies pages to avoid
> > > >>>> blocking on memory reclaim. With that allocation it in fact increases
> > > >>>> memory pressure even more. Isn't the right solution to mark those pages
> > > >>>> as not reclaimable and to avoid blocking on it? Which is what the tmp
> > > >>>> pages do, just not in beautiful way.
> > > >>>
> > > >>> Copying to the tmp page is the same as marking the pages as
> > > >>> non-reclaimable and non-syncable.
> > > >>>
> > > >>> Conceptually it would be nice to only copy when there's something
> > > >>> actually waiting for writeback on the page.
> > > >>>
> > > >>> Note: normally the WRITE request would be copied to userspace along
> > > >>> with the contents of the pages very soon after starting writeback.
> > > >>> After this the contents of the page no longer matter, and we can just
> > > >>> clear writeback without doing the copy.
> > > >>>
> > > >>> But if the request gets stuck in the input queue before being copied
> > > >>> to userspace, then deadlock can still happen if the server blocks on
> > > >>> direct reclaim and won't continue with processing the queue. And
> > > >>> sync(2) will also block in that case.
> > > >>>
> > > >>> So we'd somehow need to handle stuck WRITE requests. I don't see an
> > > >>> easy way to do this "on demand", when something actually starts
> > > >>> waiting on PG_writeback. Alternatively the page copy could be done
> > > >>> after a timeout, which is ugly, but much easier to implement.
> > > >>
> > > >> I think the timeout method would only work if we have already allocated
> > > >> the pages, under memory pressure page allocation might not work well.
> > > >> But then this still seems to be a workaround, because we don't take any
> > > >> less memory with these copied pages.
> > > >> I'm going to look into mm/ if there isn't a better solution.
> > > >
> > > > I've thought a bit about this, and I still don't have a good solution, so I'm
> > > > going to throw out my random thoughts and see if it helps us get to a good spot.
> > > >
> > > > 1. Generally we are moving away from GFP_NOFS/GFP_NOIO to instead use
> > > > memalloc_*_save/memalloc_*_restore, so instead the process is marked being in
> > > > these contexts. We could do something similar for FUSE, tho this gets hairy
> > > > with things that async off request handling to other threads (which is all of
> > > > the FUSE file systems we have internally). We'd need to have some way to
> > > > apply this to an entire process group, but this could be a workable solution.
> > > >
> > >
> > > I'm not sure how either of of both (GFP_ and memalloc_) would work for
> > > userspace allocations.
> > > Wouldn't we basically need to have a feature to disable memory
> > > allocations for fuse userspace tasks? Hmm, maybe through mem_cgroup.
> > > Although even then, the file system might depend on other kernel
> > > resources (backend file system or block device or even network) that
> > > might do allocations on their own without the knowledge of the fuse server.
> > >
> >
> > Basically that only in the case that we're handling a request from memory
> > pressure we would invoke this, and then any allocation would automatically have
> > gfp_nofs protection because it's flagged at the task level.
> >
> > Again there's a lot of problems with this, like how do we set it for the task,
> > how does it work for threads etc.
> >
> > > > 2. Per-request timeouts. This is something we're planning on tackling for other
> > > > reasons, but it could fit nicely here to say "if this fuse fs has a
> > > > per-request timeout, skip the copy". That way we at least know we're upper
> > > > bound on how long we would be "deadlocked". I don't love this approach
> > > > because it's still a deadlock until the timeout elapsed, but it's an idea.
> > >
> > > Hmm, how do we know "this fuse fs has a per-request timeout"? I don't
> > > think we could trust initialization flags set by userspace.
> > >
> >
> > It would be controlled by the kernel. So at init time the fuse file system says
> > "my command timeout is 30 minutes." Then the kernel enforces this by having a
> > per-request timeout, and once that 30 minutes elapses we cancel the request and
> > EIO it. User space doesn't do anything beyond telling the kernel what it's
> > timeout is, so this would be safe.
> >
>
> Maybe that would be better to configure by mounter, similar to nfs -otimeo
> and maybe consider opt-in to returning ETIMEDOUT in this case.
> At least nfsd will pass that error to nfs client and nfs client will retry.
>
> Different applications (or network protocols) handle timeouts differently,
> so the timeout and error seems like a decision for the admin/mounter not
> for the fuse server, although there may be a fuse fs that would want to
> set the default timeout, as if to request the kernel to be its watchdog
> (i.e. do not expect me to take more than 30 min to handle any request).
Oh yeah, for sure. I'm just saying that for the purposes of allowing the FUSE
daemon to be a little riskier with system resources, we base it off of whether
it opts in to command timeouts.
My plan is to have it settable by the fuse daemon, or externally by a
sysadmin via sysfs. Thanks,
Josef
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 10:02 ` Miklos Szeredi
2024-06-04 14:13 ` Bernd Schubert
@ 2024-08-22 17:00 ` Joanne Koong
2024-08-22 21:01 ` Joanne Koong
2024-08-23 3:34 ` Jingbo Xu
2 siblings, 1 reply; 14+ messages in thread
From: Joanne Koong @ 2024-08-22 17:00 UTC (permalink / raw)
To: Miklos Szeredi
Cc: Bernd Schubert, Jingbo Xu, linux-fsdevel, linux-kernel,
lege.wang, Matthew Wilcox (Oracle),
linux-mm
On Tue, Jun 4, 2024 at 3:02 AM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
>
> > Back to the background for the copy, so it copies pages to avoid
> > blocking on memory reclaim. With that allocation it in fact increases
> > memory pressure even more. Isn't the right solution to mark those pages
> > as not reclaimable and to avoid blocking on it? Which is what the tmp
> > pages do, just not in beautiful way.
>
> Copying to the tmp page is the same as marking the pages as
> non-reclaimable and non-syncable.
>
> Conceptually it would be nice to only copy when there's something
> actually waiting for writeback on the page.
>
> Note: normally the WRITE request would be copied to userspace along
> with the contents of the pages very soon after starting writeback.
> After this the contents of the page no longer matter, and we can just
> clear writeback without doing the copy.
>
> But if the request gets stuck in the input queue before being copied
> to userspace, then deadlock can still happen if the server blocks on
> direct reclaim and won't continue with processing the queue. And
> sync(2) will also block in that case.
Why doesn't it suffice to just check if the page is being reclaimed
and do the tmp page allocation only if it's under reclaim?
>
> So we'd somehow need to handle stuck WRITE requests. I don't see an
> easy way to do this "on demand", when something actually starts
> waiting on PG_writeback. Alternatively the page copy could be done
> after a timeout, which is ugly, but much easier to implement.
>
> Also splice from the fuse dev would need to copy those pages, but that
> shouldn't be a problem, since it's just moving the copy from one place
> to another.
>
> Thanks,
> Miklos
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-08-22 17:00 ` Joanne Koong
@ 2024-08-22 21:01 ` Joanne Koong
0 siblings, 0 replies; 14+ messages in thread
From: Joanne Koong @ 2024-08-22 21:01 UTC (permalink / raw)
To: Miklos Szeredi
Cc: Bernd Schubert, Jingbo Xu, linux-fsdevel, linux-kernel,
lege.wang, Matthew Wilcox (Oracle),
linux-mm
On Thu, Aug 22, 2024 at 10:00 AM Joanne Koong <joannelkoong@gmail.com> wrote:
>
> On Tue, Jun 4, 2024 at 3:02 AM Miklos Szeredi <miklos@szeredi.hu> wrote:
> >
> > On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
> >
> > > Back to the background for the copy, so it copies pages to avoid
> > > blocking on memory reclaim. With that allocation it in fact increases
> > > memory pressure even more. Isn't the right solution to mark those pages
> > > as not reclaimable and to avoid blocking on it? Which is what the tmp
> > > pages do, just not in beautiful way.
> >
> > Copying to the tmp page is the same as marking the pages as
> > non-reclaimable and non-syncable.
> >
> > Conceptually it would be nice to only copy when there's something
> > actually waiting for writeback on the page.
> >
> > Note: normally the WRITE request would be copied to userspace along
> > with the contents of the pages very soon after starting writeback.
> > After this the contents of the page no longer matter, and we can just
> > clear writeback without doing the copy.
> >
> > But if the request gets stuck in the input queue before being copied
> > to userspace, then deadlock can still happen if the server blocks on
> > direct reclaim and won't continue with processing the queue. And
> > sync(2) will also block in that case.
>
> Why doesn't it suffice to just check if the page is being reclaimed
> and do the tmp page allocation only if it's under reclaim?
Never mind, Josef explained it to me. I misunderstood what the
PG_reclaim flag does.
>
> >
> > So we'd somehow need to handle stuck WRITE requests. I don't see an
> > easy way to do this "on demand", when something actually starts
> > waiting on PG_writeback. Alternatively the page copy could be done
> > after a timeout, which is ugly, but much easier to implement.
> >
> > Also splice from the fuse dev would need to copy those pages, but that
> > shouldn't be a problem, since it's just moving the copy from one place
> > to another.
> >
> > Thanks,
> > Miklos
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-06-04 10:02 ` Miklos Szeredi
2024-06-04 14:13 ` Bernd Schubert
2024-08-22 17:00 ` Joanne Koong
@ 2024-08-23 3:34 ` Jingbo Xu
2024-09-13 0:00 ` Joanne Koong
2 siblings, 1 reply; 14+ messages in thread
From: Jingbo Xu @ 2024-08-23 3:34 UTC (permalink / raw)
To: Miklos Szeredi, Bernd Schubert
Cc: linux-fsdevel, linux-kernel, lege.wang, Matthew Wilcox (Oracle),
linux-mm
On 6/4/24 6:02 PM, Miklos Szeredi wrote:
> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
>
>> Back to the background for the copy, so it copies pages to avoid
>> blocking on memory reclaim. With that allocation it in fact increases
>> memory pressure even more. Isn't the right solution to mark those pages
>> as not reclaimable and to avoid blocking on it? Which is what the tmp
>> pages do, just not in beautiful way.
>
> Copying to the tmp page is the same as marking the pages as
> non-reclaimable and non-syncable.
>
> Conceptually it would be nice to only copy when there's something
> actually waiting for writeback on the page.
>
> Note: normally the WRITE request would be copied to userspace along
> with the contents of the pages very soon after starting writeback.
> After this the contents of the page no longer matter, and we can just
> clear writeback without doing the copy.
OK, this really deviates from my previous understanding of the deadlock
issue. Previously I thought that *after* the server has received the WRITE
request, i.e. has copied the request and page content to userspace, the
server needs to allocate some memory to handle the WRITE request, e.g. to
make the data persistent on disk, or to send the data to the remote
storage. It is the memory allocation at this point that actually
triggers direct memory reclaim (on the FUSE dirty pages) and causes a
deadlock. It seems that I misunderstood it.
If that's true, we can clear PG_writeback as soon as the whole request
along with the page content has been copied to userspace, and thus
eliminate the tmp page copying.
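In code terms the idea would be roughly the following sketch (both
helper names are invented; this is not the current fuse code):

    struct fuse_req_stub;  /* stand-in for the real write request */

    static int fuse_dev_copy_write_req(struct fuse_req_stub *req)
    {
        int err;

        /* Copy the WRITE header plus the page contents to the server. */
        err = fuse_copy_pages_to_server(req);  /* hypothetical */
        if (err)
            return err;

        /*
         * The kernel no longer needs the page contents; end writeback
         * right here so that reclaim and sync(2) stop waiting on these
         * folios, without ever allocating tmp pages.
         */
        fuse_writepages_end_folios(req);  /* hypothetical */
        return 0;
    }

The remaining problem, as discussed below, is the request that never
reaches this copy-out step because the server is stuck.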
>
> But if the request gets stuck in the input queue before being copied
> to userspace, then deadlock can still happen if the server blocks on
> direct reclaim and won't continue with processing the queue. And
> sync(2) will also block in that case.
>
Hi, Miklos,
Would you please give more details on how "the request can get stuck in
the input queue before being copied to userspace"? Do you mean the WRITE
requests (submitted from writeback) are still pending in the
background/pending list, waiting to be processed by the server, while at
the same time the server is blocked from processing the queue, either
because it is blocked on direct reclaim (when handling *another*
request), or because it's a malicious server that refuses to process any
request?
--
Thanks,
Jingbo
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-08-23 3:34 ` Jingbo Xu
@ 2024-09-13 0:00 ` Joanne Koong
2024-09-13 1:25 ` Jingbo Xu
0 siblings, 1 reply; 14+ messages in thread
From: Joanne Koong @ 2024-09-13 0:00 UTC (permalink / raw)
To: Jingbo Xu
Cc: Miklos Szeredi, Bernd Schubert, linux-fsdevel, linux-kernel,
lege.wang, Matthew Wilcox (Oracle),
linux-mm
On Thu, Aug 22, 2024 at 8:34 PM Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>
> On 6/4/24 6:02 PM, Miklos Szeredi wrote:
> > On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
> >
> >> Back to the background for the copy, so it copies pages to avoid
> >> blocking on memory reclaim. With that allocation it in fact increases
> >> memory pressure even more. Isn't the right solution to mark those pages
> >> as not reclaimable and to avoid blocking on it? Which is what the tmp
> >> pages do, just not in beautiful way.
> >
> > Copying to the tmp page is the same as marking the pages as
> > non-reclaimable and non-syncable.
> >
> > Conceptually it would be nice to only copy when there's something
> > actually waiting for writeback on the page.
> >
> > Note: normally the WRITE request would be copied to userspace along
> > with the contents of the pages very soon after starting writeback.
> > After this the contents of the page no longer matter, and we can just
> > clear writeback without doing the copy.
>
> OK this really deviates from my previous understanding of the deadlock
> issue. Previously I thought *after* the server has received the WRITE
> request, i.e. has copied the request and page content to userspace, the
> server needs to allocate some memory to handle the WRITE request, e.g.
> make the data persistent on disk, or send the data to the remote
> storage. It is the memory allocation at this point that actually
> triggers a memory direct reclaim (on the FUSE dirty page) and causes a
> deadlock. It seems that I misunderstand it.
I think your previous understanding is correct (or if not, then my
understanding of this is incorrect too lol).
The first write request makes it to userspace, and while the server is
in the middle of handling it, memory reclaim is triggered and pages need
to be written back. This leads to a SECOND write request (e.g. writing
back the reclaimed pages), but this second write request will never be
copied out to userspace: the server is stuck handling the first write
request, waiting for the writeback bits of the reclaimed pages to be
cleared, but those bits can only be cleared once the pages have been
copied out to userspace, which only happens when the server reads
/dev/fuse for the next request.
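To make the single-threaded case concrete, a minimal event loop against
the raw /dev/fuse protocol looks roughly like the sketch below (error
handling and the reply path are omitted and the buffer sizing is
simplified). While the handler for the first WRITE is blocked inside an
allocation that entered direct reclaim, the loop never gets back to
read(), so the second WRITE is never copied out:

    /* Illustration only -- not a complete or correct fuse server. */
    #include <linux/fuse.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void serve(int fuse_fd)
    {
        /* A real server must size this to max_write plus headers. */
        static char buf[1 << 20];

        for (;;) {
            /* Blocks until the kernel queues the next request. */
            ssize_t len = read(fuse_fd, buf, sizeof(buf));
            struct fuse_in_header *in = (struct fuse_in_header *)buf;

            if (len <= 0)
                break;

            if (in->opcode == FUSE_WRITE) {
                /*
                 * If this allocation (or anything else in the handler)
                 * ends up in direct reclaim and the kernel waits on a
                 * fuse page under writeback, we never return to read()
                 * the WRITE request that would let that writeback
                 * finish: deadlock.
                 */
                void *scratch = malloc(1 << 20);
                /* ... handle the write, send the reply ... */
                free(scratch);
            }
        }
    }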
>
> If that's true, we can clear PF_writeback as long as the whole request
> along with the page content has already been copied to userspace, and
> thus eliminate the tmp page copying.
>
I think the problem is that on a single-threaded server, the pages
will not be copied out to userspace for the second request (aka
writing back the dirty reclaimed pages) since the server is stuck on
the first request.
> >
> > But if the request gets stuck in the input queue before being copied
> > to userspace, then deadlock can still happen if the server blocks on
> > direct reclaim and won't continue with processing the queue. And
> > sync(2) will also block in that case.
> >
>
> Hi, Miklos,
>
> Would you please give more details on how "the request can get stuck in
> the input queue before being copied userspace"? Do you mean the WRITE
> requests (submitted from writeback) are still pending in the
> background/pending list, waiting to be processed by the server, while at
> the same time the server gets blocked from processing the queue, either
> due to the server is blocked on direct reclaim (when handling *another*
> request), or it's a malicious server and refuses to process any request?
>
>
> --
> Thanks,
> Jingbo
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [HELP] FUSE writeback performance bottleneck
2024-09-13 0:00 ` Joanne Koong
@ 2024-09-13 1:25 ` Jingbo Xu
0 siblings, 0 replies; 14+ messages in thread
From: Jingbo Xu @ 2024-09-13 1:25 UTC (permalink / raw)
To: Joanne Koong
Cc: Miklos Szeredi, Bernd Schubert, linux-fsdevel, linux-kernel,
lege.wang, Matthew Wilcox (Oracle),
linux-mm
On 9/13/24 8:00 AM, Joanne Koong wrote:
> On Thu, Aug 22, 2024 at 8:34 PM Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>>
>> On 6/4/24 6:02 PM, Miklos Szeredi wrote:
>>> On Tue, 4 Jun 2024 at 11:32, Bernd Schubert <bernd.schubert@fastmail.fm> wrote:
>>>
>>>> Back to the background for the copy, so it copies pages to avoid
>>>> blocking on memory reclaim. With that allocation it in fact increases
>>>> memory pressure even more. Isn't the right solution to mark those pages
>>>> as not reclaimable and to avoid blocking on it? Which is what the tmp
>>>> pages do, just not in beautiful way.
>>>
>>> Copying to the tmp page is the same as marking the pages as
>>> non-reclaimable and non-syncable.
>>>
>>> Conceptually it would be nice to only copy when there's something
>>> actually waiting for writeback on the page.
>>>
>>> Note: normally the WRITE request would be copied to userspace along
>>> with the contents of the pages very soon after starting writeback.
>>> After this the contents of the page no longer matter, and we can just
>>> clear writeback without doing the copy.
>>
>> OK this really deviates from my previous understanding of the deadlock
>> issue. Previously I thought *after* the server has received the WRITE
>> request, i.e. has copied the request and page content to userspace, the
>> server needs to allocate some memory to handle the WRITE request, e.g.
>> make the data persistent on disk, or send the data to the remote
>> storage. It is the memory allocation at this point that actually
>> triggers a memory direct reclaim (on the FUSE dirty page) and causes a
>> deadlock. It seems that I misunderstand it.
>
> I think your previous understanding is correct (or if not, then my
> understanding of this is incorrect too lol).
> The first write request makes it to userspace and when the server is
> in the middle of handling it, a memory reclaim is triggered where
> pages need to be written back. This leads to a SECOND write request
> (eg writing back the pages that are reclaimed) but this second write
> request will never be copied out to userspace because the server is
> stuck handling the first write request and waiting for the page
> reclaim bits of the reclaimed pages to be unset, but those reclaim
> bits can only be unset when the pages have been copied out to
> userspace, which only happens when the server reads /dev/fuse for the
> next request.
Right, that's true.
>
>>
>> If that's true, we can clear PF_writeback as long as the whole request
>> along with the page content has already been copied to userspace, and
>> thus eliminate the tmp page copying.
>>
>
> I think the problem is that on a single-threaded server, the pages
> will not be copied out to userspace for the second request (aka
> writing back the dirty reclaimed pages) since the server is stuck on
> the first request.
Agreed.
--
Thanks,
Jingbo
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread
Thread overview: 14+ messages
[not found] <495d2400-1d96-4924-99d3-8b2952e05fc3@linux.alibaba.com>
[not found] ` <67771830-977f-4fca-9d0b-0126abf120a5@fastmail.fm>
[not found] ` <CAJfpeguts=V9KkBsMJN_WfdkLHPzB6RswGvumVHUMJ87zOAbDQ@mail.gmail.com>
[not found] ` <bd49fcba-3eb6-4e84-a0f0-e73bce31ddb2@linux.alibaba.com>
[not found] ` <CAJfpegsfF77SV96wvaxn9VnRkNt5FKCnA4mJ0ieFsZtwFeRuYw@mail.gmail.com>
[not found] ` <ffca9534-cb75-4dc6-9830-fe8e84db2413@linux.alibaba.com>
2024-06-04 9:32 ` [HELP] FUSE writeback performance bottleneck Bernd Schubert
2024-06-04 10:02 ` Miklos Szeredi
2024-06-04 14:13 ` Bernd Schubert
2024-06-04 16:53 ` Josef Bacik
2024-06-04 21:39 ` Bernd Schubert
2024-06-04 22:16 ` Josef Bacik
2024-06-05 5:49 ` Amir Goldstein
2024-06-05 15:35 ` Josef Bacik
2024-08-22 17:00 ` Joanne Koong
2024-08-22 21:01 ` Joanne Koong
2024-08-23 3:34 ` Jingbo Xu
2024-09-13 0:00 ` Joanne Koong
2024-09-13 1:25 ` Jingbo Xu
2024-06-04 12:24 ` Jingbo Xu