From: "Maxim V. Patlasov" <mpatlasov@parallels.com>
To: lsf-pc@lists.linux-foundation.org
Cc: "linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"fuse-devel@lists.sourceforge.net"
<fuse-devel@lists.sourceforge.net>,
linux-mm@kvack.org
Subject: Re: [ATTEND][LSF/MM TOPIC] FUSE: write-back cache policy and other improvements
Date: Thu, 28 Feb 2013 16:19:10 +0400
Message-ID: <512F4B3E.6030409@parallels.com>
In-Reply-To: <511BAC51.4030309@parallels.com>
Adding linux-mm to cc:. One more point to discuss:
* balance_dirty_pages(): should we account NR_WRITEBACK_TEMP there?
Currently, any FUSE user may consume an arbitrary amount of RAM (stuck in
kernel FUSE writeback) by intensive writes to a huge mmap-ed area.
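As a rough userspace model of the point above (all names and numbers here are illustrative; the real logic lives in mm/page-writeback.c and is far more involved), accounting NR_WRITEBACK_TEMP would simply fold the FUSE temp pages into the total that balance_dirty_pages() compares against the dirty limit:

```c
/* Simplified model of the dirty-throttling check.  Illustrative only:
 * not actual kernel code. */
unsigned long dirty_total(unsigned long nr_dirty,
                          unsigned long nr_writeback,
                          unsigned long nr_writeback_temp,
                          int account_temp)
{
    unsigned long total = nr_dirty + nr_writeback;

    if (account_temp)
        total += nr_writeback_temp; /* proposed: count FUSE temp pages */
    return total;
}

/* The caller would block (throttle the writer) when the total exceeds
 * the dirty limit. */
int should_throttle(unsigned long total, unsigned long dirty_limit)
{
    return total > dirty_limit;
}
```

Without the temp pages in the sum, an mmap writer backed by FUSE never crosses the limit and is never throttled.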
On 02/13/2013 07:08 PM, Maxim V. Patlasov wrote:
> Hi,
>
> I'm interested in attending to discuss the latest advances in
> accelerating FUSE and making it more friendly to distributed
> file-systems. I'd like to propose and participate in the following
> discussions in the upcoming LSF/MM:
>
> * write-back cache policy: one of the problems with the existing FUSE
> implementation is that it uses a write-through cache policy, which
> results in performance problems on certain workloads. A good solution
> to this is switching the FUSE page cache to a write-back policy.
> With this, file data are pushed to userspace in big chunks, which
> lets the FUSE daemons handle requests in a more efficient manner.
>
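[Editor's sketch of the effect described above, as a toy userspace model
rather than kernel code: write-through sends one request per dirtied
page, while write-back coalesces contiguous dirty pages into one big
request.]

```c
#include <stddef.h>

/* One userspace WRITE request per written page (write-through). */
size_t requests_writethrough(const long *pages, size_t n)
{
    (void)pages;
    return n;
}

/* Contiguous dirty pages are coalesced into a single request
 * (write-back); a gap in page numbers starts a new request. */
size_t requests_writeback(const long *pages, size_t n)
{
    size_t reqs = 0;

    for (size_t i = 0; i < n; i++)
        if (i == 0 || pages[i] != pages[i - 1] + 1)
            reqs++;
    return reqs;
}
```

For a run of pages {0, 1, 2, 10, 11}, write-through costs five round
trips to the daemon where write-back costs two.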
> * optimize scatter-gather direct IO: dio performance can be improved
> significantly by stuffing many io-vectors into a single fuse request.
> This is especially the case for a device virtualization thread
> performing i/o on behalf of the virtual machine it serves.
>
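[Editor's note: the arithmetic behind the improvement is simple; the
constant below is made up for illustration, not a real kernel limit.]

```c
/* If a single fuse request can carry up to max_iov io-vectors, the
 * number of daemon round-trips for n_iov vectors drops from n_iov to
 * ceil(n_iov / max_iov).  FUSE_MAX_IOV is illustrative only. */
#define FUSE_MAX_IOV 256

unsigned long sg_requests(unsigned long n_iov, unsigned long max_iov)
{
    return (n_iov + max_iov - 1) / max_iov; /* ceiling division */
}
```

A virtualization thread submitting 1000 vectors goes from 1000 requests
to 4.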
> * process direct IO asynchronously: both AIO and ordinary synchronous
> direct IO can be boosted by submitting fuse requests in a non-blocking
> way (where possible) and either returning -EIOCBQUEUED or waiting
> for their completion synchronously.
>
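[Editor's sketch of the proposed submission path. EIOCBQUEUED is the
real kernel-internal errno for "iocb queued, completes later"; the
struct and function names are invented for the example, and the busy
wait stands in for sleeping on a waitqueue.]

```c
#define EIOCBQUEUED 529 /* kernel-internal: iocb queued, will complete later */

struct fuse_io {
    int async; /* nonzero for an AIO submission */
    int done;  /* completion flag, set by the daemon's reply */
};

/* Submit the dio request without blocking, then either return
 * -EIOCBQUEUED (AIO path) or wait for completion (sync path). */
int fuse_submit_dio(struct fuse_io *io)
{
    /* ...queue the request to the daemon here (elided)... */
    if (io->async)
        return -EIOCBQUEUED; /* completion delivered via aio_complete() */
    while (!io->done)
        ;                    /* real code would sleep, not spin */
    return 0;
}
```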
> * synchronous close(2): currently, in-kernel fuse sends a release
> request to userspace and returns without waiting for an ACK from
> userspace. Consequently, there is a gap during which the user regards
> the file as released while the userspace fuse daemon is still working
> on it. This leads to unnecessary synchronization complications for
> file-systems with shared access. That behaviour can be fixed by making
> close(2) synchronous.
>
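[Editor's model of the gap described above; names are invented and the
busy wait stands in for a waitqueue sleep.]

```c
struct release_req {
    int acked; /* set when the daemon answers the RELEASE request */
};

/* Current behaviour: fire-and-forget.  close(2) can return while the
 * daemon still holds the file open. */
int fuse_release_async(struct release_req *req)
{
    /* queue req to the daemon (elided) */
    (void)req;
    return 0; /* returns before req->acked is set */
}

/* Proposed behaviour: close(2) blocks until the daemon ACKs, so the
 * file really is released when the syscall returns. */
int fuse_release_sync(struct release_req *req)
{
    /* queue req to the daemon (elided) */
    while (!req->acked)
        ; /* real code would sleep, not spin */
    return 0;
}
```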
> * throttle request allocations: currently, in-kernel fuse throttles
> allocations of all fuse requests. Switching to a policy where only
> background requests are throttled would improve the latency of
> synchronous requests and resolve the thundering herd problem of waking
> up all threads blocked on fuse request allocation.
>
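[Editor's sketch of the proposed policy: only background requests count
against the limit, so synchronous allocations never wait.
max_background mirrors the real fuse tunable of that name; the rest is
illustrative.]

```c
struct fuse_conn_model {
    unsigned int num_background;
    unsigned int max_background;
};

/* Return 1 if the allocation may proceed now, 0 if the caller must
 * wait.  Synchronous requests are never made to wait, so they no
 * longer suffer the thundering-herd wakeup of a shared queue. */
int may_alloc_request(struct fuse_conn_model *fc, int background)
{
    if (!background)
        return 1; /* synchronous: allocate immediately */
    if (fc->num_background >= fc->max_background)
        return 0; /* background: throttled at the limit */
    fc->num_background++;
    return 1;
}
```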
> Thanks,
> Maxim
>
>
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org