From: Dave Chinner <david@fromorbit.com>
To: Kent Overstreet <kent.overstreet@linux.dev>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>,
Matthew Wilcox <willy@infradead.org>,
Christoph Hellwig <hch@infradead.org>,
ksummit@lists.linux.dev, linux-fsdevel@vger.kernel.org
Subject: Re: [MAINTAINERS/KERNEL SUMMIT] Trust and maintenance of file systems
Date: Mon, 11 Sep 2023 12:07:07 +1000
Message-ID: <ZP52S8jPsNt0IvQE@dread.disaster.area>
In-Reply-To: <20230911012914.xoeowcbruxxonw7u@moria.home.lan>
On Sun, Sep 10, 2023 at 09:29:14PM -0400, Kent Overstreet wrote:
> On Mon, Sep 11, 2023 at 11:05:09AM +1000, Dave Chinner wrote:
> > On Sat, Sep 09, 2023 at 06:42:30PM -0400, Kent Overstreet wrote:
> > > On Sat, Sep 09, 2023 at 08:50:39AM -0400, James Bottomley wrote:
> > > > So why can't we figure out that easier way? What's wrong with trying to
> > > > figure out if we can do some sort of helper or library set that assists
> > > > supporting and porting older filesystems. If we can do that it will not
> > > > only make the job of an old fs maintainer a lot easier, but it might
> > > > just provide the stepping stones we need to encourage more people climb
> > > > up into the modern VFS world.
> > >
> > > What if we could run our existing filesystem code in userspace?
> >
> > You mean like lklfuse already enables?
>
> I'm not seeing that it does?
>
> I just had a look at the code, and I don't see anything there related to
> the VFS - AFAIK, a VFS -> fuse layer doesn't exist yet.
Just to repeat what I said on #xfs here...
It doesn't try to cut in halfway through the VFS -> filesystem
path. It just redirects the fuse operations to "lkl syscalls" and so
runs the entire kernel VFS -> filesystem path.
https://github.com/lkl/linux/blob/master/tools/lkl/lklfuse.c
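
To make that concrete: each fuse operation in lklfuse is just a thin
shim around the corresponding LKL syscall wrapper. A minimal sketch of
the pattern (see lklfuse.c for the real thing; the lkl_sys_*()
functions are LKL's autogenerated syscall wrappers, the
lklfuse_sketch_*() names are mine, and error-code translation is
glossed over):

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <lkl.h>
#include <lkl_host.h>

/* Open via LKL: pathname resolution happens in the in-kernel VFS. */
static int lklfuse_sketch_open(const char *path, struct fuse_file_info *fi)
{
	long fd = lkl_sys_open(path, LKL_O_RDONLY, 0);

	if (fd < 0)
		return fd;	/* assumes LKL errnos mirror Linux errnos */
	fi->fh = fd;
	return 0;
}

/* Read via LKL: this traverses the whole kernel VFS -> filesystem ->
 * block layer path inside LKL before returning to the fuse daemon. */
static int lklfuse_sketch_read(const char *path, char *buf, size_t size,
			       off_t off, struct fuse_file_info *fi)
{
	return lkl_sys_pread64(fi->fh, buf, size, off);
}

From the user's side it's just a fuse mount of the image - something
like "lklfuse -o type=xfs disk.img /mnt", modulo the exact option
syntax.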
> And that looks a lot heavier than what we'd ideally want, i.e. a _lot_
> more kernel code would be getting pulled in. The entire block layer,
> probably the scheduler as well.
Yes, but arguing that "performance sucks" misses the entire point of
this discussion: for untrusted user mounts of untrusted filesystem
images we already have a viable method of moving the dangerous
processing out into userspace, one that requires almost *zero
additional work* from anyone.
As long as the performance of the lklfuse implementation doesn't
totally suck, nobody will really care that it isn't quite as fast as
a native implementation. Pluggable drives (e.g. via USB) are already
going to be much slower than a host-installed drive, so I don't
think performance is even really a consideration for these sorts of
use cases....
> What I've got in bcachefs-tools is a much thinner mapping from e.g.
> kthreads -> pthreads, block layer -> aio, etc.
Right, and we've got that in userspace for XFS, too. If we really
cared that much about XFS-FUSE, I'd be converting userspace to use
ublk w/ io_uring on top of a port of the kernel XFS buffer cache as
the basis for a performant fuse implementation. However, there's a
massive amount of userspace work needed to get a native XFS FUSE
implementation up and running (even ignoring performance), so it's
just not a viable short-term - or even medium-term - solution to the
current problems.
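
For comparison, the kind of shim Kent is describing really is thin -
a kernel-API-shaped veneer over pthreads. A purely illustrative
sketch (not bcachefs-tools' actual code) of what the kthread side of
such a mapping looks like, minus the task naming and stop machinery:

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct kthread {
	pthread_t	thread;
	int		(*threadfn)(void *);
	void		*data;
	bool		should_stop;
};

static void *kthread_trampoline(void *arg)
{
	struct kthread *k = arg;

	k->threadfn(k->data);
	return NULL;
}

/* kthread_create()-shaped API on POSIX threads; the kernel version
 * returns a task_struct and takes a name format string as well. */
struct kthread *kthread_create(int (*threadfn)(void *), void *data)
{
	struct kthread *k = calloc(1, sizeof(*k));

	if (!k)
		return NULL;
	k->threadfn = threadfn;
	k->data = data;
	if (pthread_create(&k->thread, NULL, kthread_trampoline, k)) {
		free(k);
		return NULL;
	}
	return k;
}

The block layer -> aio side is the same shape of thing. The shims are
mechanical glue; the hard part is the sheer volume of other kernel
infrastructure a filesystem like XFS leans on.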
Indeed, if you do a fuse->fs ops wrapper, I'd argue that lklfuse is
the place to do it so that there is a single code base that supports
all kernel filesystems without requiring anyone to support a
separate userspace code base. Requiring every filesystem to do its
own FUSE port and then support it doesn't reduce the overall
maintenance burden on filesystem developers....
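
To put that concretely: in the lklfuse model, "supporting filesystem
X" amounts to passing a different type string to the mount(2) done
inside LKL - there is no per-filesystem userspace code at all. A
hedged sketch (the device and mountpoint paths are illustrative, and
the real code in lklfuse.c goes through LKL's disk-attach helpers):

#include <stdio.h>
#include <lkl.h>
#include <lkl_host.h>

/* Mount an image already attached as an LKL block device (see
 * lkl_disk_add() in the LKL tree for that step).  lkl_sys_mount()
 * is the autogenerated wrapper for mount(2). */
static long mount_image(const char *fstype)
{
	long ret = lkl_sys_mount((char *)"/dev/vda", (char *)"/mnt",
				 (char *)fstype, LKL_MS_RDONLY, NULL);

	if (ret < 0)
		fprintf(stderr, "mount failed: %s\n", lkl_strerror(ret));
	return ret;
}

Swap "xfs" for "ext4" or anything else the kernel build includes -
same binary, same single code base to maintain.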
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com