From: Daniel Phillips <phillips@bonn-fries.net>
To: Anton Altaparmakov <aia21@cam.ac.uk>,
"Stephen C. Tweedie" <sct@redhat.com>
Cc: Chris Mason <mason@suse.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC] using writepage to start io
Date: Tue, 7 Aug 2001 15:29:26 +0200 [thread overview]
Message-ID: <01080715292606.02365@starship> (raw)
In-Reply-To: <5.1.0.14.2.20010807123805.027f19a0@pop.cus.cam.ac.uk>
On Tuesday 07 August 2001 14:02, Anton Altaparmakov wrote:
> At 12:02 07/08/01, Stephen C. Tweedie wrote:
> >On Mon, Aug 06, 2001 at 11:18:26PM +0200, Daniel Phillips wrote:
> >FWIW, we've seen big performance degradations in the past when
> > testing different ext3 checkpointing modes. You can't reuse a disk
> > block in the journal without making sure that the data in it has
> > been flushed to disk, so ext3 does regular checkpointing to flush
> > journaled blocks out. That can interact very badly with normal VM
> > writeback if you're not careful: having two threads doing the same
> > thing at the same time can just thrash the disk.
> >
> >Parallel sync() calls from multiple processes have shown up the same
> >behaviour on ext2 in the past. I'd definitely like to see at most
> >one thread of writeback per disk to avoid that.
>
> Why not have a facility with which each fs can register their own
> writeback functions with a time interval? The daemon would be doing
> the writing to the device and would be invoking the fs registered
> writers every <time interval> seconds. That would avoid the problem
> of having two fs trying to write in parallel, though it ignores the
> case of two parallel writers on separate partitions of the same
> disk; that could be solved at the fs writeback function level.
>
> At least for NTFS TNG I was thinking of having a daemon running every
> 5 seconds and committing dirty data to disk but it would be iterating
> over all mounted ntfs volumes in sequence and flushing all dirty data
> for each, thus avoiding concurrent writing to the same disk, which I
> had thought might cause a problem and you just confirmed it...[1]
Let me see:
Ext3 has its own writeback daemon
ReiserFS has its own writeback daemon
NTFS quite possibly will have its own writeback daemon
Tux2 has its own writeback daemon
xfs... does it?
jfs?
And then there is kupdate, which is a writeback daemon for all those
filesystems too dumb to have their own.
I think I see a pattern here. We must come up with a model for
efficient interaction between these writeback daemons, or better yet,
a generic mechanism that handles the scheduling for all fs writeback
and knows about the fs->device topology.
> [1] I am aware this probably doesn't scale too well but considering a
> volume can span several disk partitions on the same disk or across
> several disks I don't see how to parallelize at the fs level.
One thread per block device; flushes across mounts on the same device
are serialized. This model works well for fs->device graphs that are
strict trees. For a non-strict tree (i.e., a DAG) it's not clear
what to do, but you could argue that such a configuration is stupid,
so any kind of punt would do.
--
Daniel
--