From: ebiederm+eric@npwt.net (Eric W. Biederman)
To: "Stephen C. Tweedie" <sct@dcs.ed.ac.uk>
Cc: Hans Reiser <reiser@ricochet.net>,
Shawn Leas <sleas@ixion.honeywell.com>,
Reiserfs <reiserfs@devlinux.com>,
Ken Tetrick <ktetrick@ixion.honeywell.com>,
linux-mm@kvack.org
Subject: Re: (reiserfs) Re: More on Re: (reiserfs) Reiserfs and ext2fs (was Re: (reiserfs) Sum Benchmarks (these look typical?))
Date: 30 Jun 1998 19:17:15 -0500
Message-ID: <m1n2au77ck.fsf@flinx.npwt.net>
In-Reply-To: "Stephen C. Tweedie"'s message of Tue, 30 Jun 1998 17:10:46 +0100
>>>>> "ST" == Stephen C Tweedie <sct@dcs.ed.ac.uk> writes:
ST> Hi,
ST> On 29 Jun 1998 14:59:37 -0500, ebiederm+eric@npwt.net (Eric
ST> W. Biederman) said:
>> There are two problems I see.
>> 1) A DMA controller actively accessing the same memory the CPU is
>> accessing could be a problem. Recall video flicker on old video
>> cards.
ST> Shouldn't be a problem.
I'll believe it when either I trace through the code or a hardware
guy convinces me that it is safe to write to a page and do DMA from
it simultaneously.
>> 2) More importantly, the CPU writes to the _cache_, and the DMA
>> controller reads from RAM. I don't see any consistency guarantees
>> there. We may be able to solve these problems on a per-architecture
>> or per-device basis, however.
ST> Again, not important. If we ever modify a page which is already being
ST> written out to a device, then we mark that page dirty. On write, we
ST> mark it clean (but locked) _before_ starting the IO, not after. So, if
ST> there is ever an overlap of a filesystem/mmap write with an IO to disk,
ST> we will always schedule another IO later to clean the re-dirtied
ST> buffers.
Duh. I wonder what I was thinking...
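Just to spell the invariant out (a toy standalone sketch with made-up
names, not the actual buffer code):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of a page: only the two bits that matter here. */
struct toy_page {
	bool dirty;
	bool locked;
};

/* Clean *before* the IO starts, not after, and lock for the IO. */
static void start_writeout(struct toy_page *p)
{
	p->dirty  = false;
	p->locked = true;
	/* ... the device reads the page from here on ... */
}

/* A filesystem/mmap write that overlaps the IO re-dirties the page. */
static void overlapping_write(struct toy_page *p)
{
	p->dirty = true;
}

/* When the IO completes, a re-dirtied page is simply written again. */
static void end_writeout(struct toy_page *p)
{
	p->locked = false;
	if (p->dirty)
		printf("still dirty: schedule another IO\n");
	else
		printf("clean: nothing more to do\n");
}

int main(void)
{
	struct toy_page p = { .dirty = true, .locked = false };

	start_writeout(&p);
	overlapping_write(&p);	/* races with the IO */
	end_writeout(&p);	/* prints "still dirty: schedule another IO" */
	return 0;
}

So the worst an overlapping write can do is cost us a second IO; it
can never be lost.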
Anyhow, I've implemented the conservative version. The only change
needed is to switch from unmapping pages to clearing the dirty bit,
and the basic code stands.
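In toy form the change looks like this (made-up structures, not my
actual patch):

#include <stdbool.h>
#include <stdio.h>

/* Toy page table entry: just the bits this change cares about. */
struct toy_pte {
	bool present;	/* mapping still installed?         */
	bool dirty;	/* written to since the last clean? */
};

/* What the code did before: tear the mapping down entirely. */
static void unmap_page(struct toy_pte *pte)
{
	pte->present = false;
	pte->dirty   = false;
}

/* Conservative version: keep the mapping, only clear the dirty bit. */
static void clean_page(struct toy_pte *pte)
{
	pte->dirty = false;
}

/* A store through the mapping: faults if unmapped, else re-dirties. */
static void cpu_store(struct toy_pte *pte)
{
	if (!pte->present) {
		printf("fault: page has to be mapped back in\n");
		return;
	}
	pte->dirty = true;
	printf("store hit the mapping, page re-dirtied\n");
}

int main(void)
{
	struct toy_pte old = { .present = true, .dirty = true };
	struct toy_pte new = { .present = true, .dirty = true };

	unmap_page(&old);	/* old approach              */
	cpu_store(&old);	/* takes a fault             */

	clean_page(&new);	/* conservative version      */
	cpu_store(&new);	/* just re-dirties the page  */
	return 0;
}

The mapping stays in place, so an overlapping store re-dirties the
page instead of taking a fault.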
The most important change needed would be to tell unuse_page it can't
remove a locked page from the page cache. Either that, or I need to
worry about incrementing the count for page writes, which wouldn't be
a bad idea either.
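Roughly, the two options look like this (a toy sketch with made-up
structures, not the real unuse_page):

#include <stdbool.h>
#include <stdio.h>

/* Toy page: a use count, a lock bit, and page-cache membership. */
struct toy_page {
	int  count;
	bool locked;
	bool in_cache;
};

/* Option 1: unuse_page refuses to touch a locked page. */
static bool toy_unuse_page(struct toy_page *p)
{
	if (p->locked)
		return false;	/* leave it in the page cache */
	p->in_cache = false;
	return true;
}

/* Option 2: pin the page with an extra reference for the write. */
static void toy_page_write(struct toy_page *p)
{
	p->count++;		/* pin */
	p->locked = true;
	/* ... the write itself ... */
	p->locked = false;
	p->count--;		/* unpin */
}

int main(void)
{
	struct toy_page p = { .count = 1, .locked = true, .in_cache = true };

	if (!toy_unuse_page(&p))
		printf("page locked, left in the page cache\n");

	p.locked = false;
	toy_page_write(&p);
	printf("count back to %d after the write\n", p.count);
	return 0;
}

Either way the page can't vanish out from under the IO.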
Eric