From: David Wragg <dpw@doc.ic.ac.uk>
To: "Benjamin C.R. LaHaise" <blah@kvack.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: limit on number of kmapped pages
Date: 24 Jan 2001 10:09:22 +0000
Message-ID: <y7r7l3ldzxp.fsf@sytry.doc.ic.ac.uk>
In-Reply-To: "Benjamin C.R. LaHaise"'s message of "Tue, 23 Jan 2001 21:03:09 -0500 (EST)"
"Benjamin C.R. LaHaise" <blah@kvack.org> writes:
> On 24 Jan 2001, David Wragg wrote:
>
> > ebiederm@xmission.com (Eric W. Biederman) writes:
> > > Why do you need such a large buffer?
> >
> > ext2 doesn't guarantee sustained write bandwidth (in particular,
> > writing a page to an ext2 file can have a high latency due to reading
> > the block bitmap synchronously). To deal with this I need at least a
> > 2MB buffer.
>
> This is the wrong way of going about things -- you should probably insert
> the pages into the page cache and write them into the filesystem via
> writepage.
I currently use prepare_write/commit_write, but I think writepage
would have the same issue: when ext2 allocates a block and has to
allocate from a new block group, it may do a synchronous read of the
new block group's bitmap. So before the writepage (or whatever)
triggering the allocation can complete, it has to wait for the read to
be picked up by the elevator, for the seek, and so on. By the time
writing proceeds normally again, I've buffered a couple of MB of data.
But I do have a workaround for the ext2 issue.
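
(To make the timing concrete, the write path I'm using looks roughly
like the sketch below. This is only a sketch against the 2.4
address_space operations; my_write_one_page() and the src buffer are my
own placeholders, and the error handling is abbreviated.)

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Sketch only: push one page's worth of data through the page cache
 * using the 2.4 address_space operations.  "src" stands in for
 * wherever the data really comes from.
 */
static int my_write_one_page(struct file *file, struct address_space *mapping,
                             unsigned long index, const char *src, unsigned len)
{
        struct page *page;
        char *kaddr;
        int err;

        page = grab_cache_page(mapping, index);  /* locked, in the page cache */
        if (!page)
                return -ENOMEM;

        kaddr = kmap(page);  /* the kmap needed before prepare_write */

        /*
         * prepare_write() is where ext2 may allocate a block; if the
         * allocation moves to a new block group, that group's bitmap is
         * read synchronously and this call blocks until the I/O is done.
         */
        err = mapping->a_ops->prepare_write(file, page, 0, len);
        if (!err) {
                memcpy(kaddr, src, len);  /* the copy into the page */
                err = mapping->a_ops->commit_write(file, page, 0, len);
        }

        kunmap(page);
        UnlockPage(page);
        page_cache_release(page);
        return err;
}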
> That way the pages don't need to be mapped while being written
> out.
Point taken, though the kmap needed before prepare_write is much less
significant than the kmap I need to do before copying data into the
page.
> For incoming data from a network socket, make use of the data_ready
> callbacks and copy directly from the skbs in one pass, with a kmap of
> only one page at a time.
>
> Maybe I'm guessing incorrectly at what is being attempted, but kmap
> should be used sparingly and as briefly as possible.
I'm going to see if the one-page-kmapped approach makes a measurable
difference.
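
If I've understood the suggestion, it looks roughly like the sketch
below (my placeholders only -- the destination-page bookkeeping,
locking and non-linear skb handling are all glossed over, and real
code would probably go through tcp_read_sock()/skb_copy_bits() rather
than touching skb->data directly):

#include <linux/skbuff.h>
#include <linux/highmem.h>
#include <linux/string.h>
#include <net/sock.h>

/*
 * Sketch only: drain the receive queue from the data_ready callback,
 * copying into destination pages with just one page kmapped at a time.
 * dest_page(), dest_offset() and dest_advance() are placeholders for
 * however the destination pages are tracked.
 */
static void my_data_ready(struct sock *sk, int bytes)
{
        struct sk_buff *skb;

        while ((skb = skb_dequeue(&sk->receive_queue)) != NULL) {
                unsigned int copied = 0;

                while (copied < skb->len) {
                        struct page *page = dest_page();      /* placeholder */
                        unsigned int offset = dest_offset();  /* placeholder */
                        unsigned int n = skb->len - copied;
                        char *kaddr;

                        if (n > PAGE_SIZE - offset)
                                n = PAGE_SIZE - offset;

                        kaddr = kmap(page);
                        memcpy(kaddr + offset, skb->data + copied, n);
                        kunmap(page);       /* never more than one page mapped */

                        copied += n;
                        dest_advance(n);    /* placeholder */
                }
                kfree_skb(skb);
        }
}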
I'd still like to know what the basis for the current kmap limit
setting is.
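
(My current understanding, which may be out of date, is that on i386
the limit comes from the fixed-size pkmap window in
include/asm-i386/highmem.h -- something like the following, i.e. a 4MB
window of kmap slots, or 2MB with PAE:)

/* From include/asm-i386/highmem.h, as I remember it -- please check
 * the exact values in your tree: */
#ifdef CONFIG_X86_PAE
#define LAST_PKMAP 512
#else
#define LAST_PKMAP 1024
#endif
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
#define PKMAP_NR(virt)  ((virt - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))

What I'd like to know is why that particular size was chosen.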
David Wragg