From: Andrew Morton <akpm@digeo.com>
To: Daniel Phillips <phillips@arcor.de>
Cc: lkml <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"lse-tech@lists.sourceforge.net" <lse-tech@lists.sourceforge.net>
Subject: Re: 2.5.35-mm1
Date: Thu, 19 Sep 2002 01:19:40 -0700
Message-ID: <3D89889C.F5868818@digeo.com>
In-Reply-To: <E17rw5X-0000vG-00@starship>
Daniel Phillips wrote:
>
> On Monday 16 September 2002 09:15, Andrew Morton wrote:
> > A 4x performance regression in heavy dbench testing has been fixed. The
> > VM was accidentally being fair to the dbench instances in page reclaim.
> > It's better to be unfair so just a few instances can get ahead and submit
> > more contiguous IO. It's a silly thing, but it's what I meant to do anyway.
>
> Curious... did the performance hit show anywhere other than dbench?
Other benchmarky tests would have suffered, but I did not check.
I have logic in there which is designed to throttle heavy writers
within the page allocator, as well as within balance_dirty_pages().
Basically:

generic_file_write()
{
        /* mark this task as willing to block on this file's queue */
        current->backing_dev_info = mapping->backing_dev_info;
        alloc_page();
        current->backing_dev_info = NULL;
}

shrink_list()
{
        if (PageDirty(page)) {
                if (page->mapping->backing_dev_info == current->backing_dev_info)
                        blocking_write(page->mapping);
                else
                        nonblocking_write(page->mapping);
        }
}

What this says is "if this task is prepared to block against this
page's queue, then write the dirty data, even if that would block".
This means that all the dbench instances will write each other's
dirty data as it comes off the tail of the LRU. This provides
some additional throttling, and means that we don't just refile
the page.
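
To spell that decision out, here's a minimal standalone sketch - the
types and helpers below are made-up stand-ins for illustration, not
the real kernel ones:

#include <stdio.h>

/* Illustrative stand-ins for the kernel types - not kernel code. */
struct backing_dev_info { int id; };
struct address_space { struct backing_dev_info *backing_dev_info; };
struct page { struct address_space *mapping; };
struct task { struct backing_dev_info *backing_dev_info; };

static struct task *current_task;       /* stand-in for the kernel's `current' */

static void blocking_write(struct address_space *mapping)
{
        (void)mapping;
        printf("blocking write: OK to sleep on this queue\n");
}

static void nonblocking_write(struct address_space *mapping)
{
        (void)mapping;
        printf("non-blocking write: just kick off the IO\n");
}

/* The shrink_list() decision from above, spelled out. */
static void write_dirty_page(struct page *page)
{
        if (page->mapping->backing_dev_info == current_task->backing_dev_info)
                blocking_write(page->mapping);    /* our own queue */
        else
                nonblocking_write(page->mapping); /* someone else's */
}

int main(void)
{
        struct backing_dev_info bdi = { 1 };
        struct address_space mapping = { &bdi };
        struct page page = { &mapping };
        struct task me = { &bdi };

        current_task = &me;
        write_dirty_page(&page);        /* same queue: would block */

        me.backing_dev_info = NULL;     /* task not marked against any queue */
        write_dirty_page(&page);        /* foreign queue: must not block */
        return 0;
}
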
But the logic was not correctly implemented. The dbench instances
were performing non-blocking writes. This meant that all 64 instances
were cheerfully running all the time, submitting IO all over the disk.
The /proc/meminfo:Writeback figure never even hit a megabyte. That
number tells us how much memory is currently in the request queue.
Clearly, the IO was very fragmented.
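
For anyone who wants to watch that figure, a trivial userspace reader
(purely illustrative; this is not part of the patch):

#include <stdio.h>
#include <string.h>

/* Print the Writeback: line from /proc/meminfo. */
int main(void)
{
        char line[128];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f) {
                perror("/proc/meminfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f))
                if (strncmp(line, "Writeback:", 10) == 0)
                        fputs(line, stdout);
        fclose(f);
        return 0;
}
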
By forcing the dbench instances to block on the queue, particular
instances were able to submit decent amounts of IO. The `Writeback'
figure went back up to around 4 megabytes, because the individual
requests were larger - more merging.
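
(Back-of-the-envelope, with made-up numbers: 4 megabytes of Writeback
split into 4k requests is around 1000 separate requests; merged into
256k requests it is just 16, which is far kinder to the disk.)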