From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Andy Whitcroft <apw@shadowen.org>
Cc: Andrew Morton <akpm@osdl.org>, Mel Gorman <mel@csn.ul.ie>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/2] Synchronous Lumpy Reclaim V3
Date: Thu, 02 Aug 2007 20:35:10 +0200
Message-ID: <1186079710.11797.12.camel@lappy>
In-Reply-To: <exportbomb.1186077923@pinky>
On Thu, 2007-08-02 at 19:17 +0100, Andy Whitcroft wrote:
> [This is a re-spin based on feedback from akpm.]
>
> As pointed out by Mel, when reclaim is applied at higher orders a
> significant amount of IO may be started.  As this IO takes a finite
> time to drain, reclaim will consider more areas than ultimately
> needed to satisfy the request.  This leads to more reclaim than
> strictly required and to reduced success rates.
>
> I was able to confirm Mel's test results on local systems.  These
> show that even under light load the success rates drop off far
> more than expected.  Testing with a modified version of his patch
> (which follows), I was able to allocate almost all of ZONE_MOVABLE
> on a nearly idle system.  I ran 5 test passes sequentially following
> system boot (the system has 29 hugepages in ZONE_MOVABLE):
>
>               pass 1  pass 2  pass 3  pass 4  pass 5
>  2.6.23-rc1       11       8       6       7       7
>  sync_lumpy       28      28      29      29      26
>
> These show that, although hugely better than the near 0% success
> normally expected, 2.6.23-rc1 can only allocate about 1/4 of the
> zone.  Using synchronous reclaim for these allocations we get close
> to 100%, as expected.
>
> I have also run our standard high order tests and these show no
> regressions in allocation success rates at rest, and some significant
> improvements under load.
>
> Following this email are two patches; both should be considered
> bug fixes to lumpy reclaim for 2.6.23:
>
> ensure-we-count-pages-transitioning-inactive-via-clear_active_flags:
> this is a bug fix for Lumpy Reclaim, correcting the VM Event
> accounting when it marks pages inactive, and
>
> Wait-for-page-writeback-when-directly-reclaiming-contiguous-areas:
> updates reclaim, making direct reclaim synchronous when applied
> at orders above PAGE_ALLOC_COSTLY_ORDER.
>
> Patches are against 2.6.23-rc1.  Andrew, please consider these for
> -mm and for pushing to mainline.
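To make the second fix concrete, here is a minimal illustrative sketch
of the idea, not the actual patch: when reclaiming for a contiguous
request above PAGE_ALLOC_COSTLY_ORDER, wait for in-flight writeback to
finish instead of skipping the page.  PageWriteback(),
wait_on_page_writeback(), struct scan_control (sc) and
PAGE_ALLOC_COSTLY_ORDER are existing kernel names; the surrounding
control flow is simplified and assumed:

	/*
	 * Illustrative sketch only.  For costly, contiguous allocations
	 * wait for writeback rather than skipping the page, so that the
	 * whole contiguous area can actually be freed.
	 */
	if (PageWriteback(page)) {
		if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
			/* synchronous: sleep until the IO completes */
			wait_on_page_writeback(page);
		else
			/* order-0 reclaim keeps the old asynchronous path */
			goto keep_locked;
	}

The point of the check is that the synchronous wait is only paid for
requests above PAGE_ALLOC_COSTLY_ORDER, so ordinary order-0 reclaim
behaviour is unchanged.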
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Thread overview: 6+ messages
2007-08-02 18:17 [PATCH 0/2] Synchronous Lumpy Reclaim V3 Andy Whitcroft
2007-08-02 18:18 ` [PATCH 1/2] ensure we count pages transitioning inactive via clear_active_flags Andy Whitcroft
2007-08-02 18:18 ` [PATCH 2/2] wait for page writeback when directly reclaiming contiguous areas Andy Whitcroft
2007-08-06 19:22 ` Andrew Morton
2007-08-07 15:31 ` [PATCH] synchronous lumpy: improve commentary on writeback wait Andy Whitcroft
2007-08-02 18:35 ` Peter Zijlstra [this message]