From: "Martin J. Bligh" <mbligh@aracnet.com>
To: Daniel Phillips <phillips@arcor.de>, Mel Gorman <mel@csn.ul.ie>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC] My research agenda for 2.7
Date: Fri, 27 Jun 2003 08:04:39 -0700
Message-ID: <25700000.1056726277@[10.10.2.4]>
In-Reply-To: <200306271654.46491.phillips@arcor.de>
--Daniel Phillips <phillips@arcor.de> wrote (on Friday, June 27, 2003 16:54:46 +0200):
> On Friday 27 June 2003 16:43, Martin J. Bligh wrote:
>> The buddy allocator is not a good system for getting rid of fragmentation.
>
> We've talked in the past about throwing out the buddy allocator and adopting
> something more modern and efficient and I hope somebody will actually get
> around to doing that. In any event, defragging is an orthogonal issue. Some
> allocation strategies may be statistically more resistiant to fragmentation
> than others, but no allocator has been invented, or ever will be, that can
> guarantee that terminal fragmentation will never occur - only active
> defragmentation can provide such a guarantee.
Whilst I agree with that in principle, it's inevitably expensive. Thus
whilst we may need to have that code, we should try to avoid using it ;-)
The buddy allocator is obviously flawed in this department ... strategies
like allocating a 4M block to a process up front, then allocating out of
that until we're low on mem, then reclaiming in as large blocks as possible
from those process caches, etc, etc, would obviously help too. Though maybe
we're just permanently low on mem after a while, so it'd be better to just
group pagecache pages together ... that would actually be pretty simple to
change ... hmmm.
M.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org
Thread overview: 30+ messages
2003-06-24 23:11 Daniel Phillips
2003-06-25 0:47 ` William Lee Irwin III
2003-06-25 1:07 ` Daniel Phillips
2003-06-25 1:10 ` William Lee Irwin III
2003-06-25 1:25 ` Daniel Phillips
2003-06-25 1:30 ` William Lee Irwin III
2003-06-25 9:29 ` Mel Gorman
2003-06-26 19:00 ` Daniel Phillips
2003-06-26 20:01 ` Mel Gorman
2003-06-26 20:10 ` Andrew Morton
2003-06-27 0:30 ` Daniel Phillips
2003-06-27 13:00 ` Mel Gorman
2003-06-27 14:38 ` Martin J. Bligh
2003-06-27 14:41 ` Daniel Phillips
2003-06-27 14:43 ` Martin J. Bligh
2003-06-27 14:54 ` Daniel Phillips
2003-06-27 15:04 ` Martin J. Bligh [this message]
2003-06-27 15:17 ` Daniel Phillips
2003-06-27 15:22 ` Mel Gorman
2003-06-27 15:50 ` Daniel Phillips
2003-06-27 16:00 ` Daniel Phillips
2003-06-29 19:25 ` Mel Gorman
2003-06-28 21:06 ` Daniel Phillips
2003-06-29 21:26 ` Mel Gorman
2003-06-28 21:54 ` Daniel Phillips
2003-06-29 22:07 ` William Lee Irwin III
2003-06-28 23:18 ` Daniel Phillips
2003-07-02 21:10 ` Mike Fedyk
2003-07-03 2:04 ` Larry McVoy
2003-07-03 2:20 ` William Lee Irwin III