From: "Martin J. Bligh" <mbligh@mbligh.org>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>,
jschopp@austin.ibm.com, linux-mm@kvack.org,
lkml <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@osdl.org>
Subject: Re: Avoiding external fragmentation with a placement policy Version 12
Date: Wed, 08 Jun 2005 10:18:02 -0700 [thread overview]
Message-ID: <537960000.1118251081@[10.10.2.4]> (raw)
In-Reply-To: <Pine.LNX.4.58.0506081734480.10706@skynet>
>> > Unfortunately, it is a fundamental flaw of the buddy allocator that it
>> > fragments badly. The thing is, other allocators that do not fragment are
>> > also slower.
>>
>> Do we care? 99.9% of allocations are fronted by the hot/cold page cache
>> now anyway ...
>
> Very true, but only for order-0 allocations. As it is, higher order
> allocations are a lot less important because Linux has always avoided them
> unless absolutely necessary. I would like to reach the point where we can
> reliably allocate large blocks of memory so we do not have to split large
> amounts of data into page-sized chunks all the time.
Right. I agree that large allocs should be reliable. Whether we care
much about their performance is an interesting question ... I think the
answer is maybe not, within reason. The cost of fishing in the allocator
might well be irrelevant compared to the cost of freeing the necessary
memory area?
> I did measure it and there is a slow-down on high order allocations which
> is not very surprising. The following is the result of a micro-benchmark
> comparing the standard and modified allocator for 1500 order-5
> allocations.
>
> Standard
> Average Max Min Allocs
> ------- --- --- ------
> 0.73 1.09 0.53 1476
> 1.33 1.87 1.10 23
> 2.10 2.10 2.10 1
>
> Modified
> Average Max Min Allocs
> ------- --- --- ------
> 0.82 1.23 0.60 1440
> 1.36 1.96 1.23 57
> 2.42 2.92 2.09 3
>
> The average, max and min are in 1000's of clock cycles for an allocation
> so there is not a massive difference between the two allocators. Aim9
> still shows that overall, the modified allocator is as fast as the normal
> allocator.
Mmmm, that doesn't look too bad at all to me.
> High order allocations do slow down a lot when under memory pressure and
> neither allocator performs very well although the modified allocator
> probably performs worse as it has more lists to search. In the case of the
> placement policy though, I can work on the linear scanning patch to avoid
> using a blunderbuss on memory. With the standard allocator, linear scanning
> will not help significantly because non-reclaimable memory is scattered
> all over the place.
>
> I have also found that the modified allocator can fairly reliably allocate
> memory on a desktop system which has been running a full day where the
> standard allocator cannot. However, that experience is subjective and
> benchmarks based on loads like kernel compiles will not be anything like a
> desktop system. At the very least, kernel compiles, while they load the
> system, will not pin memory used for PTEs like a desktop running
> long-lived applications would.
>
> I'll work on reproducing scenarios where the standard allocator fails to
> allocate large blocks of memory without paging everything out, but where
> the placement policy succeeds.
Sounds great ... would be really valuable to get those testcases.
M.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org
Thread overview: 42+ messages
2005-05-31 11:20 Mel Gorman
2005-06-01 20:55 ` Joel Schopp
2005-06-01 23:09 ` Nick Piggin
2005-06-01 23:23 ` David S. Miller, Nick Piggin
2005-06-01 23:28 ` Martin J. Bligh
2005-06-01 23:43 ` Nick Piggin
2005-06-02 0:02 ` Martin J. Bligh
2005-06-02 0:20 ` Nick Piggin
2005-06-02 13:55 ` Mel Gorman
2005-06-02 15:52 ` Joel Schopp
2005-06-02 19:50 ` Ray Bryant
2005-06-02 20:10 ` Joel Schopp
2005-06-04 16:09 ` Marcelo Tosatti
2005-06-03 3:48 ` Nick Piggin
2005-06-03 4:49 ` David S. Miller, Nick Piggin
2005-06-03 5:34 ` Martin J. Bligh
2005-06-03 5:37 ` David S. Miller, Martin J. Bligh
2005-06-03 5:42 ` Martin J. Bligh
2005-06-03 5:51 ` David S. Miller, Martin J. Bligh
2005-06-03 13:13 ` Mel Gorman
2005-06-03 6:43 ` Nick Piggin
2005-06-03 13:57 ` Martin J. Bligh
2005-06-03 16:43 ` Dave Hansen
2005-06-03 18:43 ` David S. Miller, Dave Hansen
2005-06-04 1:44 ` Herbert Xu
2005-06-04 2:15 ` Nick Piggin
2005-06-05 19:52 ` David S. Miller, Nick Piggin
2005-06-03 13:05 ` Mel Gorman
2005-06-03 14:00 ` Martin J. Bligh
2005-06-08 17:03 ` Mel Gorman
2005-06-08 17:18 ` Martin J. Bligh [this message]
2005-06-10 16:20 ` Christoph Lameter
2005-06-10 17:53 ` Steve Lord
2005-06-02 18:28 ` Andi Kleen
2005-06-02 18:42 ` Martin J. Bligh
2005-06-02 13:15 ` Mel Gorman
2005-06-02 14:01 ` Martin J. Bligh
[not found] ` <20050603174706.GA25663@localhost.localdomain>
2005-06-03 17:56 ` Martin J. Bligh
2005-06-01 23:47 ` Mike Kravetz
2005-06-01 23:56 ` Nick Piggin
2005-06-02 0:07 ` Mike Kravetz
2005-06-02 9:49 ` Mel Gorman