linux-mm.kvack.org archive mirror
From: "Martin J. Bligh" <mbligh@mbligh.org>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>,
	jschopp@austin.ibm.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, akpm@osdl.org
Subject: Re: Avoiding external fragmentation with a placement policy Version 12
Date: Thu, 02 Jun 2005 07:01:37 -0700	[thread overview]
Message-ID: <333490000.1117720896@[10.10.2.4]> (raw)
In-Reply-To: <Pine.LNX.4.58.0506021120120.4112@skynet>

>> >> Other than the very minor whitespace changes above I have nothing bad to
>> >> say about this patch.  I think it is about time to pick it up in -mm for
>> >> wider testing.
>> >> 
>> > 
>> > It adds a lot of complexity to the page allocator and while
>> > it might be very good, the only improvement we've been shown
>> > yet is allocating lots of MAX_ORDER allocations I think? (ie.
>> > not very useful)
>> 
>> I agree that MAX_ORDER allocs aren't interesting, but we can hit
>> frag problems easily at way less than max order. CIFS does it, NFS
>> does it, jumbo frame gigabit ethernet does it, to name a few. The
>> most common failure I see is order 3.
>> 
> 
> I focused on the MAX_ORDER allocations for two reasons. The first is
> because they are very difficult to satisfy. If we can service MAX_ORDER
> allocations, we can certainly service order 3. The second is that my very
> long-term (and currently vapour-ware) aim is to transparently support
> large pages which will require 4MiB blocks on the x86 at least.

Oh, I wasn't arguing with your approach ... it's always better to go a bit
further. I was just illustrating that there are real-world problems right
now that hit this stuff, ergo we need it. Yes, I'd like to be able to do 
large page, memory hotplug, etc too ... but if people aren't excited about
those, there are plenty of other reasons to fix the frag problem.

> With this allocator, we are still using a blunderbuss approach but the
> chances of big enough chunks being available are a lot better. I released a
> proof-of-concept patch that freed pages by linearly scanning that worked
> very well, but it needs a lot of work. Linearly scanning would help
> guarantee high-order allocations but the penalty is that LRU-ordering
> would be violated.

Yes, that would be nice ... but we need to gather things into freeable and 
non-freeable either way, it seems, so it doesn't invalidate what you're 
doing at all.

It seems apparent statistically that the larger the machine, the worse the
frag problem is, as we'll blow away more memory before getting contig 
blocks. If it wasn't pre-7am, I'd try to calculate the statistics, but 
frankly, I can't be bothered ;-) I'm sure there are others whose math
degree is less rusty than mine, and I'd hate to deprive them of the 
opportunity to play ;-)

> To test lower-order allocations, I ran a slightly different test where I
> tried to allocate 6000 order-5 pages under heavy pressure. The standard
> allocator repeatedly went OOM and allocated 5190 pages. The modified one
> did not OOM and allocated 5961. The test is not very fair though because
> it pins memory and the allocations are type GFP_KERNEL. For the gigabit
> ethernet and network filesystem tests, I imagine we are dealing with
> GFP_ATOMIC or GFP_NFS?

cifsd: page allocation failure. order:3, mode:0xd0

M.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org

Thread overview: 42+ messages
2005-05-31 11:20 Mel Gorman
2005-06-01 20:55 ` Joel Schopp
2005-06-01 23:09   ` Nick Piggin
2005-06-01 23:23     ` David S. Miller, Nick Piggin
2005-06-01 23:28     ` Martin J. Bligh
2005-06-01 23:43       ` Nick Piggin
2005-06-02  0:02         ` Martin J. Bligh
2005-06-02  0:20           ` Nick Piggin
2005-06-02 13:55             ` Mel Gorman
2005-06-02 15:52             ` Joel Schopp
2005-06-02 19:50               ` Ray Bryant
2005-06-02 20:10                 ` Joel Schopp
2005-06-04 16:09                   ` Marcelo Tosatti
2005-06-03  3:48               ` Nick Piggin
2005-06-03  4:49                 ` David S. Miller, Nick Piggin
2005-06-03  5:34                   ` Martin J. Bligh
2005-06-03  5:37                     ` David S. Miller, Martin J. Bligh
2005-06-03  5:42                       ` Martin J. Bligh
2005-06-03  5:51                         ` David S. Miller, Martin J. Bligh
2005-06-03 13:13                         ` Mel Gorman
2005-06-03  6:43                     ` Nick Piggin
2005-06-03 13:57                       ` Martin J. Bligh
2005-06-03 16:43                         ` Dave Hansen
2005-06-03 18:43                           ` David S. Miller, Dave Hansen
2005-06-04  1:44                       ` Herbert Xu
2005-06-04  2:15                         ` Nick Piggin
2005-06-05 19:52                           ` David S. Miller, Nick Piggin
2005-06-03 13:05                 ` Mel Gorman
2005-06-03 14:00                   ` Martin J. Bligh
2005-06-08 17:03                     ` Mel Gorman
2005-06-08 17:18                       ` Martin J. Bligh
2005-06-10 16:20                         ` Christoph Lameter
2005-06-10 17:53                           ` Steve Lord
2005-06-02 18:28           ` Andi Kleen
2005-06-02 18:42             ` Martin J. Bligh
2005-06-02 13:15       ` Mel Gorman
2005-06-02 14:01         ` Martin J. Bligh [this message]
     [not found]       ` <20050603174706.GA25663@localhost.localdomain>
2005-06-03 17:56         ` Martin J. Bligh
2005-06-01 23:47     ` Mike Kravetz
2005-06-01 23:56       ` Nick Piggin
2005-06-02  0:07         ` Mike Kravetz
2005-06-02  9:49   ` Mel Gorman
