linux-mm.kvack.org archive mirror
From: Nick Piggin <nickpiggin@yahoo.com.au>
To: "Martin J. Bligh" <mbligh@mbligh.org>
Cc: jschopp@austin.ibm.com, Mel Gorman <mel@csn.ul.ie>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@osdl.org
Subject: Re: Avoiding external fragmentation with a placement policy Version 12
Date: Thu, 02 Jun 2005 10:20:08 +1000	[thread overview]
Message-ID: <429E50B8.1060405@yahoo.com.au> (raw)
In-Reply-To: <434510000.1117670555@flay>

Martin J. Bligh wrote:

> There's one example ... we can probably work around it if we try hard
> enough. However, the fundamental question becomes "do we support higher
> order allocs, or not?". If not fine ... but we ought to quit pretending
> we do. If so, then we need to make them more reliable.
> 

It appears that we basically support allocations of order 3 and
below (free areas of that size stay intact in the page allocator
until something breaks them up).

I see your point... Mel's patch has failure cases though.
For example, someone turns swap off, or mlocks some memory
(I guess we then add the page migration defrag patch and
problem is solved?).

I do see your point. The extra complexity makes me cringe though
(no offence to Mel - I'm sure it is a complex problem).

>>Yeah more or less. But with the fragmentation patch, it by
>>no means becomes an exact science ;) I wouldn't have thought
>>it would make it hugely easier to free an order 2 or 3
>>memory block on a loaded machine.
> 
> 
> Ummm. so the blunderbuss is an exact science? ;-) At least it fairly
> consistently doesn't work, I suppose ;-) ;-)
>  

No, but I was just saying it is another degree of
"unsupportedness" (or supportedness, if you're a glass-half-full man).

>>Why not just have kernel allocations going from the bottom
>>up, and user allocations going from the top down. That would
>>get you most of the way there, wouldn't it? (disclaimer: I
>>could well be talking shit here).
> 
> 
> Not sure it's quite that simple, though I haven't looked in detail
> at these patches. My point was merely that we need to do *something*.
> Off the top of my head ... what happens when kernel meets user in
> the middle? Where do we free and allocate from now? ;-) Once we've
> been up for a while, mem is nearly all used, nearly all of the time.
> 

No, I'm quite sure it isn't that simple, unfortunately. Hence
disclaimer ;)

> Is a good discussion to have though ;-)
> 

Yep, I was trying to help get something going!


Thread overview: 42+ messages
2005-05-31 11:20 Mel Gorman
2005-06-01 20:55 ` Joel Schopp
2005-06-01 23:09   ` Nick Piggin
2005-06-01 23:23     ` David S. Miller, Nick Piggin
2005-06-01 23:28     ` Martin J. Bligh
2005-06-01 23:43       ` Nick Piggin
2005-06-02  0:02         ` Martin J. Bligh
2005-06-02  0:20           ` Nick Piggin [this message]
2005-06-02 13:55             ` Mel Gorman
2005-06-02 15:52             ` Joel Schopp
2005-06-02 19:50               ` Ray Bryant
2005-06-02 20:10                 ` Joel Schopp
2005-06-04 16:09                   ` Marcelo Tosatti
2005-06-03  3:48               ` Nick Piggin
2005-06-03  4:49                 ` David S. Miller, Nick Piggin
2005-06-03  5:34                   ` Martin J. Bligh
2005-06-03  5:37                     ` David S. Miller, Martin J. Bligh
2005-06-03  5:42                       ` Martin J. Bligh
2005-06-03  5:51                         ` David S. Miller, Martin J. Bligh
2005-06-03 13:13                         ` Mel Gorman
2005-06-03  6:43                     ` Nick Piggin
2005-06-03 13:57                       ` Martin J. Bligh
2005-06-03 16:43                         ` Dave Hansen
2005-06-03 18:43                           ` David S. Miller, Dave Hansen
2005-06-04  1:44                       ` Herbert Xu
2005-06-04  2:15                         ` Nick Piggin
2005-06-05 19:52                           ` David S. Miller, Nick Piggin
2005-06-03 13:05                 ` Mel Gorman
2005-06-03 14:00                   ` Martin J. Bligh
2005-06-08 17:03                     ` Mel Gorman
2005-06-08 17:18                       ` Martin J. Bligh
2005-06-10 16:20                         ` Christoph Lameter
2005-06-10 17:53                           ` Steve Lord
2005-06-02 18:28           ` Andi Kleen
2005-06-02 18:42             ` Martin J. Bligh
2005-06-02 13:15       ` Mel Gorman
2005-06-02 14:01         ` Martin J. Bligh
     [not found]       ` <20050603174706.GA25663@localhost.localdomain>
2005-06-03 17:56         ` Martin J. Bligh
2005-06-01 23:47     ` Mike Kravetz
2005-06-01 23:56       ` Nick Piggin
2005-06-02  0:07         ` Mike Kravetz
2005-06-02  9:49   ` Mel Gorman
