linux-mm.kvack.org archive mirror
From: Christoph Lameter <clameter@sgi.com>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: netdev@vger.kernel.org, linux-mm@kvack.org,
	David Miller <davem@davemloft.net>
Subject: Re: [RFC][PATCH 1/6] mm: slab allocation fairness
Date: Thu, 30 Nov 2006 10:52:52 -0800 (PST)
Message-ID: <Pine.LNX.4.64.0611301049220.23820@schroedinger.engr.sgi.com>
In-Reply-To: <20061130101921.113055000@chello.nl>

On Thu, 30 Nov 2006, Peter Zijlstra wrote:

> The slab has some unfairness wrt gfp flags; when the slab is grown the gfp 
> flags are used to allocate more memory, however when there is slab space 
> available, gfp flags are ignored. Thus it is possible for less critical 
> slab allocations to succeed and gobble up precious memory.

The gfpflags are ignored if there are

1) objects in the per-cpu, shared or alien caches, or

2) objects in partial or free slabs on the per-node queues.
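
In rough pseudo-C, the order is something like this (a simplified sketch
only; the helper names are invented for illustration, not the actual
mm/slab.c identifiers):

/*
 * Simplified sketch of the allocation order described above.
 * take_from_cpu_caches(), take_from_node_queues() and
 * grow_cache_and_alloc() are made-up names, not real kernel code.
 */
static void *slab_alloc_sketch(struct kmem_cache *cachep, gfp_t flags)
{
	void *obj;

	/* 1) per-cpu, shared or alien caches: flags are not consulted */
	obj = take_from_cpu_caches(cachep);
	if (obj)
		return obj;

	/* 2) partial/free slabs on the per-node queues: still no flags */
	obj = take_from_node_queues(cachep);
	if (obj)
		return obj;

	/* only when the cache must grow do the gfp flags take effect */
	return grow_cache_and_alloc(cachep, flags);
}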

> This patch avoids this by keeping track of the allocation hardness when 
> growing. This is then compared to the current slab alloc's gfp flags.

The approach is to force the allocation of an additional slab in order to
increase the number of free slabs? The next free will then drop the number
of free slabs back to the allowed amount.
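
If I read the description right, the check would amount to something like
this (names invented here; the rank stands for the allocation hardness,
i.e. which watermarks may be ignored, recorded when the cache last grew):

/*
 * Illustrative sketch of the fairness check as described, not the
 * patch itself. gfp_to_rank() and cachep->grow_rank do not exist;
 * lower rank = more privileged allocation.
 */
static int may_take_cached_object(struct kmem_cache *cachep, gfp_t flags)
{
	/*
	 * Hand out a queued object only if the current request is at
	 * least as privileged as the one that grew the cache; weaker
	 * requests must go through the page allocator themselves.
	 */
	return gfp_to_rank(flags) <= cachep->grow_rank;
}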

I would think that one would need a rank associated with each cached object
and free slab in order to do this the right way.
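
Roughly this shape (purely illustrative; no such structure exists in
mm/slab.c):

/*
 * Hypothetical per-object rank: each cached object (and each free
 * slab) would remember the allocation hardness in effect when its
 * backing page was obtained, so the fairness decision can be made
 * per object instead of per cache.
 */
struct ranked_obj {
	void *mem;	/* the cached object itself */
	int rank;	/* hardness recorded when its slab was grown */
};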

Thread overview: 24+ messages
2006-11-30 10:14 [RFC][PATCH 0/6] VM deadlock avoidance -v9 Peter Zijlstra
2006-11-30 10:14 ` [RFC][PATCH 1/6] mm: slab allocation fairness Peter Zijlstra
2006-11-30 18:52   ` Christoph Lameter [this message]
2006-11-30 18:55     ` Peter Zijlstra
2006-11-30 19:33       ` Christoph Lameter
2006-11-30 19:33         ` Peter Zijlstra
2006-12-01 11:28         ` Peter Zijlstra
2006-11-30 19:02     ` Peter Zijlstra
2006-11-30 19:37       ` Christoph Lameter
2006-11-30 19:40         ` Peter Zijlstra
2006-11-30 20:11           ` Christoph Lameter
2006-11-30 20:15             ` Peter Zijlstra
2006-11-30 20:29               ` Christoph Lameter
2006-11-30 10:14 ` [RFC][PATCH 2/6] mm: allow PF_MEMALLOC from softirq context Peter Zijlstra
2006-11-30 10:14 ` [RFC][PATCH 3/6] mm: serialize access to min_free_kbytes Peter Zijlstra
2006-11-30 10:14 ` [RFC][PATCH 4/6] mm: emergency pool and __GFP_EMERGENCY Peter Zijlstra
2006-11-30 10:14 ` [RFC][PATCH 5/6] slab: kmem_cache_objs_to_pages() Peter Zijlstra
2006-11-30 18:55   ` Christoph Lameter
2006-11-30 18:55     ` Peter Zijlstra
2006-11-30 19:06       ` Christoph Lameter
2006-11-30 19:03         ` Peter Zijlstra
2006-12-01 12:14     ` Peter Zijlstra
2006-11-30 10:14 ` [RFC][PATCH 6/6] net: vm deadlock avoidance core Peter Zijlstra
2006-11-30 12:04   ` Peter Zijlstra
