From: Christoph Lameter <christoph@lameter.com>
To: Matthew Dobson <colpatch@us.ibm.com>
Cc: Christoph Lameter <clameter@engr.sgi.com>,
Dave Hansen <haveblue@us.ibm.com>,
"Martin J. Bligh" <mbligh@mbligh.org>,
Jesse Barnes <jbarnes@virtuousgeek.org>,
Andy Whitcroft <apw@shadowen.org>, Andrew Morton <akpm@osdl.org>,
linux-mm <linux-mm@kvack.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
shai@scalex86.org, steiner@sgi.com
Subject: Re: NUMA aware slab allocator V3
Date: Wed, 18 May 2005 14:40:36 -0700 (PDT)
Message-ID: <Pine.LNX.4.62.0505181439080.10598@graphe.net>
In-Reply-To: <428BB05B.6090704@us.ibm.com>
On Wed, 18 May 2005, Matthew Dobson wrote:
> I can't promise anything, but if you send me the latest version of your
> patch (preferably with the loops fixed to eliminate the possibility of it
> accessing an unavailable/unusable node), I can build & boot it on a PPC64
> box and see what happens.
OK. Maybe one of the other fixes below will resolve the issue.
------------------
Fixes to the slab allocator in 2.6.12-rc4-mm2
- Remove the MAX_NUMNODES check
- Use for_each_node/for_each_cpu (see the sketch below)
- Fix the determination of INDEX_AC (see the note after the patch)
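(Illustration only, not a hunk from the patch: the loop conversions all
follow the pattern below. visit() is a hypothetical stand-in for the
per-node or per-cpu work.)

	#include <linux/cpumask.h>	/* for_each_cpu() */
	#include <linux/nodemask.h>	/* for_each_node() */

	int i;

	/* Old style: walks every compile-time slot up to MAX_NUMNODES
	 * (or NR_CPUS), including ids the running system cannot have. */
	for (i = 0; i < MAX_NUMNODES; i++)
		visit(i);

	/* New style: in this tree for_each_node()/for_each_cpu() expand
	 * to walks over node_possible_map/cpu_possible_map, so ids that
	 * are not possible on this system are never visited. */
	for_each_node(i)
		visit(i);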
Signed-off-by: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Alok N Kataria <alokk@calsoftinc.com>
Index: linux-2.6.12-rc4/mm/slab.c
===================================================================
--- linux-2.6.12-rc4.orig/mm/slab.c 2005-05-17 02:20:02.000000000 +0000
+++ linux-2.6.12-rc4/mm/slab.c 2005-05-18 21:36:51.000000000 +0000
@@ -108,17 +108,6 @@
#include <asm/page.h>
/*
- * Some Linux kernels currently have weird notions of NUMA. Make sure that
- * there is only a single node if CONFIG_NUMA is not set. Remove this check
- * after the situation has stabilized.
- */
-#ifndef CONFIG_NUMA
-#if MAX_NUMNODES != 1
-#error "Broken Configuration: CONFIG_NUMA not set but MAX_NUMNODES !=1 !!"
-#endif
-#endif
-
-/*
* DEBUG - 1 for kmem_cache_create() to honour; SLAB_DEBUG_INITIAL,
* SLAB_RED_ZONE & SLAB_POISON.
* 0 for faster, smaller code (especially in the critical paths).
@@ -341,7 +330,7 @@
}
}
-#define INDEX_AC index_of(sizeof(struct array_cache))
+#define INDEX_AC index_of(sizeof(struct arraycache_init))
#define INDEX_L3 index_of(sizeof(struct kmem_list3))
#ifdef CONFIG_NUMA
@@ -800,7 +789,7 @@
limit = 12;
ac_ptr = kmalloc_node(memsize, GFP_KERNEL, node);
if (ac_ptr) {
- for (i = 0; i < MAX_NUMNODES; i++) {
+ for_each_node(i) {
if (i == node) {
ac_ptr[i] = NULL;
continue;
@@ -823,7 +812,7 @@
if (!ac_ptr)
return;
- for (i = 0; i < MAX_NUMNODES; i++)
+ for_each_node(i)
kfree(ac_ptr[i]);
kfree(ac_ptr);
@@ -847,7 +836,7 @@
struct array_cache *ac;
unsigned long flags;
- for (i = 0; i < MAX_NUMNODES; i++) {
+ for_each_node(i) {
ac = l3->alien[i];
if (ac) {
spin_lock_irqsave(&ac->lock, flags);
@@ -1197,7 +1186,7 @@
* Register the timers that return unneeded
* pages to gfp.
*/
- for (cpu = 0; cpu < NR_CPUS; cpu++) {
+ for_each_cpu(cpu) {
if (cpu_online(cpu))
start_cpu_timer(cpu);
}
@@ -1986,7 +1975,7 @@
drain_cpu_caches(cachep);
check_irq_on();
- for (i = 0; i < MAX_NUMNODES; i++) {
+ for_each_node(i) {
l3 = cachep->nodelists[i];
if (l3) {
spin_lock_irq(&l3->list_lock);
@@ -2064,11 +2053,11 @@
/* no cpu_online check required here since we clear the percpu
* array on cpu offline and set this to NULL.
*/
- for (i = 0; i < NR_CPUS; i++)
+ for_each_cpu(i)
kfree(cachep->array[i]);
/* NUMA: free the list3 structures */
- for (i = 0; i < MAX_NUMNODES; i++) {
+ for_each_node(i) {
if ((l3 = cachep->nodelists[i])) {
kfree(l3->shared);
#ifdef CONFIG_NUMA
@@ -2975,7 +2964,7 @@
if (!pdata)
return NULL;
- for (i = 0; i < NR_CPUS; i++) {
+ for_each_cpu(i) {
if (!cpu_possible(i))
continue;
pdata->ptrs[i] = kmalloc_node(size, GFP_KERNEL,
@@ -3075,7 +3064,7 @@
int i;
struct percpu_data *p = (struct percpu_data *) (~(unsigned long) objp);
- for (i = 0; i < NR_CPUS; i++) {
+ for_each_cpu(i) {
if (!cpu_possible(i))
continue;
kfree(p->ptrs[i]);
@@ -3189,7 +3178,7 @@
struct kmem_list3 *l3;
int err = 0;
- for (i = 0; i < NR_CPUS; i++) {
+ for_each_cpu(i) {
if (cpu_online(i)) {
struct array_cache *nc = NULL, *new;
#ifdef CONFIG_NUMA
@@ -3280,7 +3269,7 @@
int i, err;
memset(&new.new,0,sizeof(new.new));
- for (i = 0; i < NR_CPUS; i++) {
+ for_each_cpu(i) {
if (cpu_online(i)) {
new.new[i] = alloc_arraycache(i, limit, batchcount);
if (!new.new[i]) {
@@ -3302,7 +3291,7 @@
cachep->shared = shared;
spin_unlock_irq(&cachep->spinlock);
- for (i = 0; i < NR_CPUS; i++) {
+ for_each_cpu(i) {
struct array_cache *ccold = new.new[i];
if (!ccold)
continue;
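A note on the INDEX_AC hunk above: during bootstrap the per-cpu array
caches are allocated as struct arraycache_init, which embeds
struct array_cache plus the initial entry pointers, so index_of() must
be given the size of the outer structure or the general cache chosen at
boot is too small for the object actually stored in it. Sketch of the
two structures as defined in this era's mm/slab.c (illustration, not
part of the patch):

	struct array_cache {
		unsigned int avail;
		unsigned int limit;
		unsigned int batchcount;
		unsigned int touched;
		spinlock_t lock;
	};

	#define BOOT_CPUCACHE_ENTRIES	1

	struct arraycache_init {
		struct array_cache cache;
		/* the storage that sizeof(struct array_cache) misses */
		void *entries[BOOT_CPUCACHE_ENTRIES];
	};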