linux-mm.kvack.org archive mirror
From: clameter@sgi.com
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org
Subject: [patch 07/10] Memoryless nodes: SLUB support
Date: Mon, 18 Jun 2007 12:20:03 -0700	[thread overview]
Message-ID: <20070618192545.764710140@sgi.com> (raw)
In-Reply-To: <20070618191956.411091458@sgi.com>

[-- Attachment #1: memless_slub --]
[-- Type: text/plain, Size: 3389 bytes --]

Simply switch all for_each_online_node instances to for_each_memory_node. That
way SLUB only operates on nodes with memory. Any allocation attempt on a
memoryless node will fall back, whereupon SLUB will fetch memory from a nearby
node (depending on how memory policies and cpusets describe the fallback).

Signed-off-by: Christoph Lameter <clameter@sgi.com>

Index: linux-2.6.22-rc4-mm2/mm/slub.c
===================================================================
--- linux-2.6.22-rc4-mm2.orig/mm/slub.c	2007-06-18 11:16:15.000000000 -0700
+++ linux-2.6.22-rc4-mm2/mm/slub.c	2007-06-18 11:28:50.000000000 -0700
@@ -2086,7 +2086,7 @@ static void free_kmem_cache_nodes(struct
 {
 	int node;
 
-	for_each_online_node(node) {
+	for_each_memory_node(node) {
 		struct kmem_cache_node *n = s->node[node];
 		if (n && n != &s->local_node)
 			kmem_cache_free(kmalloc_caches, n);
@@ -2104,7 +2104,7 @@ static int init_kmem_cache_nodes(struct 
 	else
 		local_node = 0;
 
-	for_each_online_node(node) {
+	for_each_memory_node(node) {
 		struct kmem_cache_node *n;
 
 		if (local_node == node)
@@ -2366,7 +2366,7 @@ static inline int kmem_cache_close(struc
 	/* Attempt to free all objects */
 	free_kmem_cache_cpus(s);
 
-	for_each_online_node(node) {
+	for_each_memory_node(node) {
 		struct kmem_cache_node *n = get_node(s, node);
 
 		n->nr_partial -= free_list(s, n, &n->partial);
@@ -2937,7 +2937,7 @@ int kmem_cache_shrink(struct kmem_cache 
 	if (!scratch)
 		return -ENOMEM;
 
-	for_each_online_node(node)
+	for_each_memory_node(node)
 		__kmem_cache_shrink(s, get_node(s, node), scratch);
 
 	kfree(scratch);
@@ -3008,7 +3008,7 @@ int kmem_cache_defrag(int percent, int n
 		scratch = kmalloc(sizeof(struct list_head) * s->objects,
 								GFP_KERNEL);
 		if (node == -1) {
-			for_each_online_node(node)
+			for_each_memory_node(node)
 				pages += __kmem_cache_defrag(s, percent,
 							node, scratch);
 		} else
@@ -3392,7 +3392,7 @@ static unsigned long validate_slab_cache
 	unsigned long count = 0;
 
 	flush_all(s);
-	for_each_online_node(node) {
+	for_each_memory_node(node) {
 		struct kmem_cache_node *n = get_node(s, node);
 
 		count += validate_slab_node(s, n);
@@ -3611,7 +3611,7 @@ static int list_locations(struct kmem_ca
 	/* Push back cpu slabs */
 	flush_all(s);
 
-	for_each_online_node(node) {
+	for_each_memory_node(node) {
 		struct kmem_cache_node *n = get_node(s, node);
 		unsigned long flags;
 		struct page *page;
@@ -3723,7 +3723,7 @@ static unsigned long slab_objects(struct
 		}
 	}
 
-	for_each_online_node(node) {
+	for_each_memory_node(node) {
 		struct kmem_cache_node *n = get_node(s, node);
 
 		if (flags & SO_PARTIAL) {
@@ -3751,7 +3751,7 @@ static unsigned long slab_objects(struct
 
 	x = sprintf(buf, "%lu", total);
 #ifdef CONFIG_NUMA
-	for_each_online_node(node)
+	for_each_memory_node(node)
 		if (nodes[node])
 			x += sprintf(buf + x, " N%d=%lu",
 					node, nodes[node]);
@@ -3772,7 +3772,7 @@ static int any_slab_objects(struct kmem_
 			return 1;
 	}
 
-	for_each_online_node(node) {
+	for_each_memory_node(node) {
 		struct kmem_cache_node *n = get_node(s, node);
 
 		if (n && (n->nr_partial || atomic_read(&n->nr_slabs)))

-- 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 16+ messages
2007-06-18 19:19 [patch 00/10] NUMA: Memoryless Node support V1 clameter
2007-06-18 19:19 ` [patch 01/10] Memoryless nodes: Fix GFP_THISNODE behavior clameter
2007-06-18 19:19 ` [patch 02/10] NUMA: Introduce node_memory_map clameter
2007-06-18 19:19 ` [patch 03/10] Fix MPOL_INTERLEAVE behavior for memoryless nodes clameter
2007-06-18 19:20 ` [patch 04/10] OOM: use the node_memory_map instead of constructing one on the fly clameter
2007-06-18 19:20 ` [patch 05/10] Memoryless Nodes: No need for kswapd clameter
2007-06-18 19:20 ` [patch 06/10] Memoryless Node: Slab support clameter
2007-06-18 19:20 ` clameter [this message]
2007-06-20 14:10   ` [patch 07/10] Memoryless nodes: SLUB support Lee Schermerhorn
2007-06-20 16:53     ` Christoph Lameter
2007-06-20 17:17       ` Lee Schermerhorn
2007-06-18 19:20 ` [patch 08/10] Uncached allocator: Handle memoryless nodes clameter
2007-06-19  6:59   ` Jes Sorensen
2007-06-18 19:20 ` [patch 09/10] Memoryless node: Allow profiling data to fall back to other nodes clameter
2007-06-18 19:20 ` [patch 10/10] Memoryless nodes: Update memory policy and page migration clameter
2007-06-19 18:48 ` [patch 00/10] NUMA: Memoryless Node support V1 Andrew Morton
