* [RFC][PATCH 1/4] hugetlb: search harder for memory in alloc_fresh_huge_page()
@ 2007-08-09 0:47 Nishanth Aravamudan
2007-08-09 0:49 ` [RFC][PATCH 2/4] hugetlb: fix pool allocation with empty nodes Nishanth Aravamudan
From: Nishanth Aravamudan @ 2007-08-09 0:47 UTC (permalink / raw)
To: clameter; +Cc: anton, lee.schermerhorn, wli, linux-mm
Currently, alloc_fresh_huge_page() returns failure when it is unable to
allocate a huge page on the current node, as specified by its custom
interleave variable. The callers of this function, though, assume that a
failure in alloc_fresh_huge_page() means no hugepages can be allocated
anywhere on the system. This is not necessarily the case: on an uneven
NUMA system, for instance, the attempt may land on a node with less
memory and fail, while plenty of free memory remains on the other nodes.
To correct this, make alloc_fresh_huge_page() search through all online
nodes before deciding that no hugepages can be allocated. Add a helper
function that does the actual per-node allocation. Also, even though the
newly enforced __GFP_THISNODE semantics mean the allocation will not go
off-node, still use page_to_nid() when updating the counters so the
accounting cannot be thrown off.
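In outline, the reworked allocation path looks like the sketch below
(condensed from the diff that follows; the compound-page destructor setup
and the hugetlb_lock accounting done by the per-node helper are elided
here):

/*
 * Try a single node only: __GFP_THISNODE keeps the allocation from
 * falling back to other nodes' zonelists.
 */
static struct page *alloc_fresh_huge_page_node(int nid)
{
	return alloc_pages_node(nid,
		htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|__GFP_NOWARN,
		HUGETLB_PAGE_ORDER);
}

static int alloc_fresh_huge_page(void)
{
	static int nid = -1;	/* round-robin cursor, persists across calls */
	struct page *page;
	int start_nid, next_nid;

	if (nid < 0)
		nid = first_node(node_online_map);
	start_nid = nid;

	do {
		page = alloc_fresh_huge_page_node(nid);
		/*
		 * Advance the cursor even on success so the next caller
		 * tries the next online node; wrap around at MAX_NUMNODES.
		 */
		next_nid = next_node(nid, node_online_map);
		if (next_nid == MAX_NUMNODES)
			next_nid = first_node(node_online_map);
		nid = next_nid;
	} while (!page && nid != start_nid);	/* stop after one full pass */

	return page != NULL;
}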
Tested on 4-node ppc64 (2 memoryless nodes), 2-node IA64, 4-node x86
(NUMAQ), !NUMA x86
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
---
with just Christoph's patches, on a 4-node ppc64 with 2 memoryless nodes:
Trying to clear the hugetlb pool
Done. 0 free
Trying to resize the pool to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 75
Node 0 HugePages_Free: 25
Done. Initially 100 free
Trying to resize the pool to 200
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 150
Node 0 HugePages_Free: 50
Done. 200 free
Trying to resize the pool back to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 100
Node 0 HugePages_Free: 0
Done. 100 free
with this patch on top (__GFP_THISNODE forces allocations to stay
on-node, and thus the pool stays balanced):
Trying to clear the hugetlb pool
Done. 0 free
Trying to resize the pool to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 50
Node 0 HugePages_Free: 50
Done. Initially 100 free
Trying to resize the pool to 200
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 100
Node 0 HugePages_Free: 100
Done. 200 free
Trying to resize the pool back to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 100
Node 0 HugePages_Free: 0
Done. 100 free
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d7ca59d..7f6ab1b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -101,26 +101,13 @@ static void free_huge_page(struct page *page)
spin_unlock(&hugetlb_lock);
}
-static int alloc_fresh_huge_page(void)
+static struct page *alloc_fresh_huge_page_node(int nid)
{
- static int prev_nid;
struct page *page;
- int nid;
-
- /*
- * Copy static prev_nid to local nid, work on that, then copy it
- * back to prev_nid afterwards: otherwise there's a window in which
- * a racer might pass invalid nid MAX_NUMNODES to alloc_pages_node.
- * But we don't need to use a spin_lock here: it really doesn't
- * matter if occasionally a racer chooses the same nid as we do.
- */
- nid = next_node(prev_nid, node_online_map);
- if (nid == MAX_NUMNODES)
- nid = first_node(node_online_map);
- prev_nid = nid;
- page = alloc_pages_node(nid, htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
- HUGETLB_PAGE_ORDER);
+ page = alloc_pages_node(nid,
+ htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|__GFP_NOWARN,
+ HUGETLB_PAGE_ORDER);
if (page) {
set_compound_page_dtor(page, free_huge_page);
spin_lock(&hugetlb_lock);
@@ -128,9 +115,45 @@ static int alloc_fresh_huge_page(void)
nr_huge_pages_node[page_to_nid(page)]++;
spin_unlock(&hugetlb_lock);
put_page(page); /* free it into the hugepage allocator */
- return 1;
}
- return 0;
+
+ return page;
+}
+
+static int alloc_fresh_huge_page(void)
+{
+ static int nid = -1;
+ struct page *page;
+ int start_nid;
+ int next_nid;
+ int ret = 0;
+
+ if (nid < 0)
+ nid = first_node(node_online_map);
+ start_nid = nid;
+
+ do {
+ page = alloc_fresh_huge_page_node(nid);
+ if (page)
+ ret = 1;
+ /*
+ * Use a helper variable to find the next node and then
+ * copy it back to nid afterwards: otherwise there's
+ * a window in which a racer might pass invalid nid
+ * MAX_NUMNODES to alloc_pages_node. But we don't need
+ * to use a spin_lock here: it really doesn't matter if
+ * occasionally a racer chooses the same nid as we do.
+ * Move nid forward in the mask even if we just
+ * successfully allocated a hugepage so that the next
+ * caller gets hugepages on the next node.
+ */
+ next_nid = next_node(nid, node_online_map);
+ if (next_nid == MAX_NUMNODES)
+ next_nid = first_node(node_online_map);
+ nid = next_nid;
+ } while (!page && nid != start_nid);
+
+ return ret;
}
static struct page *alloc_huge_page(struct vm_area_struct *vma,
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
* [RFC][PATCH 2/4] hugetlb: fix pool allocation with empty nodes
2007-08-09 0:47 [RFC][PATCH 1/4] hugetlb: search harder for memory in alloc_fresh_huge_page() Nishanth Aravamudan
@ 2007-08-09 0:49 ` Nishanth Aravamudan
2007-08-09 0:51 ` [RFC][PATCH 3/4] hugetlb: interleave dequeueing of huge pages Nishanth Aravamudan
From: Nishanth Aravamudan @ 2007-08-09 0:49 UTC (permalink / raw)
To: clameter; +Cc: anton, lee.schermerhorn, wli, linux-mm
[V10] hugetlb: fix pool allocation with empty nodes
Anton found a problem with the hugetlb pool allocation when some nodes
have no memory (http://marc.info/?l=linux-mm&m=118133042025995&w=2). Lee
worked on versions that tried to fix it, but none were accepted.
Christoph has created a set of patches which allow GFP_THISNODE
allocations to fail if the node has no memory, and which export a
nodemask indicating which nodes have memory. Simply interleave across
that nodemask rather than the online nodemask.
---
Note: given that alloc_fresh_huge_page() now interleaves using
GFP_THISNODE, this patch may no longer be strictly necessary. Without
it, we would simply keep attempting (and failing) allocations on the
memoryless nodes rather than leaving them out of the interleave
altogether.
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7f6ab1b..7ca37f6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -129,7 +129,7 @@ static int alloc_fresh_huge_page(void)
int ret = 0;
if (nid < 0)
- nid = first_node(node_online_map);
+ nid = first_node(node_states[N_HIGH_MEMORY]);
start_nid = nid;
do {
@@ -147,9 +147,9 @@ static int alloc_fresh_huge_page(void)
* successfully allocated a hugepage so that the next
* caller gets hugepages on the next node.
*/
- next_nid = next_node(nid, node_online_map);
+ next_nid = next_node(nid, node_states[N_HIGH_MEMORY]);
if (next_nid == MAX_NUMNODES)
- next_nid = first_node(node_online_map);
+ next_nid = first_node(node_states[N_HIGH_MEMORY]);
nid = next_nid;
} while (!page && nid != start_nid);
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
* [RFC][PATCH 3/4] hugetlb: interleave dequeueing of huge pages
2007-08-09 0:49 ` [RFC][PATCH 2/4] hugetlb: fix pool allocation with empty nodes Nishanth Aravamudan
@ 2007-08-09 0:51 ` Nishanth Aravamudan
2007-08-09 0:52 ` [RFC][PATCH 4/4] hugetlb: add per-node nr_hugepages sysfs attribute Nishanth Aravamudan
From: Nishanth Aravamudan @ 2007-08-09 0:51 UTC (permalink / raw)
To: clameter; +Cc: anton, lee.schermerhorn, wli, linux-mm
Currently, when shrinking the hugetlb pool, we free all of the pages on
node 0, then all the pages on node 1, and so on. Instead, interleave the
freeing over the nodes with memory. If some particular node should be
cleared first, the to-be-introduced sysfs allocator can be used for
finer-grained control. This also helps keep the pool balanced as it is
resized at run-time.
Tested on 4-node ppc64 (2 memoryless nodes), 2-node IA64, 4-node x86
(NUMAQ), !NUMA x86.
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
---
Christoph's patches + patches 1,2/4 on a 4-node ppc64 with 2 memoryless nodes:
Trying to clear the hugetlb pool
Done. 0 free
Trying to resize the pool to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 50
Node 0 HugePages_Free: 50
Done. Initially 100 free
Trying to resize the pool to 200
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 100
Node 0 HugePages_Free: 100
Done. 200 free
Trying to resize the pool back to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 100
Node 0 HugePages_Free: 0
Done. 100 free
Christoph's patches + patches 1,2,3/4:
Trying to clear the hugetlb pool
Done. 0 free
Trying to resize the pool to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 50
Node 0 HugePages_Free: 50
Done. Initially 100 free
Trying to resize the pool to 200
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 100
Node 0 HugePages_Free: 100
Done. 200 free
Trying to resize the pool back to 100
Node 3 HugePages_Free: 0
Node 2 HugePages_Free: 0
Node 1 HugePages_Free: 50
Node 0 HugePages_Free: 50
Done. 100 free
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7ca37f6..8139568 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -66,11 +66,56 @@ static void enqueue_huge_page(struct page *page)
free_huge_pages_node[nid]++;
}
-static struct page *dequeue_huge_page(struct vm_area_struct *vma,
+static struct page *dequeue_huge_page_node(int nid)
+{
+ struct page *page;
+
+ page = list_entry(hugepage_freelists[nid].next,
+ struct page, lru);
+ list_del(&page->lru);
+ free_huge_pages--;
+ free_huge_pages_node[nid]--;
+ return page;
+}
+
+static struct page *dequeue_huge_page(void)
+{
+ static int nid = -1;
+ struct page *page = NULL;
+ int start_nid;
+ int next_nid;
+
+ if (nid < 0)
+ nid = first_node(node_states[N_HIGH_MEMORY]);
+ start_nid = nid;
+
+ do {
+ if (!list_empty(&hugepage_freelists[nid]))
+ page = dequeue_huge_page_node(nid);
+ /*
+ * Use a helper variable to find the next node and then
+ * copy it back to nid afterwards: otherwise there's
+ * a window in which a racer might pass invalid nid
+ * MAX_NUMNODES to dequeue_huge_page_node. But we don't
+ * need to use a spin_lock here: it really doesn't
+ * matter if occasionally a racer chooses the same nid
+ * as we do. Move nid forward in the mask even if we
+ * just successfully dequeued a hugepage so that the
+ * next caller frees hugepages on the next node.
+ */
+ next_nid = next_node(nid, node_states[N_HIGH_MEMORY]);
+ if (next_nid == MAX_NUMNODES)
+ next_nid = first_node(node_states[N_HIGH_MEMORY]);
+ nid = next_nid;
+ } while (!page && nid != start_nid);
+
+ return page;
+}
+
+static struct page *dequeue_huge_page_vma(struct vm_area_struct *vma,
unsigned long address)
{
int nid;
- struct page *page = NULL;
struct zonelist *zonelist = huge_zonelist(vma, address,
htlb_alloc_mask);
struct zone **z;
@@ -79,15 +124,10 @@ static struct page *dequeue_huge_page(struct vm_area_struct *vma,
nid = zone_to_nid(*z);
if (cpuset_zone_allowed_softwall(*z, htlb_alloc_mask) &&
!list_empty(&hugepage_freelists[nid])) {
- page = list_entry(hugepage_freelists[nid].next,
- struct page, lru);
- list_del(&page->lru);
- free_huge_pages--;
- free_huge_pages_node[nid]--;
- break;
+ return dequeue_huge_page_node(nid);
}
}
- return page;
+ return NULL;
}
static void free_huge_page(struct page *page)
@@ -167,7 +207,7 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
else if (free_huge_pages <= resv_huge_pages)
goto fail;
- page = dequeue_huge_page(vma, addr);
+ page = dequeue_huge_page_vma(vma, addr);
if (!page)
goto fail;
@@ -275,7 +315,7 @@ static unsigned long set_max_huge_pages(unsigned long count)
count = max(count, resv_huge_pages);
try_to_free_low(count);
while (count < nr_huge_pages) {
- struct page *page = dequeue_huge_page(NULL, 0);
+ struct page *page = dequeue_huge_page();
if (!page)
break;
update_and_free_page(page);
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
* [RFC][PATCH 4/4] hugetlb: add per-node nr_hugepages sysfs attribute
2007-08-09 0:51 ` [RFC][PATCH 3/4] hugetlb: interleave dequeueing of huge pages Nishanth Aravamudan
@ 2007-08-09 0:52 ` Nishanth Aravamudan
2007-08-13 17:55 ` Dave Hansen
From: Nishanth Aravamudan @ 2007-08-09 0:52 UTC (permalink / raw)
To: clameter; +Cc: anton, lee.schermerhorn, wli, linux-mm
Allow specifying the number of hugepages to allocate on a particular
node. Our current global sysctl will try its best to put hugepages
equally on each node, but that may not always be what is desired. This
allows the admin to control the layout of hugepage allocation at a finer
level (while not breaking the existing interface). Have the sysfs node
registration and unregistration functions call into hugetlb to add and
remove the nr_hugepages attribute, which is a no-op if !NUMA or !HUGETLB.
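One subtlety in the shrink path deserves a note: when lowering a node's
count, that node must keep enough free huge pages to cover whatever part
of the global reservation the other nodes cannot satisfy. A condensed
sketch of that computation, lifted from the write handler in the diff
below (the helper name is only for illustration; the patch does this
inline):

/*
 * Lower bound on this node's pool when shrinking (sketch only).
 *
 * Worked example: resv_huge_pages = 30, free_huge_pages = 40, and this
 * node holds 25 of those free pages, so the other nodes hold 15.  They
 * can cover only 15 of the 30 reserved pages, so even if the admin asks
 * for 0 pages here, the node's pool is not shrunk below
 * max(30 - 15, 0) = 15 pages.
 */
static unsigned long per_node_shrink_target(int nid,
					    unsigned long nr_huge_pages_req)
{
	unsigned long free_on_other_nodes =
			free_huge_pages - free_huge_pages_node[nid];

	if (free_on_other_nodes >= resv_huge_pages)
		/* the other nodes can satisfy the reserve by themselves */
		return nr_huge_pages_req;

	/* this node must keep some free pages around for the reserve */
	return max(resv_huge_pages - free_on_other_nodes, nr_huge_pages_req);
}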
Tested on 4-node ppc64 (2 memoryless nodes), 2-node IA64, 4-node x86
(NUMAQ), !NUMA x86.
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
diff --git a/drivers/base/node.c b/drivers/base/node.c
index cae346e..c9d531f 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -151,6 +151,7 @@ int register_node(struct node *node, int num, struct node *parent)
sysdev_create_file(&node->sysdev, &attr_meminfo);
sysdev_create_file(&node->sysdev, &attr_numastat);
sysdev_create_file(&node->sysdev, &attr_distance);
+ hugetlb_register_node(node);
}
return error;
}
@@ -168,6 +169,7 @@ void unregister_node(struct node *node)
sysdev_remove_file(&node->sysdev, &attr_meminfo);
sysdev_remove_file(&node->sysdev, &attr_numastat);
sysdev_remove_file(&node->sysdev, &attr_distance);
+ hugetlb_unregister_node(node);
sysdev_unregister(&node->sysdev);
}
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e6a71c8..aad43e0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -6,7 +6,9 @@
#ifdef CONFIG_HUGETLB_PAGE
#include <linux/mempolicy.h>
+#include <linux/node.h>
#include <linux/shm.h>
+#include <linux/sysdev.h>
#include <asm/tlbflush.h>
struct ctl_table;
@@ -25,6 +27,13 @@ void __unmap_hugepage_range(struct vm_area_struct *, unsigned long, unsigned lon
int hugetlb_prefault(struct address_space *, struct vm_area_struct *);
int hugetlb_report_meminfo(char *);
int hugetlb_report_node_meminfo(int, char *);
+#ifdef CONFIG_NUMA
+int hugetlb_register_node(struct node *);
+void hugetlb_unregister_node(struct node *);
+#else
+#define hugetlb_register_node(node) 0
+#define hugetlb_unregister_node(node) ((void)0)
+#endif
unsigned long hugetlb_total_pages(void);
int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long address, int write_access);
@@ -116,6 +125,8 @@ static inline unsigned long hugetlb_total_pages(void)
#define unmap_hugepage_range(vma, start, end) BUG()
#define hugetlb_report_meminfo(buf) 0
#define hugetlb_report_node_meminfo(n, buf) 0
+#define hugetlb_register_node(node) 0
+#define hugetlb_unregister_node(node) ((void)0)
#define follow_huge_pmd(mm, addr, pmd, write) NULL
#define prepare_hugepage_range(addr,len,pgoff) (-EINVAL)
#define pmd_huge(x) 0
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8139568..6e4311b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -261,12 +261,11 @@ static unsigned int cpuset_mems_nr(unsigned int *array)
return nr;
}
-#ifdef CONFIG_SYSCTL
-static void update_and_free_page(struct page *page)
+static void update_and_free_page(int nid, struct page *page)
{
int i;
nr_huge_pages--;
- nr_huge_pages_node[page_to_nid(page)]--;
+ nr_huge_pages_node[nid]--;
for (i = 0; i < (HPAGE_SIZE / PAGE_SIZE); i++) {
page[i].flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced |
1 << PG_dirty | 1 << PG_active | 1 << PG_reserved |
@@ -278,30 +277,42 @@ static void update_and_free_page(struct page *page)
}
#ifdef CONFIG_HIGHMEM
+static void try_to_free_low_node(int nid, unsigned long count)
+{
+ struct page *page, *next;
+ list_for_each_entry_safe(page, next, &hugepage_freelists[nid], lru) {
+ if (PageHighMem(page))
+ continue;
+ list_del(&page->lru);
+ update_and_free_page(nid, page);
+ free_huge_pages--;
+ free_huge_pages_node[nid]--;
+ if (count >= nr_huge_pages_node[nid])
+ return;
+ }
+}
+
static void try_to_free_low(unsigned long count)
{
int i;
for (i = 0; i < MAX_NUMNODES; ++i) {
- struct page *page, *next;
- list_for_each_entry_safe(page, next, &hugepage_freelists[i], lru) {
- if (PageHighMem(page))
- continue;
- list_del(&page->lru);
- update_and_free_page(page);
- free_huge_pages--;
- free_huge_pages_node[page_to_nid(page)]--;
- if (count >= nr_huge_pages)
- return;
- }
+ try_to_free_low_node(i, count);
+ if (count >= nr_huge_pages)
+ return;
}
}
#else
+static inline void try_to_free_low_node(int nid, unsigned long count)
+{
+}
+
static inline void try_to_free_low(unsigned long count)
{
}
#endif
+#ifdef CONFIG_SYSCTL
static unsigned long set_max_huge_pages(unsigned long count)
{
while (count > nr_huge_pages) {
@@ -318,7 +329,7 @@ static unsigned long set_max_huge_pages(unsigned long count)
struct page *page = dequeue_huge_page();
if (!page)
break;
- update_and_free_page(page);
+ update_and_free_page(page_to_nid(page), page);
}
spin_unlock(&hugetlb_lock);
return nr_huge_pages;
@@ -369,6 +380,67 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
nid, free_huge_pages_node[nid]);
}
+#ifdef CONFIG_NUMA
+static ssize_t hugetlb_read_nr_hugepages_node(struct sys_device *dev,
+ char *buf)
+{
+ return sprintf(buf, "%u\n", nr_huge_pages_node[dev->id]);
+}
+
+static ssize_t hugetlb_write_nr_hugepages_node(struct sys_device *dev,
+ const char *buf, size_t count)
+{
+ int nid = dev->id;
+ unsigned long target;
+ unsigned long free_on_other_nodes;
+ unsigned long nr_huge_pages_req = simple_strtoul(buf, NULL, 10);
+
+ while (nr_huge_pages_req > nr_huge_pages_node[nid]) {
+ if (!alloc_fresh_huge_page_node(nid))
+ return count;
+ }
+ if (nr_huge_pages_req >= nr_huge_pages_node[nid])
+ return count;
+
+ /* need to ensure that our counts are accurate */
+ spin_lock(&hugetlb_lock);
+ free_on_other_nodes = free_huge_pages - free_huge_pages_node[nid];
+ if (free_on_other_nodes >= resv_huge_pages) {
+ /* other nodes can satisfy reserve */
+ target = nr_huge_pages_req;
+ } else {
+ /* this node needs some free to satisfy reserve */
+ target = max((resv_huge_pages - free_on_other_nodes),
+ nr_huge_pages_req);
+ }
+ try_to_free_low_node(nid, target);
+ while (target < nr_huge_pages_node[nid]) {
+ struct page *page = dequeue_huge_page_node(nid);
+ if (!page)
+ break;
+ update_and_free_page(nid, page);
+ }
+ spin_unlock(&hugetlb_lock);
+
+ return count;
+}
+
+static SYSDEV_ATTR(nr_hugepages, S_IRUGO | S_IWUSR,
+ hugetlb_read_nr_hugepages_node,
+ hugetlb_write_nr_hugepages_node);
+
+int hugetlb_register_node(struct node *node)
+{
+ return sysdev_create_file(&node->sysdev, &attr_nr_hugepages);
+}
+
+void hugetlb_unregister_node(struct node *node)
+{
+ sysdev_remove_file(&node->sysdev, &attr_nr_hugepages);
+}
+
+#endif
+
/* Return the number pages of memory we physically have, in PAGE_SIZE units. */
unsigned long hugetlb_total_pages(void)
{
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
* Re: [RFC][PATCH 4/4] hugetlb: add per-node nr_hugepages sysfs attribute
2007-08-09 0:52 ` [RFC][PATCH 4/4] hugetlb: add per-node nr_hugepages sysfs attribute Nishanth Aravamudan
@ 2007-08-13 17:55 ` Dave Hansen
2007-08-22 21:17 ` Nishanth Aravamudan
From: Dave Hansen @ 2007-08-13 17:55 UTC (permalink / raw)
To: Nishanth Aravamudan; +Cc: clameter, anton, lee.schermerhorn, wli, linux-mm
On Wed, 2007-08-08 at 17:52 -0700, Nishanth Aravamudan wrote:
>
> +#ifdef CONFIG_NUMA
> +int hugetlb_register_node(struct node *);
> +void hugetlb_unregister_node(struct node *);
> +#else
> +#define hugetlb_register_node(node) 0
> +#define hugetlb_unregister_node(node) ((void)0)
> +#endif
This is to keep someone from doing:
ret = hugetlb_unregister_node(node);
?
I think it's a little more standard to do:
#define hugetlb_unregister_node(node) do {} while(0)
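For illustration, the two stub styles side by side (a sketch only; the
example caller below is hypothetical):

struct node;	/* forward declaration, for the sketch only */

/*
 * The two forms under discussion:
 *
 *   #define hugetlb_unregister_node(node)  ((void)0)            (as posted)
 *   #define hugetlb_unregister_node(node)  do { } while (0)     (suggested)
 *
 * Both reject "ret = hugetlb_unregister_node(node);", since neither
 * yields a value that can be assigned.  The do { } while (0) form is
 * simply the conventional kernel idiom for a statement-like no-op: it
 * demands the trailing semicolon and expands to a single well-formed
 * statement anywhere a function call would be legal.
 */
#define hugetlb_unregister_node(node) do { } while (0)

static inline void hypothetical_teardown(struct node *node)
{
	if (node)
		hugetlb_unregister_node(node);	/* still one clean statement */
}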
-- Dave
* Re: [RFC][PATCH 4/4] hugetlb: add per-node nr_hugepages sysfs attribute
2007-08-13 17:55 ` Dave Hansen
@ 2007-08-22 21:17 ` Nishanth Aravamudan
From: Nishanth Aravamudan @ 2007-08-22 21:17 UTC (permalink / raw)
To: Dave Hansen; +Cc: clameter, anton, lee.schermerhorn, wli, linux-mm
On 13.08.2007 [10:55:46 -0700], Dave Hansen wrote:
> On Wed, 2007-08-08 at 17:52 -0700, Nishanth Aravamudan wrote:
> >
> > +#ifdef CONFIG_NUMA
> > +int hugetlb_register_node(struct node *);
> > +void hugetlb_unregister_node(struct node *);
> > +#else
> > +#define hugetlb_register_node(node) 0
> > +#define hugetlb_unregister_node(node) ((void)0)
> > +#endif
>
> This is to keep someone from doing:
>
> ret = hugetlb_unregister_node(node);
>
> ?
>
> I think it's a little more standard to do:
>
> #define hugetlb_unregister_node(node) do {} while(0)
That's a good point. Now that I'm back from vacation, I'll make this
adjustment.
Thanks,
Nish
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center