* [00/11] Virtualizable Compound Page Support V5
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm
Allocations of larger pages are not reliable in Linux. If larger
pages have to be allocated then one faces the choice of either implementing
a graceful fallback or using vmalloc, with its performance penalty due
to the use of page table mappings. Virtualizable Compound Pages offer
a simple way out of this dilemma.

A virtualizable compound allocation first attempts to satisfy the request
with physically contiguous memory through a traditional compound page.
Only if that is not possible is virtually contiguous memory used for
the page.
This has two advantages:

1. Current uses of vmalloc can be converted to requests for virtualizable
   compounds instead (see the conversion sketch below). In most cases
   physically contiguous memory can then be used, which avoids the vmalloc
   performance penalty. See, for example, the e1000 driver patch.

2. Uses of higher order allocations (stacks, buffers etc.) can be
   converted to use virtualizable compounds instead. Physically contiguous
   memory will still be used for those higher order allocations in general,
   but the system can degrade to the use of vmalloc should memory
   become heavily fragmented.
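A minimal conversion sketch (hypothetical buffer code; __alloc_vcompound()
and __free_vcompound() are the interfaces introduced in patch [04/11]):

	/* Before: always virtually mapped, always pays the mapping penalty */
	buf = vmalloc(size);
	...
	vfree(buf);

	/* After: physically contiguous when possible, vmalloc only as fallback */
	buf = __alloc_vcompound(GFP_KERNEL, get_order(size));
	...
	__free_vcompound(buf);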
There is a compile time option to switch on the fallback unconditionally
for testing purposes. Virtually mapped memory may behave differently,
and the CONFIG_VIRTUALIZE_ALWAYS option can be used to ensure that the code
is tested to deal with virtualized compound pages.
This patchset first contains the core pieces that make virtualizable
compound pages possible, and then a set of example uses of virtualizable
compound pages.
V4->V5:
- Cleanup various portions
- Simplify code
- Complete documentation
- Limit the number of example uses.
V3->V4:
- Drop fallback for IA64 stack (arches with software TLB handlers
  could get into deep trouble if a TLB entry needs to be installed
  for the very stack that the TLB fault handler needs).
- Drop ehash_lock vcompound patch.
V2->V3:
- Put the code into mm/vmalloc.c and leave the page allocator alone.
- Add a series of examples where virtual compound pages can be used.
- Diffed on top of the page flags and the vmalloc info patches
already in mm.
- Simplify things by omitting some of the more complex code
that used to be in there.
V1->V2:
- Remove some cleanup patches and the SLUB patches from this set.
- Transparent vcompound support through page_address() and
virt_to_head_page().
- Additional use cases.
- Factor the code better for an easier read
- Add configurable stack size.
- Follow up on various suggestions made for V1
RFC->V1:
- Complete support for all compound functions for virtual compound pages
(including the compound_nth_page() necessary for LBS mmap support)
- Fix various bugs
- Fix i386 build
* [01/11] vmalloc: Return page array on vunmap
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm
Make vunmap() return the page array that was passed to vmap(). This is useful
if one has no structure tracking the page array and simply stores the
virtual address returned by vmap() somewhere. The caller may need the
page array only to dispose of it after vunmap().

vfree() can now also be used instead of vunmap(). vfree() will release the
page array after unmapping it. If vfree() is used to free the page
array then the page array must either be

1. allocated via the slab allocator, or

2. allocated via vmalloc, in which case VM_VPAGES must have been set in the
   flags passed to vmap() to specify that a vfree() of the array is needed.
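A sketch of the intended use, for a hypothetical caller that keeps only the
mapped address around:

	struct page **pages = my_page_array;	/* hypothetical, kmalloc'ed */
	void *addr = vmap(pages, count, VM_MAP, PAGE_KERNEL);

	/* ... my_page_array goes out of scope, only addr is stored ... */

	pages = vunmap(addr);		/* get the array back */
	for (i = 0; i < count; i++)
		__free_page(pages[i]);
	kfree(pages);			/* array was kmalloc'ed */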
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
include/linux/vmalloc.h | 2 +-
mm/vmalloc.c | 28 ++++++++++++++++++----------
2 files changed, 19 insertions(+), 11 deletions(-)
Index: linux-2.6/include/linux/vmalloc.h
===================================================================
--- linux-2.6.orig/include/linux/vmalloc.h 2008-04-28 14:34:40.033649949 -0700
+++ linux-2.6/include/linux/vmalloc.h 2008-04-29 16:44:49.273706592 -0700
@@ -50,7 +50,7 @@ extern void vfree(const void *addr);
extern void *vmap(struct page **pages, unsigned int count,
unsigned long flags, pgprot_t prot);
-extern void vunmap(const void *addr);
+extern struct page **vunmap(const void *addr);
extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
unsigned long pgoff);
Index: linux-2.6/mm/vmalloc.c
===================================================================
--- linux-2.6.orig/mm/vmalloc.c 2008-04-28 14:34:40.623748915 -0700
+++ linux-2.6/mm/vmalloc.c 2008-04-29 16:44:49.273706592 -0700
@@ -372,17 +372,18 @@ struct vm_struct *remove_vm_area(const v
return v;
}
-static void __vunmap(const void *addr, int deallocate_pages)
+static struct page **__vunmap(const void *addr, int deallocate_pages)
{
struct vm_struct *area;
+ struct page **pages;
if (!addr)
- return;
+ return NULL;
if ((PAGE_SIZE-1) & (unsigned long)addr) {
printk(KERN_ERR "Trying to vfree() bad address (%p)\n", addr);
WARN_ON(1);
- return;
+ return NULL;
}
area = remove_vm_area(addr);
@@ -390,29 +391,30 @@ static void __vunmap(const void *addr, i
printk(KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
addr);
WARN_ON(1);
- return;
+ return NULL;
}
+ pages = area->pages;
debug_check_no_locks_freed(addr, area->size);
if (deallocate_pages) {
int i;
for (i = 0; i < area->nr_pages; i++) {
- struct page *page = area->pages[i];
+ struct page *page = pages[i];
BUG_ON(!page);
__free_page(page);
}
if (area->flags & VM_VPAGES)
- vfree(area->pages);
+ vfree(pages);
else
- kfree(area->pages);
+ kfree(pages);
}
kfree(area);
- return;
+ return pages;
}
/**
@@ -441,10 +443,10 @@ EXPORT_SYMBOL(vfree);
*
* Must not be called in interrupt context.
*/
-void vunmap(const void *addr)
+struct page **vunmap(const void *addr)
{
BUG_ON(in_interrupt());
- __vunmap(addr, 0);
+ return __vunmap(addr, 0);
}
EXPORT_SYMBOL(vunmap);
@@ -457,6 +459,11 @@ EXPORT_SYMBOL(vunmap);
*
* Maps @count pages from @pages into contiguous kernel virtual
* space.
+ *
+ * The page array may be freed by passing the result of vmap() to
+ * vfree(). In that case the page array must either have been allocated
+ * using kmalloc or via vmalloc. For the vmalloc case VM_VPAGES must
+ * be set in flags.
*/
void *vmap(struct page **pages, unsigned int count,
unsigned long flags, pgprot_t prot)
* [02/11] vcompound: pageflags: Add PageVcompound()
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm
Add a page flag that can be used to figure out whether a compound page was
virtually mapped (virtualized). The mark is necessary because, when freeing
pages, we have to know whether a virtual mapping must be destroyed, and
because the page structs of a virtualized compound are not in sequence.

A separate page flag is only used if we have plenty of available flags
(PAGEFLAGS_EXTENDED). Otherwise no additional flag is needed: PG_swapcache
is combined with PG_compound (similar to the way PageHead() and PageTail()
are encoded).

Overlaying flags has two bad effects:

1. The tests for PageVcompound() become more expensive since multiple
   bits must be tested. There is a potential effect on hot code paths.

2. Vcompound pages cannot be on the LRU since PG_swapcache has
   another meaning for pages on the LRU.
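To illustrate the first point, a sketch of the two tests (simplified, not
the literal patch code):

	/* PAGEFLAGS_EXTENDED: a dedicated bit, a single test */
	if (page->flags & (1L << PG_vcompound))
		...

	/* Overlaid flags: both bits must be set, so both are tested */
	if ((page->flags & ((1L << PG_compound) | (1L << PG_swapcache))) ==
			((1L << PG_compound) | (1L << PG_swapcache)))
		...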
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
include/linux/page-flags.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
Index: linux-2.6/include/linux/page-flags.h
===================================================================
--- linux-2.6.orig/include/linux/page-flags.h 2008-04-28 14:34:39.953650145 -0700
+++ linux-2.6/include/linux/page-flags.h 2008-04-29 16:45:00.481208036 -0700
@@ -86,6 +86,7 @@ enum pageflags {
#ifdef CONFIG_PAGEFLAGS_EXTENDED
PG_head, /* A head page */
PG_tail, /* A tail page */
+ PG_vcompound, /* A virtualized compound page */
#else
PG_compound, /* A compound page */
#endif
@@ -262,6 +263,7 @@ static inline void set_page_writeback(st
*/
__PAGEFLAG(Head, head)
__PAGEFLAG(Tail, tail)
+__PAGEFLAG(Vcompound, vcompound)
static inline int PageCompound(struct page *page)
{
@@ -305,6 +307,20 @@ static inline void __ClearPageTail(struc
page->flags &= ~PG_head_tail_mask;
}
+#define PG_vcompound_mask ((1L << PG_compound) | (1L << PG_swapcache))
+#define PageVcompound(page) ((page->flags & PG_vcompound_mask) \
+ == PG_vcompound_mask)
+
+static inline void __SetPageVcompound(struct page *page)
+{
+ page->flags |= PG_vcompound_mask;
+}
+
+static inline void __ClearPageVcompound(struct page *page)
+{
+ page->flags &= ~PG_vcompound_mask;
+}
+
#endif /* !PAGEFLAGS_EXTENDED */
#endif /* !__GENERATING_BOUNDS_H */
#endif /* PAGE_FLAGS_H */
* [03/11] vmallocinfo: Support display of virtualized compound pages
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm
Add another flag to the vmalloc subsystem to mark vmalloc areas used for
virtualized compound pages, and display "vcompound" for such areas in
/proc/vmallocinfo.
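An entry for a virtualized compound might then look like this in
/proc/vmallocinfo (illustrative only; addresses, size and caller made up):

	0xffffc20000002000-0xffffc20000013000   69632 alloc_vcompound_node+0x8c/0xf0 pages=16 vcompound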
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
include/linux/vmalloc.h | 1 +
mm/vmalloc.c | 3 +++
2 files changed, 4 insertions(+)
Index: linux-2.6.25-rc8-mm2/include/linux/vmalloc.h
===================================================================
--- linux-2.6.25-rc8-mm2.orig/include/linux/vmalloc.h 2008-04-14 20:01:25.295741503 -0700
+++ linux-2.6.25-rc8-mm2/include/linux/vmalloc.h 2008-04-14 20:01:27.465740891 -0700
@@ -12,6 +12,7 @@ struct vm_area_struct;
#define VM_MAP 0x00000004 /* vmap()ed pages */
#define VM_USERMAP 0x00000008 /* suitable for remap_vmalloc_range */
#define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */
+#define VM_VCOMPOUND 0x00000020 /* Virtualized Compound Page */
/* bits [20..32] reserved for arch specific ioremap internals */
/*
Index: linux-2.6.25-rc8-mm2/mm/vmalloc.c
===================================================================
--- linux-2.6.25-rc8-mm2.orig/mm/vmalloc.c 2008-04-14 20:01:25.295741503 -0700
+++ linux-2.6.25-rc8-mm2/mm/vmalloc.c 2008-04-14 20:01:27.485750108 -0700
@@ -972,6 +972,9 @@ static int s_show(struct seq_file *m, vo
if (v->flags & VM_VPAGES)
seq_printf(m, " vpages");
+ if (v->flags & VM_VCOMPOUND)
+ seq_printf(m, " vcompound");
+
seq_putc(m, '\n');
return 0;
}
* [04/11] vcompound: Core piece for virtualizable compound page allocation
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm
Add support functions to allow the creation and destruction of virtualizable
compound pages. A virtualizable compound page is either allocated as a compound
page (using physically contiguous memory) or as a virtualized compound page
(using virtually contiguous memory).

Virtualized compound pages are in many ways similar to regular compound pages:

1. If PageTail(page) is true then page->first_page points to the head page.
   compound_head(page) also works for virtualized compound pages.

2. page[1].lru.prev contains the order of the virtualized compound page.
   However, the page structs of virtualized compound pages are not in order,
   so page[1] means the second page belonging to the virtual compound mapping,
   which is not necessarily the page physically following the head page.

There is a special function

	vcompound_head_page(address)

(similar to virt_to_head_page()) that can be used to determine the head page
from a virtual address.

Freeing of virtualized compound pages is supported from both preemptible and
non-preemptible context (unmapping requires a preemptible context; the free
is simply deferred to a workqueue if we are not in a preemptible context).
However, allocation of virtualized compound pages must at this stage be done
from preemptible contexts only.
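A usage sketch (hypothetical caller):

	/* order-4 allocation: physically contiguous if possible */
	struct page *page = alloc_vcompound(GFP_KERNEL, 4);
	if (!page)
		return -ENOMEM;

	/* works for both cases: linear or vmalloc address */
	void *addr = page_address(page);

	/* ... use the 16 pages at addr ... */

	free_vcompound(page);	/* defers to a workqueue if non-preemptible */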
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
include/linux/vmalloc.h | 19 +++
mm/vmalloc.c | 238 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 257 insertions(+)
Index: linux-2.6/include/linux/vmalloc.h
===================================================================
--- linux-2.6.orig/include/linux/vmalloc.h 2008-04-29 20:23:50.016939945 -0700
+++ linux-2.6/include/linux/vmalloc.h 2008-04-29 20:23:50.685509617 -0700
@@ -86,6 +86,25 @@ extern struct vm_struct *alloc_vm_area(s
extern void free_vm_area(struct vm_struct *area);
/*
+ * Support for virtualizable compound pages.
+ *
+ * Calls to alloc_vcompound will result in the allocation of normal compound
+ * pages unless memory is fragmented. If insufficient physical linear memory
+ * is available then a virtual contiguous area of memory will be created
+ * using the vmalloc functionality to allocate a virtualized compound page.
+ */
+struct page *alloc_vcompound_node(int node, gfp_t flags, int order);
+static inline struct page *alloc_vcompound(gfp_t flags, int order)
+{
+ return alloc_vcompound_node(-1, flags, order);
+}
+
+void free_vcompound(struct page *);
+void *__alloc_vcompound(gfp_t flags, int order);
+void __free_vcompound(void *addr);
+struct page *vcompound_head_page(const void *x);
+
+/*
* Internals. Dont't use..
*/
extern rwlock_t vmlist_lock;
Index: linux-2.6/mm/vmalloc.c
===================================================================
--- linux-2.6.orig/mm/vmalloc.c 2008-04-29 20:23:50.016939945 -0700
+++ linux-2.6/mm/vmalloc.c 2008-04-29 21:27:32.237026026 -0700
@@ -986,3 +986,241 @@ const struct seq_operations vmalloc_op =
};
#endif
+/*
+ * Virtualized Compound Pages are used to fall back to order 0 allocations if
+ * large linear mappings are not available. A virtualized compound page is
+ * provided using a series of order 0 allocations that have been stringed
+ * together using vmap().
+ *
+ * Virtualized Compound Pages are formatted according to compound page
+ * conventions. I.e. following page->first_page (if PageTail(page) is set)
+ * can be used to determine the head page.
+ *
+ * The order of the allocation is stored in page[1].lru.prev. However, the
+ * pages are not in sequence. In order to determine the second page the
+ * vmstruct structure needs to be located. Then the page array can be
+ * used to find the remaining pages.
+ */
+
+/*
+ * Determine the appropriate page struct given a virtual address
+ * (including vmalloced areas).
+ *
+ * Return the head page if this is a compound page.
+ *
+ * Cannot be inlined since VMALLOC_START and VMALLOC_END may contain
+ * complex calculations that depend on multiple arch includes or
+ * even variables.
+ */
+struct page *vcompound_head_page(const void *x)
+{
+ unsigned long addr = (unsigned long)x;
+ struct page *page;
+
+ if (unlikely(is_vmalloc_addr(x)))
+ page = vmalloc_to_page(x);
+ else
+ page = virt_to_page(addr);
+
+ return compound_head(page);
+}
+EXPORT_SYMBOL(vcompound_head_page);
+
+static void __vcompound_free(void *addr)
+{
+ struct page **pages;
+ int i;
+ int order;
+ struct page *head;
+
+ pages = vunmap(addr);
+ order = (unsigned long)pages[1]->lru.prev;
+
+ /*
+ * The first page will have zero refcount since it maintains state
+ * for the virtualized compound.
+ */
+ head = pages[0];
+ set_page_address(head, NULL);
+ __ClearPageVcompound(head);
+ __ClearPageHead(head);
+ free_hot_page(head);
+
+ for (i = 1; i < (1 << order); i++) {
+ struct page *page = pages[i];
+
+ BUG_ON(!PageTail(page));
+ set_page_address(page, NULL);
+ __ClearPageTail(page);
+ __free_page(page);
+ }
+ kfree(pages);
+}
+
+static void vcompound_free_work(struct work_struct *w)
+{
+ __vcompound_free((void *)w);
+}
+
+static void vcompound_free(void *addr, struct page *page)
+{
+ struct work_struct *w = addr;
+
+ BUG_ON((!PageVcompound(page) || !PageHead(page)));
+
+ if (!put_page_testzero(page))
+ return;
+
+ if (!preemptible()) {
+ /*
+ * Need to defer the free until we are in
+ * a preemptible context.
+ */
+ INIT_WORK(w, vcompound_free_work);
+ schedule_work(w);
+ } else
+ __vcompound_free(addr);
+}
+
+void __free_vcompound(void *addr)
+{
+ struct page *page;
+
+ if (unlikely(is_vmalloc_addr(addr)))
+ vcompound_free(addr, vmalloc_to_page(addr));
+ else {
+ page = virt_to_page(addr);
+ free_pages((unsigned long)addr, compound_order(page));
+ }
+}
+EXPORT_SYMBOL(__free_vcompound);
+
+void free_vcompound(struct page *page)
+{
+ if (unlikely(PageVcompound(page)))
+ vcompound_free(page_address(page), page);
+ else
+ __free_pages(page, compound_order(page));
+}
+EXPORT_SYMBOL(free_vcompound);
+
+static struct vm_struct *____alloc_vcompound(int node, gfp_t gfp_mask,
+ unsigned long order, void *caller)
+{
+ int i;
+ struct vm_struct *vm;
+ int nr_pages = 1 << order;
+ struct page **pages = kmalloc(nr_pages * sizeof(struct page *),
+ gfp_mask & GFP_RECLAIM_MASK);
+ struct page **pages2;
+ struct page *head;
+
+ BUG_ON(!order || order >= MAX_ORDER);
+ if (!pages)
+ return NULL;
+
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page;
+
+ if (node == -1)
+ page = alloc_page(gfp_mask);
+ else
+ page = alloc_pages_node(node, gfp_mask, 0);
+
+ if (!page)
+ goto abort;
+
+ pages[i] = page;
+ }
+
+ vm = __get_vm_area_node(nr_pages << PAGE_SHIFT, VM_VCOMPOUND,
+ VMALLOC_START, VMALLOC_END, node, gfp_mask, caller);
+
+ if (!vm)
+ goto abort;
+
+ vm->caller = caller;
+ vm->pages = pages;
+ vm->nr_pages = nr_pages;
+ pages2 = pages;
+ if (map_vm_area(vm, PAGE_KERNEL, &pages2))
+ goto abort;
+
+ /* Setup head page */
+ head = pages[0];
+ __SetPageHead(head);
+ __SetPageVcompound(head);
+ set_page_address(head, vm->addr);
+ pages[1]->lru.prev = (void *)order;
+
+ /* Setup tail pages */
+ for (i = 1; i < nr_pages; i++) {
+ struct page *page = pages[i];
+
+ __SetPageTail(page);
+ page->first_page = head;
+ set_page_address(page, vm->addr + (i << PAGE_SHIFT));
+ }
+ return vm;
+
+abort:
+ while (i-- > 0) {
+ struct page *page = pages[i];
+
+ if (!page)
+ continue;
+
+ set_page_address(page, NULL);
+ __ClearPageTail(page);
+ __ClearPageHead(page);
+ __ClearPageVcompound(page);
+ __free_page(page);
+ }
+ kfree(pages);
+ return NULL;
+}
+
+struct page *alloc_vcompound_node(int node, gfp_t flags, int order)
+{
+ struct vm_struct *vm;
+ struct page *page;
+ gfp_t alloc_flags = flags | __GFP_NORETRY | __GFP_NOWARN;
+
+ if (order)
+ alloc_flags |= __GFP_COMP;
+
+ if (node == -1) {
+ page = alloc_pages(alloc_flags, order);
+ } else
+ page = alloc_pages_node(node, alloc_flags, order);
+
+ if (page || !order)
+ return page;
+
+ vm = ____alloc_vcompound(node, flags, order, __builtin_return_address(0));
+ if (vm)
+ return vm->pages[0];
+
+ return NULL;
+}
+EXPORT_SYMBOL(alloc_vcompound_node);
+
+void *__alloc_vcompound(gfp_t flags, int order)
+{
+ struct vm_struct *vm;
+ void *addr;
+
+ addr = (void *)__get_free_pages(flags | __GFP_NORETRY | __GFP_NOWARN,
+ order);
+ if (addr || !order)
+ return addr;
+
+ vm = ____alloc_vcompound(-1, flags, order, __builtin_return_address(0));
+ if (vm)
+ return vm->addr;
+
+ return NULL;
+}
+EXPORT_SYMBOL(__alloc_vcompound);
* [05/11] vcompound: Debugging aid
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm
Virtualized compound pages are rare in practice, and thus subtle bugs may
creep in if we do not test the kernel with virtualized compounds.
CONFIG_VIRTUALIZE_ALWAYS makes every virtualizable compound allocation
request produce a virtualized compound.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
lib/Kconfig.debug | 12 ++++++++++++
mm/vmalloc.c | 15 +++++++++++++--
2 files changed, 25 insertions(+), 2 deletions(-)
Index: linux-2.6/lib/Kconfig.debug
===================================================================
--- linux-2.6.orig/lib/Kconfig.debug 2008-04-29 21:27:20.452525154 -0700
+++ linux-2.6/lib/Kconfig.debug 2008-04-29 21:27:36.533025562 -0700
@@ -159,6 +159,18 @@ config DETECT_SOFTLOCKUP
can be detected via the NMI-watchdog, on platforms that
support it.)
+config VIRTUALIZE_ALWAYS
+ bool "Always allocate virtualized compounds pages"
+ default y
+ help
+ Virtualized compound pages are only allocated if there is no linear
+ memory available. They are a fallback and potential issues created by
+ the use of virtual mappings instead of physically linear memory may
+ not surface because of the infrequent need to create them. Enabling
+ this option makes every allocation of a virtualizable compound page
+ generate a virtualized compound page. This may have a significant
+ performance impact. Only for testing.
+
config SCHED_DEBUG
bool "Collect scheduler debugging info"
depends on DEBUG_KERNEL && PROC_FS
Index: linux-2.6/mm/vmalloc.c
===================================================================
--- linux-2.6.orig/mm/vmalloc.c 2008-04-29 21:27:32.237026026 -0700
+++ linux-2.6/mm/vmalloc.c 2008-04-29 21:27:36.537025989 -0700
@@ -1191,6 +1191,11 @@ struct page *alloc_vcompound_node(int no
if (order)
alloc_flags |= __GFP_COMP;
+#ifdef CONFIG_VIRTUALIZE_ALWAYS
+ if (system_state == SYSTEM_RUNNING && order)
+ page = NULL;
+ else
+#endif
if (node == -1) {
page = alloc_pages(alloc_flags, order);
} else
@@ -1212,8 +1217,14 @@ void *__alloc_vcompound(gfp_t flags, int
struct vm_struct *vm;
void *addr;
- addr = (void *)__get_free_pages(flags | __GFP_NORETRY | __GFP_NOWARN,
- order);
+#ifdef CONFIG_VIRTUALIZE_ALWAYS
+ if (system_state == SYSTEM_RUNNING && order)
+ addr = NULL;
+ else
+#endif
+ addr = (void *)__get_free_pages(
+ flags | __GFP_NORETRY | __GFP_NOWARN, order);
+
if (addr || !order)
return addr;
* [06/11] sparsemem: Use virtualizable compound page
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm, apw
Sparsemem currently attempts to allocate a physically contiguous memmap
and then falls back to vmalloc. The same can now be accomplished with a
single request for a virtualizable compound page.
Cc: apw@shadowen.org
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
mm/sparse.c | 25 ++-----------------------
1 file changed, 2 insertions(+), 23 deletions(-)
Index: linux-2.6/mm/sparse.c
===================================================================
--- linux-2.6.orig/mm/sparse.c 2008-04-29 16:50:39.761208362 -0700
+++ linux-2.6/mm/sparse.c 2008-04-29 17:07:42.773707952 -0700
@@ -383,24 +383,7 @@ static void free_map_bootmem(struct page
#else
static struct page *__kmalloc_section_memmap(unsigned long nr_pages)
{
- struct page *page, *ret;
- unsigned long memmap_size = sizeof(struct page) * nr_pages;
-
- page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
- if (page)
- goto got_map_page;
-
- ret = vmalloc(memmap_size);
- if (ret)
- goto got_map_ptr;
-
- return NULL;
-got_map_page:
- ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
-got_map_ptr:
- memset(ret, 0, memmap_size);
-
- return ret;
+ return __alloc_vcompound(GFP_KERNEL | __GFP_ZERO, get_order(sizeof(struct page) * nr_pages));
}
static inline struct page *kmalloc_section_memmap(unsigned long pnum, int nid,
@@ -411,11 +394,7 @@ static inline struct page *kmalloc_secti
static void __kfree_section_memmap(struct page *memmap, unsigned long nr_pages)
{
- if (is_vmalloc_addr(memmap))
- vfree(memmap);
- else
- free_pages((unsigned long)memmap,
- get_order(sizeof(struct page) * nr_pages));
+ __free_vcompound(memmap);
}
static void free_map_bootmem(struct page *page, unsigned long nr_pages)
* [07/11] vcompound: bit waitqueue support
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm
If bit_waitqueue() is passed a vmalloc address then it must use
vcompound_head_page() to determine the address of the page struct.
vcompound_head_page() falls back to virt_to_page() for physical
addresses. For vmalloc addresses it performs a page table lookup
to find the page.
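The distinction matters because virt_to_page() assumes the linear kernel
mapping (illustrative; kbuf/vbuf are hypothetical):

	struct page *p;

	p = virt_to_page(kbuf);		/* ok: kbuf is in the linear mapping */
	p = virt_to_page(vbuf);		/* wrong: vbuf is a vmalloc address */
	p = vcompound_head_page(vbuf);	/* ok: does a page table lookup */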
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
kernel/wait.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
Index: linux-2.6.25-rc8-mm2/kernel/wait.c
===================================================================
--- linux-2.6.25-rc8-mm2.orig/kernel/wait.c 2008-04-01 12:44:26.000000000 -0700
+++ linux-2.6.25-rc8-mm2/kernel/wait.c 2008-04-11 20:23:32.000000000 -0700
@@ -9,6 +9,7 @@
#include <linux/mm.h>
#include <linux/wait.h>
#include <linux/hash.h>
+#include <linux/vmalloc.h>
void init_waitqueue_head(wait_queue_head_t *q)
{
@@ -245,7 +246,7 @@ EXPORT_SYMBOL(wake_up_bit);
wait_queue_head_t *bit_waitqueue(void *word, int bit)
{
const int shift = BITS_PER_LONG == 32 ? 5 : 6;
- const struct zone *zone = page_zone(virt_to_page(word));
+ const struct zone *zone = page_zone(vcompound_head_page(word));
unsigned long val = (unsigned long)word << shift | bit;
return &zone->wait_table[hash_long(val, zone->wait_table_bits)];
* [08/11] crypto: Use virtualizable compounds for temporary order 2 allocation
From: Christoph Lameter @ 2008-04-30 4:42 UTC
To: akpm; +Cc: linux-mm, Dan Williams
The crypto subsystem needs an order-2 allocation for a temporary buffer
used to xor data, so we can safely allow the use of a virtualizable compound.
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
crypto/xor.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Index: linux-2.6.25-rc8-mm1/crypto/xor.c
===================================================================
--- linux-2.6.25-rc8-mm1.orig/crypto/xor.c 2008-04-01 12:44:26.000000000 -0700
+++ linux-2.6.25-rc8-mm1/crypto/xor.c 2008-04-02 20:53:25.634569955 -0700
@@ -101,7 +101,7 @@ calibrate_xor_blocks(void)
void *b1, *b2;
struct xor_block_template *f, *fastest;
- b1 = (void *) __get_free_pages(GFP_KERNEL, 2);
+ b1 = __alloc_vcompound(GFP_KERNEL, 2);
if (!b1) {
printk(KERN_WARNING "xor: Yikes! No memory available.\n");
return -ENOMEM;
@@ -140,7 +140,7 @@ calibrate_xor_blocks(void)
#undef xor_speed
- free_pages((unsigned long)b1, 2);
+ __free_vcompound(b1);
active_template = fastest;
return 0;
* [09/11] slub: Use virtualizable compound for buffer
From: Christoph Lameter @ 2008-04-30 4:43 UTC
To: akpm; +Cc: linux-mm, Pekka Enberg
The caller table can get quite large if there are many call sites for a
particular slab. Using a virtualizable compound page allows falling back to
vmalloc in case the caller table gets too big while memory is fragmented.
Currently the allocation would simply fail in that case.
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
mm/slub.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2008-04-29 16:40:30.851208783 -0700
+++ linux-2.6/mm/slub.c 2008-04-29 16:46:46.471205957 -0700
@@ -21,6 +21,7 @@
#include <linux/ctype.h>
#include <linux/kallsyms.h>
#include <linux/memory.h>
+#include <linux/vmalloc.h>
/*
* Lock order:
@@ -3689,8 +3690,7 @@ struct loc_track {
static void free_loc_track(struct loc_track *t)
{
if (t->max)
- free_pages((unsigned long)t->loc,
- get_order(sizeof(struct location) * t->max));
+ __free_vcompound(t->loc);
}
static int alloc_loc_track(struct loc_track *t, unsigned long max, gfp_t flags)
@@ -3700,7 +3700,7 @@ static int alloc_loc_track(struct loc_tr
order = get_order(sizeof(struct location) * max);
- l = (void *)__get_free_pages(flags, order);
+ l = __alloc_vcompound(flags, order);
if (!l)
return 0;
* [10/11] vcompound: Fallback for zone wait table
From: Christoph Lameter @ 2008-04-30 4:43 UTC
To: akpm; +Cc: linux-mm
Currently vmalloc may be used for allocating the zone wait table.
Use a virtualizable compound page request instead, so that physically
contiguous memory, mapped through the large kernel TLB entries, can be
used when available.

Drawback: The zone wait table allocation is rounded up to the next
power of two, which may cost some memory. For example (illustrative
numbers), a table needing five pages would be taken from an order-3,
i.e. eight page, allocation.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
Index: linux-2.6.25-rc8-mm2/mm/page_alloc.c
===================================================================
--- linux-2.6.25-rc8-mm2.orig/mm/page_alloc.c 2008-04-11 20:20:44.000000000 -0700
+++ linux-2.6.25-rc8-mm2/mm/page_alloc.c 2008-04-11 20:23:36.000000000 -0700
@@ -2884,7 +2884,8 @@ int zone_wait_table_init(struct zone *zo
* To use this new node's memory, further consideration will be
* necessary.
*/
- zone->wait_table = vmalloc(alloc_size);
+ zone->wait_table = __alloc_vcompound(GFP_KERNEL,
+ get_order(alloc_size));
}
if (!zone->wait_table)
return -ENOMEM;
* [11/11] e1000: Avoid vmalloc through virtualizable compound page
From: Christoph Lameter @ 2008-04-30 4:43 UTC
To: akpm; +Cc: linux-mm, netdev
Switch all the uses of vmalloc in the e1000 driver to virtualizable
compounds. This results in the use of regular, physically contiguous memory
for the ring buffers etc. whenever possible, avoiding page table overhead.
Cc: netdev@vger.kernel.org
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
drivers/net/e1000/e1000_main.c | 23 +++++++++++------------
drivers/net/e1000e/netdev.c | 12 ++++++------
2 files changed, 17 insertions(+), 18 deletions(-)
Index: linux-2.6/drivers/net/e1000e/netdev.c
===================================================================
--- linux-2.6.orig/drivers/net/e1000e/netdev.c 2008-04-25 16:02:34.973649656 -0700
+++ linux-2.6/drivers/net/e1000e/netdev.c 2008-04-29 16:47:03.583706293 -0700
@@ -1103,7 +1103,7 @@ int e1000e_setup_tx_resources(struct e10
int err = -ENOMEM, size;
size = sizeof(struct e1000_buffer) * tx_ring->count;
- tx_ring->buffer_info = vmalloc(size);
+ tx_ring->buffer_info = __alloc_vcompound(GFP_KERNEL, get_order(size));
if (!tx_ring->buffer_info)
goto err;
memset(tx_ring->buffer_info, 0, size);
@@ -1122,7 +1122,7 @@ int e1000e_setup_tx_resources(struct e10
return 0;
err:
- vfree(tx_ring->buffer_info);
+ __free_vcompound(tx_ring->buffer_info);
ndev_err(adapter->netdev,
"Unable to allocate memory for the transmit descriptor ring\n");
return err;
@@ -1141,7 +1141,7 @@ int e1000e_setup_rx_resources(struct e10
int i, size, desc_len, err = -ENOMEM;
size = sizeof(struct e1000_buffer) * rx_ring->count;
- rx_ring->buffer_info = vmalloc(size);
+ rx_ring->buffer_info = __alloc_vcompound(GFP_KERNEL, get_order(size));
if (!rx_ring->buffer_info)
goto err;
memset(rx_ring->buffer_info, 0, size);
@@ -1177,7 +1177,7 @@ err_pages:
kfree(buffer_info->ps_pages);
}
err:
- vfree(rx_ring->buffer_info);
+ __free_vcompound(rx_ring->buffer_info);
ndev_err(adapter->netdev,
"Unable to allocate memory for the transmit descriptor ring\n");
return err;
@@ -1224,7 +1224,7 @@ void e1000e_free_tx_resources(struct e10
e1000_clean_tx_ring(adapter);
- vfree(tx_ring->buffer_info);
+ __free_vcompound(tx_ring->buffer_info);
tx_ring->buffer_info = NULL;
dma_free_coherent(&pdev->dev, tx_ring->size, tx_ring->desc,
@@ -1251,7 +1251,7 @@ void e1000e_free_rx_resources(struct e10
kfree(rx_ring->buffer_info[i].ps_pages);
}
- vfree(rx_ring->buffer_info);
+ __free_vcompound(rx_ring->buffer_info);
rx_ring->buffer_info = NULL;
dma_free_coherent(&pdev->dev, rx_ring->size, rx_ring->desc,
Index: linux-2.6/drivers/net/e1000/e1000_main.c
===================================================================
--- linux-2.6.orig/drivers/net/e1000/e1000_main.c 2008-04-24 22:36:01.033639755 -0700
+++ linux-2.6/drivers/net/e1000/e1000_main.c 2008-04-29 16:47:03.583706293 -0700
@@ -1604,14 +1604,13 @@ e1000_setup_tx_resources(struct e1000_ad
int size;
size = sizeof(struct e1000_buffer) * txdr->count;
- txdr->buffer_info = vmalloc(size);
+ txdr->buffer_info = __alloc_vcompound(GFP_KERNEL | __GFP_ZERO,
+ get_order(size));
if (!txdr->buffer_info) {
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the transmit descriptor ring\n");
return -ENOMEM;
}
- memset(txdr->buffer_info, 0, size);
-
/* round up to nearest 4K */
txdr->size = txdr->count * sizeof(struct e1000_tx_desc);
@@ -1620,7 +1619,7 @@ e1000_setup_tx_resources(struct e1000_ad
txdr->desc = pci_alloc_consistent(pdev, txdr->size, &txdr->dma);
if (!txdr->desc) {
setup_tx_desc_die:
- vfree(txdr->buffer_info);
+ __free_vcompound(txdr->buffer_info);
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the transmit descriptor ring\n");
return -ENOMEM;
@@ -1648,7 +1647,7 @@ setup_tx_desc_die:
DPRINTK(PROBE, ERR,
"Unable to allocate aligned memory "
"for the transmit descriptor ring\n");
- vfree(txdr->buffer_info);
+ __free_vcompound(txdr->buffer_info);
return -ENOMEM;
} else {
/* Free old allocation, new allocation was successful */
@@ -1821,7 +1820,7 @@ e1000_setup_rx_resources(struct e1000_ad
int size, desc_len;
size = sizeof(struct e1000_buffer) * rxdr->count;
- rxdr->buffer_info = vmalloc(size);
+ rxdr->buffer_info = __alloc_vcompound(GFP_KERNEL, get_order(size));
if (!rxdr->buffer_info) {
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the receive descriptor ring\n");
@@ -1832,7 +1831,7 @@ e1000_setup_rx_resources(struct e1000_ad
rxdr->ps_page = kcalloc(rxdr->count, sizeof(struct e1000_ps_page),
GFP_KERNEL);
if (!rxdr->ps_page) {
- vfree(rxdr->buffer_info);
+ __free_vcompound(rxdr->buffer_info);
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the receive descriptor ring\n");
return -ENOMEM;
@@ -1842,7 +1841,7 @@ e1000_setup_rx_resources(struct e1000_ad
sizeof(struct e1000_ps_page_dma),
GFP_KERNEL);
if (!rxdr->ps_page_dma) {
- vfree(rxdr->buffer_info);
+ __free_vcompound(rxdr->buffer_info);
kfree(rxdr->ps_page);
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the receive descriptor ring\n");
@@ -1865,7 +1864,7 @@ e1000_setup_rx_resources(struct e1000_ad
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the receive descriptor ring\n");
setup_rx_desc_die:
- vfree(rxdr->buffer_info);
+ __free_vcompound(rxdr->buffer_info);
kfree(rxdr->ps_page);
kfree(rxdr->ps_page_dma);
return -ENOMEM;
@@ -2170,7 +2169,7 @@ e1000_free_tx_resources(struct e1000_ada
e1000_clean_tx_ring(adapter, tx_ring);
- vfree(tx_ring->buffer_info);
+ __free_vcompound(tx_ring->buffer_info);
tx_ring->buffer_info = NULL;
pci_free_consistent(pdev, tx_ring->size, tx_ring->desc, tx_ring->dma);
@@ -2278,9 +2277,9 @@ e1000_free_rx_resources(struct e1000_ada
e1000_clean_rx_ring(adapter, rx_ring);
- vfree(rx_ring->buffer_info);
+ __free_vcompound(rx_ring->buffer_info);
rx_ring->buffer_info = NULL;
kfree(rx_ring->ps_page);
rx_ring->ps_page = NULL;
kfree(rx_ring->ps_page_dma);
rx_ring->ps_page_dma = NULL;