linux-mm.kvack.org archive mirror
* RE: [PATCH] Avoiding mmap fragmentation - clean rev
@ 2005-05-20  2:14 Chen, Kenneth W
  2005-05-20 12:47 ` Wolfgang Wander
  0 siblings, 1 reply; 22+ messages in thread
From: Chen, Kenneth W @ 2005-05-20  2:14 UTC
  To: 'Andrew Morton', 'Wolfgang Wander'
  Cc: herve, mingo, arjanv, linux-mm

Chen, Kenneth W wrote on Thursday, May 19, 2005 7:02 PM
> Oh well, I guess we have to take a performance hit here in favor of
> functionality.  Though this is a problem specific to 32-bit address
> space, please don't unnecessarily penalize 64-bit arch.  If Andrew is
> going to take Wolfgang's patch, then we should minimally take the
> following patch.  This patch reverts the changes made in arch/ia64 and
> makes x86_64 use the alternate cache algorithm for 32-bit apps.

Oh, crap, there is a typo in my patch and it won't compile on x86_64.
Here is an updated version.  Please use this one instead.


Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>

--- linux-2.6.11/arch/ia64/kernel/sys_ia64.c.orig	2005-05-19 18:35:31.468087777 -0700
+++ linux-2.6.11/arch/ia64/kernel/sys_ia64.c	2005-05-19 18:35:46.521798000 -0700
@@ -38,14 +38,8 @@ arch_get_unmapped_area (struct file *fil
 	if (REGION_NUMBER(addr) == REGION_HPAGE)
 		addr = 0;
 #endif
-	if (!addr) {
-	        if (len > mm->cached_hole_size) {
-		        addr = mm->free_area_cache;
-		} else {
-		        addr = TASK_UNMAPPED_BASE;
-			mm->cached_hole_size = 0;
-		}
-	}
+	if (!addr)
+		addr = mm->free_area_cache;
 
 	if (map_shared && (TASK_SIZE > 0xfffffffful))
 		/*
@@ -65,7 +59,6 @@ arch_get_unmapped_area (struct file *fil
 			if (start_addr != TASK_UNMAPPED_BASE) {
 				/* Start a new search --- just in case we missed some holes.  */
 				addr = TASK_UNMAPPED_BASE;
-				mm->cached_hole_size = 0;
 				goto full_search;
 			}
 			return -ENOMEM;
@@ -75,8 +68,6 @@ arch_get_unmapped_area (struct file *fil
 			mm->free_area_cache = addr + len;
 			return addr;
 		}
-		if (addr + mm->cached_hole_size < vma->vm_start)
-		        mm->cached_hole_size = vma->vm_start - addr;
 		addr = (vma->vm_end + align_mask) & ~align_mask;
 	}
 }
--- linux-2.6.11/arch/x86_64/kernel/sys_x86_64.c.orig	2005-05-19 18:37:32.202461298 -0700
+++ linux-2.6.11/arch/x86_64/kernel/sys_x86_64.c	2005-05-19 18:39:03.110663309 -0700
@@ -111,7 +111,7 @@ arch_get_unmapped_area(struct file *filp
 		    (!vma || addr + len <= vma->vm_start))
 			return addr;
 	}
-	if (len <= mm->cached_hole_size) {
+	if (begin != TASK_UNMAPPED_64 && len <= mm->cached_hole_size) {
 	        mm->cached_hole_size = 0;
 		mm->free_area_cache = begin;
 	}
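
For reference, the search loop these hunks tune is the generic
bottom-up first-fit scan.  A simplified sketch (paraphrased from
2.6.11's arch_get_unmapped_area() in mm/mmap.c with the hole-cache
patch applied; not the literal source):

	unsigned long addr, start_addr;
	struct vm_area_struct *vma;

	if (len > mm->cached_hole_size) {
		/* Resume where the last successful search stopped. */
		start_addr = addr = mm->free_area_cache;
	} else {
		/* A cached hole may fit this request: rescan from the bottom. */
		start_addr = addr = TASK_UNMAPPED_BASE;
		mm->cached_hole_size = 0;
	}

full_search:
	for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
		if (TASK_SIZE - len < addr) {
			if (start_addr != TASK_UNMAPPED_BASE) {
				/* Retry once from the bottom before failing. */
				start_addr = addr = TASK_UNMAPPED_BASE;
				mm->cached_hole_size = 0;
				goto full_search;
			}
			return -ENOMEM;
		}
		if (!vma || addr + len <= vma->vm_start) {
			mm->free_area_cache = addr + len;
			return addr;
		}
		/* Remember the largest hole we skipped over. */
		if (addr + mm->cached_hole_size < vma->vm_start)
			mm->cached_hole_size = vma->vm_start - addr;
		addr = vma->vm_end;
	}

The ia64 hunks above simply opt out of the cached_hole_size
bookkeeping and always resume from free_area_cache; the x86_64 hunk
keeps it, but restricts the restart-from-begin to 32-bit mappings
(begin != TASK_UNMAPPED_64).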



* RE: [PATCH] Avoiding mmap fragmentation - clean rev
@ 2005-05-25  7:30 linux
  0 siblings, 0 replies; 22+ messages in thread
From: linux @ 2005-05-25  7:30 UTC
  To: linux-mm

If you want to minimize fragmentation, you could do worse than
study Doug Lea's malloc, commonly known as dlmalloc.

It's basically straight best-fit, but with one important heuristic:
FIFO use of free blocks.

This contributes enormously to minimizing fragmentation.
Any time a free block is created, it goes on the *end* of the
available list for that size.

This means that every chunk of free memory gets an approximately-equal
chance to be merged with more free chunks.  The chunks that make it
to the front of the available list are the ones with stable neighbors,
which are the best ones to use up.
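
A minimal sketch of that FIFO discipline in C (illustrative only:
these names are made up, and real dlmalloc segregates its free lists
into size-class bins and coalesces neighbors via boundary tags):

	#include <stddef.h>

	struct chunk {
		size_t size;
		struct chunk *next;
	};

	struct free_list {
		struct chunk *head;	/* oldest free chunk; allocate from here */
		struct chunk *tail;	/* newest free chunk; free onto here */
	};

	/* A newly freed chunk goes on the *end* of the list... */
	static void fifo_free(struct free_list *l, struct chunk *c)
	{
		c->next = NULL;
		if (l->tail)
			l->tail->next = c;
		else
			l->head = c;
		l->tail = c;
	}

	/*
	 * ...so allocation reuses the oldest chunk first.  A chunk
	 * that survives to the head without having been merged has
	 * stable, in-use neighbors, making it the safest to hand out.
	 */
	static struct chunk *fifo_alloc(struct free_list *l)
	{
		struct chunk *c = l->head;

		if (c) {
			l->head = c->next;
			if (!l->head)
				l->tail = NULL;
		}
		return c;
	}
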
* RE: [PATCH] Avoiding mmap fragmentation - clean rev
@ 2005-05-27 16:37 Chen, Kenneth W
  0 siblings, 0 replies; 22+ messages in thread
From: Chen, Kenneth W @ 2005-05-27 16:37 UTC
  To: 'Wolfgang Wander'
  Cc: 'Andrew Morton', herve, mingo, arjanv, linux-mm, colin.harrison

Wolfgang Wander wrote on Thursday, May 26, 2005 10:32 AM
> 'prev' seems like it can be NULL, and the assignment of
>    tail_vma->vm_private_data = (void*) prev->vm_end;
> which fix-2 adds does not check for that.
> That potential problem does not seem to match the stacktrace
> below however...

Chen, Kenneth W wrote on Thursday, May 26, 2005 10:44 AM
> It sure looks like 'prev' can be NULL.  It needs a check similar to
> the one in the top-down case.  I will double check it.


Sorry, I was sidetracked for a while yesterday.  Oh boy, there is a
major clash here: avoiding-mmap-fragmentation-fix-2.patch corrupts
vm_private_data.

To fix this, I think we can either pass the hint address down the call
stack, or pull the mm->unmap_area() call site in unmap_vma() up two
function levels into detach_vmas_to_be_unmapped().  It also makes
logical sense to me that at the end of unlinking all the vmas, we fix
up free_area_cache in one shot instead of fixing it up on each
iteration of vma removal.
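
To make the move concrete, the call-flow change as I read it
(function names from 2.6.11's mm/mmap.c; a sketch, not exact code):

	Before (fix-2, smuggling the hint through vm_private_data):

	do_munmap()
	  detach_vmas_to_be_unmapped()	<- stores hint in tail_vma->vm_private_data
	  unmap_region()
	  unmap_vma_list()
	    unmap_vma()			<- mm->unmap_area(area), once per vma

	After (this patch, one fixup at the end of unlinking):

	do_munmap()
	  detach_vmas_to_be_unmapped()	<- computes addr, calls mm->unmap_area(mm, addr) once
	  unmap_region()
	  unmap_vma_list()
	    unmap_vma()			<- no free_area_cache fixup here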

Here is a patch.  The majority of it changes function prototypes: to
take an mm_struct pointer instead of a vm_area_struct, and to add the
hint address to the mm->unmap_area() arguments.  The last 3 hunks are
the real ones that fix the conflict with vm_private_data.  This patch
also fixes the bug where 'prev' is dereferenced without proper
checking.


Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>

--- ./include/linux/sched.h.orig	2005-05-27 02:04:17.601950694 -0700
+++ ./include/linux/sched.h	2005-05-27 03:06:31.389014330 -0700
@@ -201,8 +201,8 @@ extern unsigned long
 arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
 			  unsigned long flags);
-extern void arch_unmap_area(struct vm_area_struct *area);
-extern void arch_unmap_area_topdown(struct vm_area_struct *area);
+extern void arch_unmap_area(struct mm_struct *, unsigned long);
+extern void arch_unmap_area_topdown(struct mm_struct *, unsigned long);
 
 #define set_mm_counter(mm, member, value) (mm)->_##member = (value)
 #define get_mm_counter(mm, member) ((mm)->_##member)
@@ -218,7 +218,7 @@ struct mm_struct {
 	unsigned long (*get_unmapped_area) (struct file *filp,
 				unsigned long addr, unsigned long len,
 				unsigned long pgoff, unsigned long flags);
-	void (*unmap_area) (struct vm_area_struct *area);
+	void (*unmap_area) (struct mm_struct *mm, unsigned long addr);
         unsigned long mmap_base;		/* base of mmap area */
         unsigned long cached_hole_size;         /* if non-zero, the largest hole below free_area_cache */
 	unsigned long free_area_cache;		/* first hole of size cached_hole_size or larger */
--- ./mm/nommu.c.orig	2005-05-27 02:05:47.380269907 -0700
+++ ./mm/nommu.c	2005-05-27 03:07:31.773779216 -0700
@@ -1067,7 +1067,7 @@ unsigned long arch_get_unmapped_area(str
 	return -ENOMEM;
 }
 
-void arch_unmap_area(struct vm_area_struct *area)
+void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
 {
 }
 
--- ./mm/mmap.c.orig	2005-05-27 02:03:22.152732623 -0700
+++ ./mm/mmap.c	2005-05-27 03:10:36.865573823 -0700
@@ -1212,16 +1212,14 @@ full_search:
 }
 #endif	
 
-void arch_unmap_area(struct vm_area_struct *area)
+void arch_unmap_area(struct mm_struct *mm, unsigned long addr)
 {
 	/*
 	 * Is this a new hole at the lowest possible address?
 	 */
-	unsigned long addr = (unsigned long) area->vm_private_data;
-
-	if (addr >= TASK_UNMAPPED_BASE && addr < area->vm_mm->free_area_cache) {
-		area->vm_mm->free_area_cache = addr;
-		area->vm_mm->cached_hole_size = ~0UL;
+	if (addr >= TASK_UNMAPPED_BASE && addr < mm->free_area_cache) {
+		mm->free_area_cache = addr;
+		mm->cached_hole_size = ~0UL;
 	}
 }
 
@@ -1309,19 +1307,17 @@ arch_get_unmapped_area_topdown(struct fi
 }
 #endif
 
-void arch_unmap_area_topdown(struct vm_area_struct *area)
+void arch_unmap_area_topdown(struct mm_struct *mm, unsigned long addr)
 {
 	/*
 	 * Is this a new hole at the highest possible address?
 	 */
-	unsigned long addr = (unsigned long) area->vm_private_data;
-
-	if (addr > area->vm_mm->free_area_cache)
-		area->vm_mm->free_area_cache = addr;
+	if (addr > mm->free_area_cache)
+		mm->free_area_cache = addr;
 
 	/* dont allow allocations above current base */
-	if (area->vm_mm->free_area_cache > area->vm_mm->mmap_base)
-		area->vm_mm->free_area_cache = area->vm_mm->mmap_base;
+	if (mm->free_area_cache > mm->mmap_base)
+		mm->free_area_cache = mm->mmap_base;
 }
 
 unsigned long
@@ -1621,7 +1617,6 @@ static void unmap_vma(struct mm_struct *
 	if (area->vm_flags & VM_LOCKED)
 		area->vm_mm->locked_vm -= len >> PAGE_SHIFT;
 	vm_stat_unaccount(area);
-	area->vm_mm->unmap_area(area);
 	remove_vm_struct(area);
 }
 
@@ -1675,6 +1670,7 @@ detach_vmas_to_be_unmapped(struct mm_str
 {
 	struct vm_area_struct **insertion_point;
 	struct vm_area_struct *tail_vma = NULL;
+	unsigned long addr;
 
 	insertion_point = (prev ? &prev->vm_next : &mm->mmap);
 	do {
@@ -1686,10 +1682,10 @@ detach_vmas_to_be_unmapped(struct mm_str
 	*insertion_point = vma;
 	tail_vma->vm_next = NULL;
 	if (mm->unmap_area == arch_unmap_area)
-		tail_vma->vm_private_data = (void*) prev->vm_end;
+		addr = prev ? prev->vm_end : mm->mmap_base;
 	else
-		tail_vma->vm_private_data = vma ?
-			(void*) vma->vm_start : (void*) mm->mmap_base;
+		addr = vma ?  vma->vm_start : mm->mmap_base;
+	mm->unmap_area(mm, addr);
 	mm->mmap_cache = NULL;		/* Kill the cache. */
 }
 



Thread overview: 22+ messages
     [not found] <E4BA51C8E4E9634993418831223F0A49291F06E1@scsmsx401.amr.corp.intel.com>
2005-05-17 22:28 ` [PATCH] Avoiding mmap fragmentation - clean rev Chen, Kenneth W
2005-05-18  7:28   ` Arjan van de Ven
2005-05-18  7:43     ` Ingo Molnar
2005-05-18  7:37   ` Ingo Molnar
2005-05-18 13:05   ` Wolfgang Wander
2005-05-18 15:47   ` Wolfgang Wander
2005-05-18 16:18     ` Chen, Kenneth W
2005-05-18 17:16       ` Wolfgang Wander
2005-05-18 17:57         ` Chen, Kenneth W
2005-05-19 18:38           ` Wolfgang Wander
2005-05-19 22:54             ` Andrew Morton
2005-05-20  2:02               ` Chen, Kenneth W
2005-05-20 23:51               ` Chen, Kenneth W
2005-05-23 18:25                 ` Wolfgang Wander
2005-05-26 17:32                 ` Wolfgang Wander
2005-05-26 17:44                   ` Chen, Kenneth W
2005-05-20  3:10             ` Chen, Kenneth W
2005-05-20 12:39               ` Wolfgang Wander
2005-05-20  2:14 Chen, Kenneth W
2005-05-20 12:47 ` Wolfgang Wander
2005-05-25  7:30 linux
2005-05-27 16:37 Chen, Kenneth W
