linux-mm.kvack.org archive mirror
* [PATCH 00/07][RFC] Remove mapcount from struct page
@ 2005-12-08 11:26 Magnus Damm
  2005-12-08 11:27 ` [PATCH 01/07] Remove page_mapcount Magnus Damm
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:26 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

This patchset tries to remove page->_mapcount. On x86 systems this saves
4 bytes of lowmem per page, which means a 0.1% memory reduction. For small
embedded systems this saves one 4 kB page per 4 MB of memory. For systems
with large amounts of highmem it helps save valuable lowmem.

The patches introduce a new bit in page->flags called PG_mapped. This bit
is used to determine whether the page is mapped. The value zero means that
the page is guaranteed to be unmapped. A one tells us that the page is either
mapped or unmapped, probably the former. So page_mapped() might be lying.

This PG_mapped bit can go from 0 to 1 at any time, see page_add_anon_rmap() and
page_add_file_rmap(). The transition from 1 to 0 for an active page is more 
complicated and is implemented in the new function update_page_mapped(). The 
PG_mapped bit is also checked when pages are freed. PG_locked protects us.
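
As a rough illustration of the intended life cycle (using the names
introduced in patches 02 and 04 below; is_page_really_mapped() is a
hypothetical stand-in for the rmap walk done by update_page_mapped()):

	/* 0 -> 1: cheap; done under PG_locked in page_add_anon_rmap()
	 * and page_add_file_rmap() */
	if (!PageMapped(page)) {
		inc_page_state(nr_mapped);
		if (TestSetPageMapped(page))
			BUG();
	}

	/* 1 -> 0: expensive; update_page_mapped() walks the rmap data
	 * structures under PG_locked and clears the bit only if no pte
	 * mapping the page is found */
	if (PageMapped(page) && !is_page_really_mapped(page)) {
		ClearPageMapped(page);
		dec_page_state(nr_mapped);
	}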

In order to determine whether a page is unmapped, the rmap data structures
must be traversed. For this to work correctly, a usage counter has been
added to struct anon_vma.
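
A minimal sketch of the counting convention (matching patch 03 below):
the counter starts at -1 like the old page->_mapcount, every vma linked
to the anon_vma increments it, and anon_vma_free() frees the structure
on the final decrement:

	static inline void anon_vma_free(struct anon_vma *anon_vma)
	{
		/* drops from 0 to -1 only when the last user is gone */
		if (atomic_add_negative(-1, &anon_vma->use_count))
			kmem_cache_free(anon_vma_cachep, anon_vma);
	}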

Apart from performance, there are some open issues:

- The limit on the number of mappings (INT_MAX/2) is removed.
- can_share_swap_page() always returns 0 for now, i.e. sharing is disabled.
- Nonlinear file backed vmas are not handled yet.
- Is the anon_vma use count really correct?
- Is the PG_locked bit enough protection?
- There might be other places where update_page_mapped() should be used.

Some testing has been done, but no benchmarking. Have fun. Wear a helmet.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>


* [PATCH 01/07] Remove page_mapcount
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
@ 2005-12-08 11:27 ` Magnus Damm
  2005-12-08 11:27 ` [PATCH 02/07] Add PG_mapped Magnus Damm
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

Remove page_mapcount.

This patch removes the page_mapcount() function, replacing callers with
page_mapped() where possible. can_share_swap_page() always returns 0 for now.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
---

 fs/proc/task_mmu.c |    2 +-
 include/linux/mm.h |    5 -----
 mm/fremap.c        |    2 --
 mm/page_alloc.c    |   10 ++++------
 mm/rmap.c          |   40 ++++++++--------------------------------
 mm/swapfile.c      |    8 ++------
 6 files changed, 15 insertions(+), 52 deletions(-)

--- from-0002/fs/proc/task_mmu.c
+++ to-work/fs/proc/task_mmu.c	2005-12-08 10:52:13.000000000 +0900
@@ -421,7 +421,7 @@ static struct numa_maps *get_numa_maps(s
  	for (vaddr = vma->vm_start; vaddr < vma->vm_end; vaddr += PAGE_SIZE) {
 		page = follow_page(vma, vaddr, 0);
 		if (page) {
-			int count = page_mapcount(page);
+			int count = page_mapped(page);
 
 			if (count)
 				md->mapped++;
--- from-0002/include/linux/mm.h
+++ to-work/include/linux/mm.h	2005-12-08 10:52:13.000000000 +0900
@@ -586,11 +586,6 @@ static inline void reset_page_mapcount(s
 	atomic_set(&(page)->_mapcount, -1);
 }
 
-static inline int page_mapcount(struct page *page)
-{
-	return atomic_read(&(page)->_mapcount) + 1;
-}
-
 /*
  * Return true if this page is mapped into pagetables.
  */
--- from-0002/mm/fremap.c
+++ to-work/mm/fremap.c	2005-12-08 10:52:13.000000000 +0900
@@ -72,8 +72,6 @@ int install_page(struct mm_struct *mm, s
 	if (!page->mapping || page->index >= size)
 		goto unlock;
 	err = -ENOMEM;
-	if (page_mapcount(page) > INT_MAX/2)
-		goto unlock;
 
 	if (pte_none(*pte) || !zap_pte(mm, vma, addr, pte))
 		inc_mm_counter(mm, file_rss);
--- from-0002/mm/page_alloc.c
+++ to-work/mm/page_alloc.c	2005-12-08 10:52:13.000000000 +0900
@@ -126,9 +126,9 @@ static void bad_page(const char *functio
 {
 	printk(KERN_EMERG "Bad page state at %s (in process '%s', page %p)\n",
 		function, current->comm, page);
-	printk(KERN_EMERG "flags:0x%0*lx mapping:%p mapcount:%d count:%d\n",
+	printk(KERN_EMERG "flags:0x%0*lx mapping:%p count:%d\n",
 		(int)(2*sizeof(unsigned long)), (unsigned long)page->flags,
-		page->mapping, page_mapcount(page), page_count(page));
+		page->mapping, page_count(page));
 	printk(KERN_EMERG "Backtrace:\n");
 	dump_stack();
 	printk(KERN_EMERG "Trying to fix it up, but a reboot is needed\n");
@@ -336,8 +336,7 @@ static inline void __free_pages_bulk (st
 
 static inline int free_pages_check(const char *function, struct page *page)
 {
-	if (	page_mapcount(page) ||
-		page->mapping != NULL ||
+	if (	page->mapping != NULL ||
 		page_count(page) != 0 ||
 		(page->flags & (
 			1 << PG_lru	|
@@ -473,8 +472,7 @@ void set_page_refs(struct page *page, in
  */
 static int prep_new_page(struct page *page, int order)
 {
-	if (	page_mapcount(page) ||
-		page->mapping != NULL ||
+	if (	page->mapping != NULL ||
 		page_count(page) != 0 ||
 		(page->flags & (
 			1 << PG_lru	|
--- from-0002/mm/rmap.c
+++ to-work/mm/rmap.c	2005-12-08 11:02:06.000000000 +0900
@@ -289,8 +289,7 @@ pte_t *page_check_address(struct page *p
  * Subfunctions of page_referenced: page_referenced_one called
  * repeatedly from either page_referenced_anon or page_referenced_file.
  */
-static int page_referenced_one(struct page *page,
-	struct vm_area_struct *vma, unsigned int *mapcount)
+static int page_referenced_one(struct page *page, struct vm_area_struct *vma)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -315,7 +314,6 @@ static int page_referenced_one(struct pa
 			rwsem_is_locked(&mm->mmap_sem))
 		referenced++;
 
-	(*mapcount)--;
 	pte_unmap_unlock(pte, ptl);
 out:
 	return referenced;
@@ -323,7 +321,6 @@ out:
 
 static int page_referenced_anon(struct page *page)
 {
-	unsigned int mapcount;
 	struct anon_vma *anon_vma;
 	struct vm_area_struct *vma;
 	int referenced = 0;
@@ -332,12 +329,9 @@ static int page_referenced_anon(struct p
 	if (!anon_vma)
 		return referenced;
 
-	mapcount = page_mapcount(page);
-	list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
-		referenced += page_referenced_one(page, vma, &mapcount);
-		if (!mapcount)
-			break;
-	}
+	list_for_each_entry(vma, &anon_vma->head, anon_vma_node)
+		referenced += page_referenced_one(page, vma);
+
 	spin_unlock(&anon_vma->lock);
 	return referenced;
 }
@@ -355,7 +349,6 @@ static int page_referenced_anon(struct p
  */
 static int page_referenced_file(struct page *page)
 {
-	unsigned int mapcount;
 	struct address_space *mapping = page->mapping;
 	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
 	struct vm_area_struct *vma;
@@ -379,21 +372,13 @@ static int page_referenced_file(struct p
 
 	spin_lock(&mapping->i_mmap_lock);
 
-	/*
-	 * i_mmap_lock does not stabilize mapcount at all, but mapcount
-	 * is more likely to be accurate if we note it after spinning.
-	 */
-	mapcount = page_mapcount(page);
-
 	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
 		if ((vma->vm_flags & (VM_LOCKED|VM_MAYSHARE))
 				  == (VM_LOCKED|VM_MAYSHARE)) {
 			referenced++;
 			break;
 		}
-		referenced += page_referenced_one(page, vma, &mapcount);
-		if (!mapcount)
-			break;
+		referenced += page_referenced_one(page, vma);
 	}
 
 	spin_unlock(&mapping->i_mmap_lock);
@@ -483,7 +468,6 @@ void page_add_file_rmap(struct page *pag
 void page_remove_rmap(struct page *page)
 {
 	if (atomic_add_negative(-1, &page->_mapcount)) {
-		BUG_ON(page_mapcount(page) < 0);
 		/*
 		 * It would be tidy to reset the PageAnon mapping here,
 		 * but that might overwrite a racing page_add_anon_rmap
@@ -594,7 +578,7 @@ out:
 #define CLUSTER_MASK	(~(CLUSTER_SIZE - 1))
 
 static void try_to_unmap_cluster(unsigned long cursor,
-	unsigned int *mapcount, struct vm_area_struct *vma)
+	struct vm_area_struct *vma)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
@@ -655,7 +639,6 @@ static void try_to_unmap_cluster(unsigne
 		page_remove_rmap(page);
 		page_cache_release(page);
 		dec_mm_counter(mm, file_rss);
-		(*mapcount)--;
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 }
@@ -698,7 +681,6 @@ static int try_to_unmap_file(struct page
 	unsigned long cursor;
 	unsigned long max_nl_cursor = 0;
 	unsigned long max_nl_size = 0;
-	unsigned int mapcount;
 
 	spin_lock(&mapping->i_mmap_lock);
 	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
@@ -731,12 +713,8 @@ static int try_to_unmap_file(struct page
 	 * We don't try to search for this page in the nonlinear vmas,
 	 * and page_referenced wouldn't have found it anyway.  Instead
 	 * just walk the nonlinear vmas trying to age and unmap some.
-	 * The mapcount of the page we came in with is irrelevant,
-	 * but even so use it as a guide to how hard we should try?
 	 */
-	mapcount = page_mapcount(page);
-	if (!mapcount)
-		goto out;
+
 	cond_resched_lock(&mapping->i_mmap_lock);
 
 	max_nl_size = (max_nl_size + CLUSTER_SIZE - 1) & CLUSTER_MASK;
@@ -751,11 +729,9 @@ static int try_to_unmap_file(struct page
 			cursor = (unsigned long) vma->vm_private_data;
 			while ( cursor < max_nl_cursor &&
 				cursor < vma->vm_end - vma->vm_start) {
-				try_to_unmap_cluster(cursor, &mapcount, vma);
+				try_to_unmap_cluster(cursor, vma);
 				cursor += CLUSTER_SIZE;
 				vma->vm_private_data = (void *) cursor;
-				if ((int)mapcount <= 0)
-					goto out;
 			}
 			vma->vm_private_data = (void *) max_nl_cursor;
 		}
--- from-0002/mm/swapfile.c
+++ to-work/mm/swapfile.c	2005-12-08 10:52:13.000000000 +0900
@@ -308,13 +308,9 @@ static inline int page_swapcount(struct 
  */
 int can_share_swap_page(struct page *page)
 {
-	int count;
-
 	BUG_ON(!PageLocked(page));
-	count = page_mapcount(page);
-	if (count <= 1 && PageSwapCache(page))
-		count += page_swapcount(page);
-	return count == 1;
+
+	return 0;
 }
 
 /*


* [PATCH 02/07] Add PG_mapped
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
  2005-12-08 11:27 ` [PATCH 01/07] Remove page_mapcount Magnus Damm
@ 2005-12-08 11:27 ` Magnus Damm
  2005-12-08 11:27 ` [PATCH 03/07] Add anon_vma use count Magnus Damm
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

Add PG_mapped.

This patch adds a PG_mapped bit to page->flags to track whether a page is
mapped. PG_mapped should be interpreted as follows:

0: Page is guaranteed to be unmapped.
1: Page is either mapped or unmapped.

The bit may be read without locking, but is set under PG_locked.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
---

 page-flags.h |    5 +++++
 1 files changed, 5 insertions(+)

--- from-0002/include/linux/page-flags.h
+++ to-work/include/linux/page-flags.h	2005-12-08 14:58:52.000000000 +0900
@@ -75,6 +75,7 @@
 #define PG_reclaim		17	/* To be reclaimed asap */
 #define PG_nosave_free		18	/* Free, should not be written */
 #define PG_uncached		19	/* Page has been mapped as uncached */
+#define PG_mapped		20	/* Page might be mapped in a vma */
 
 /*
  * Global page accounting.  One instance per CPU.  Only unsigned longs are
@@ -303,6 +304,10 @@ extern void __mod_page_state(unsigned lo
 #define SetPageUncached(page)	set_bit(PG_uncached, &(page)->flags)
 #define ClearPageUncached(page)	clear_bit(PG_uncached, &(page)->flags)
 
+#define PageMapped(page)	test_bit(PG_mapped, &(page)->flags)
+#define TestSetPageMapped(page)	test_and_set_bit(PG_mapped, &(page)->flags)
+#define ClearPageMapped(page)	clear_bit(PG_mapped, &(page)->flags)
+
 struct page;	/* forward declaration */
 
 int test_clear_page_dirty(struct page *page);


* [PATCH 03/07] Add anon_vma use count
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
  2005-12-08 11:27 ` [PATCH 01/07] Remove page_mapcount Magnus Damm
  2005-12-08 11:27 ` [PATCH 02/07] Add PG_mapped Magnus Damm
@ 2005-12-08 11:27 ` Magnus Damm
  2005-12-08 11:27 ` [PATCH 04/07] Replace mapcount with PG_mapped Magnus Damm
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

Add anon_vma use count.

This patch adds an atomic use counter to struct anon_vma. We need it because
we must be able to follow page->mapping to determine whether a page is still
mapped now that page->_mapcount is gone. Without this patch, page->mapping
might point to an already freed struct anon_vma.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
---

 include/linux/rmap.h |    4 +++-
 mm/page_alloc.c      |    6 +++++-
 mm/rmap.c            |   18 ++++++++++--------
 3 files changed, 18 insertions(+), 10 deletions(-)

--- from-0002/include/linux/rmap.h
+++ to-work/include/linux/rmap.h	2005-12-08 12:17:33.000000000 +0900
@@ -27,6 +27,7 @@
 struct anon_vma {
 	spinlock_t lock;	/* Serialize access to vma list */
 	struct list_head head;	/* List of private "related" vmas */
+	atomic_t use_count;
 };
 
 #ifdef CONFIG_MMU
@@ -40,7 +41,8 @@ static inline struct anon_vma *anon_vma_
 
 static inline void anon_vma_free(struct anon_vma *anon_vma)
 {
-	kmem_cache_free(anon_vma_cachep, anon_vma);
+	if (atomic_add_negative(-1, &anon_vma->use_count))
+		kmem_cache_free(anon_vma_cachep, anon_vma);
 }
 
 static inline void anon_vma_lock(struct vm_area_struct *vma)
--- from-0003/mm/page_alloc.c
+++ to-work/mm/page_alloc.c	2005-12-08 12:17:33.000000000 +0900
@@ -36,6 +36,7 @@
 #include <linux/memory_hotplug.h>
 #include <linux/nodemask.h>
 #include <linux/vmalloc.h>
+#include <linux/rmap.h>
 
 #include <asm/tlbflush.h>
 #include "internal.h"
@@ -683,8 +684,11 @@ static void fastcall free_hot_cold_page(
 
 	arch_free_page(page, 0);
 
-	if (PageAnon(page))
+	if (PageAnon(page)) {
+		anon_vma_free((void *)((unsigned long)
+				       (page->mapping) - PAGE_MAPPING_ANON));
 		page->mapping = NULL;
+	}
 	if (free_pages_check(__FUNCTION__, page))
 		return;
 
--- from-0003/mm/rmap.c
+++ to-work/mm/rmap.c	2005-12-08 12:17:33.000000000 +0900
@@ -100,6 +100,8 @@ int anon_vma_prepare(struct vm_area_stru
 			locked = NULL;
 		}
 
+		atomic_inc(&anon_vma->use_count);
+
 		/* page_table_lock to protect against threads */
 		spin_lock(&mm->page_table_lock);
 		if (likely(!vma->anon_vma)) {
@@ -121,6 +123,7 @@ void __anon_vma_merge(struct vm_area_str
 {
 	BUG_ON(vma->anon_vma != next->anon_vma);
 	list_del(&next->anon_vma_node);
+	anon_vma_free(vma->anon_vma);
 }
 
 void __anon_vma_link(struct vm_area_struct *vma)
@@ -128,6 +131,7 @@ void __anon_vma_link(struct vm_area_stru
 	struct anon_vma *anon_vma = vma->anon_vma;
 
 	if (anon_vma) {
+		atomic_inc(&anon_vma->use_count);
 		list_add(&vma->anon_vma_node, &anon_vma->head);
 		validate_anon_vma(vma);
 	}
@@ -138,6 +142,7 @@ void anon_vma_link(struct vm_area_struct
 	struct anon_vma *anon_vma = vma->anon_vma;
 
 	if (anon_vma) {
+		atomic_inc(&anon_vma->use_count);
 		spin_lock(&anon_vma->lock);
 		list_add(&vma->anon_vma_node, &anon_vma->head);
 		validate_anon_vma(vma);
@@ -148,7 +153,6 @@ void anon_vma_link(struct vm_area_struct
 void anon_vma_unlink(struct vm_area_struct *vma)
 {
 	struct anon_vma *anon_vma = vma->anon_vma;
-	int empty;
 
 	if (!anon_vma)
 		return;
@@ -156,13 +160,8 @@ void anon_vma_unlink(struct vm_area_stru
 	spin_lock(&anon_vma->lock);
 	validate_anon_vma(vma);
 	list_del(&vma->anon_vma_node);
-
-	/* We must garbage collect the anon_vma if it's empty */
-	empty = list_empty(&anon_vma->head);
 	spin_unlock(&anon_vma->lock);
-
-	if (empty)
-		anon_vma_free(anon_vma);
+	anon_vma_free(anon_vma);
 }
 
 static void anon_vma_ctor(void *data, kmem_cache_t *cachep, unsigned long flags)
@@ -173,6 +172,7 @@ static void anon_vma_ctor(void *data, km
 
 		spin_lock_init(&anon_vma->lock);
 		INIT_LIST_HEAD(&anon_vma->head);
+		atomic_set(&anon_vma->use_count, -1);
 	}
 }
 
@@ -434,7 +434,9 @@ void page_add_anon_rmap(struct page *pag
 		struct anon_vma *anon_vma = vma->anon_vma;
 
 		BUG_ON(!anon_vma);
-		anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
+		atomic_inc(&anon_vma->use_count);
+		anon_vma = (void *) ((unsigned long)anon_vma 
+				     + PAGE_MAPPING_ANON);
 		page->mapping = (struct address_space *) anon_vma;
 
 		page->index = linear_page_index(vma, address);


* [PATCH 04/07] Replace mapcount with PG_mapped
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
                   ` (2 preceding siblings ...)
  2005-12-08 11:27 ` [PATCH 03/07] Add anon_vma use count Magnus Damm
@ 2005-12-08 11:27 ` Magnus Damm
  2005-12-08 11:27 ` [PATCH 05/07] Remove reset_page_mapcount Magnus Damm
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

Replace mapcount with PG_mapped.

This patch contains the core of the page->_mapcount removal code. PG_mapped
replaces page->_mapcount and update_page_mapped() is introduced.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
---

 include/linux/mm.h   |    8 --
 include/linux/rmap.h |   16 +----
 mm/fremap.c          |    3 +
 mm/rmap.c            |  143 ++++++++++++++++++++++++++++++++++++++++++---------
 mm/swap.c            |   10 +++
 mm/truncate.c        |    3 -
 6 files changed, 140 insertions(+), 43 deletions(-)

--- from-0003/include/linux/mm.h
+++ to-work/include/linux/mm.h	2005-12-08 15:00:40.000000000 +0900
@@ -218,10 +218,6 @@ struct page {
 	unsigned long flags;		/* Atomic flags, some possibly
 					 * updated asynchronously */
 	atomic_t _count;		/* Usage count, see below. */
-	atomic_t _mapcount;		/* Count of ptes mapped in mms,
-					 * to show when page is mapped
-					 * & limit reverse map searches.
-					 */
 	union {
 		unsigned long private;	/* Mapping-private opaque data:
 					 * usually used for buffer_heads
@@ -583,7 +579,7 @@ static inline pgoff_t page_index(struct 
  */
 static inline void reset_page_mapcount(struct page *page)
 {
-	atomic_set(&(page)->_mapcount, -1);
+	ClearPageMapped(page);
 }
 
 /*
@@ -591,7 +587,7 @@ static inline void reset_page_mapcount(s
  */
 static inline int page_mapped(struct page *page)
 {
-	return atomic_read(&(page)->_mapcount) >= 0;
+	return PageMapped(page);
 }
 
 /*
--- from-0005/include/linux/rmap.h
+++ to-work/include/linux/rmap.h	2005-12-08 15:00:40.000000000 +0900
@@ -74,19 +74,11 @@ void __anon_vma_link(struct vm_area_stru
  */
 void page_add_anon_rmap(struct page *, struct vm_area_struct *, unsigned long);
 void page_add_file_rmap(struct page *);
-void page_remove_rmap(struct page *);
 
-/**
- * page_dup_rmap - duplicate pte mapping to a page
- * @page:	the page to add the mapping to
- *
- * For copy_page_range only: minimal extract from page_add_rmap,
- * avoiding unnecessary tests (already checked) so it's quicker.
- */
-static inline void page_dup_rmap(struct page *page)
-{
-	atomic_inc(&page->_mapcount);
-}
+static inline void page_remove_rmap(struct page *page) {}
+static inline void page_dup_rmap(struct page *page) {}
+
+int update_page_mapped(struct page *);
 
 /*
  * Called from mm/vmscan.c to handle paging out
--- from-0003/mm/fremap.c
+++ to-work/mm/fremap.c	2005-12-08 15:00:40.000000000 +0900
@@ -62,6 +62,8 @@ int install_page(struct mm_struct *mm, s
 	if (!pte)
 		goto out;
 
+	lock_page(page);
+
 	/*
 	 * This page may have been truncated. Tell the
 	 * caller about it.
@@ -85,6 +87,7 @@ int install_page(struct mm_struct *mm, s
 unlock:
 	pte_unmap_unlock(pte, ptl);
 out:
+	unlock_page(page);
 	return err;
 }
 EXPORT_SYMBOL(install_page);
--- from-0005/mm/rmap.c
+++ to-work/mm/rmap.c	2005-12-08 17:52:34.000000000 +0900
@@ -430,7 +430,7 @@ int page_referenced(struct page *page, i
 void page_add_anon_rmap(struct page *page,
 	struct vm_area_struct *vma, unsigned long address)
 {
-	if (atomic_inc_and_test(&page->_mapcount)) {
+	if (!PageMapped(page)) {
 		struct anon_vma *anon_vma = vma->anon_vma;
 
 		BUG_ON(!anon_vma);
@@ -442,6 +442,8 @@ void page_add_anon_rmap(struct page *pag
 		page->index = linear_page_index(vma, address);
 
 		inc_page_state(nr_mapped);
+		if (TestSetPageMapped(page))
+			BUG();
 	}
 	/* else checking page index and mapping is racy */
 }
@@ -457,34 +459,124 @@ void page_add_file_rmap(struct page *pag
 	BUG_ON(PageAnon(page));
 	BUG_ON(!pfn_valid(page_to_pfn(page)));
 
-	if (atomic_inc_and_test(&page->_mapcount))
+	if (!PageMapped(page)) {
 		inc_page_state(nr_mapped);
+		if (TestSetPageMapped(page))
+			BUG();
+	}
 }
 
-/**
- * page_remove_rmap - take down pte mapping from a page
- * @page: page to remove mapping from
- *
- * The caller needs to hold the pte lock.
+
+/*
+ * Subfunctions of update_page_mapped: page_mapped_one called
+ * repeatedly from either page_mapped_anon or page_mapped_file.
  */
-void page_remove_rmap(struct page *page)
+static int page_mapped_one(struct page *page, struct vm_area_struct *vma)
 {
-	if (atomic_add_negative(-1, &page->_mapcount)) {
-		/*
-		 * It would be tidy to reset the PageAnon mapping here,
-		 * but that might overwrite a racing page_add_anon_rmap
-		 * which increments mapcount after us but sets mapping
-		 * before us: so leave the reset to free_hot_cold_page,
-		 * and remember that it's only reliable while mapped.
-		 * Leaving it set also helps swapoff to reinstate ptes
-		 * faster for those pages still in swapcache.
-		 */
-		if (page_test_and_clear_dirty(page))
-			set_page_dirty(page);
-		dec_page_state(nr_mapped);
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long address;
+	pte_t *pte;
+	spinlock_t *ptl;
+	int mapped = 0;
+
+	address = vma_address(page, vma);
+	if (address == -EFAULT)
+		goto out;
+
+	pte = page_check_address(page, mm, address, &ptl);
+	if (!pte)
+		goto out;
+
+	mapped++;
+
+	pte_unmap_unlock(pte, ptl);
+out:
+	return mapped;
+}
+
+static int page_mapped_anon(struct page *page)
+{
+	struct anon_vma *anon_vma;
+	struct vm_area_struct *vma;
+	int mapped = 0;
+
+	anon_vma = page_lock_anon_vma(page);
+	if (!anon_vma)
+		return mapped;
+
+	list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
+		mapped += page_mapped_one(page, vma);
+		if (mapped)
+			break;
 	}
+
+	spin_unlock(&anon_vma->lock);
+	return mapped;
 }
 
+static int page_mapped_file(struct page *page)
+{
+	struct address_space *mapping = page->mapping;
+	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	struct vm_area_struct *vma;
+	struct prio_tree_iter iter;
+	int mapped = 0;
+
+	/*
+	 * The caller's checks on page->mapping and !PageAnon have made
+	 * sure that this is a file page: the check for page->mapping
+	 * excludes the case just before it gets set on an anon page.
+	 */
+	BUG_ON(PageAnon(page));
+
+	/*
+	 * The page lock not only makes sure that page->mapping cannot
+	 * suddenly be NULLified by truncation, it makes sure that the
+	 * structure at mapping cannot be freed and reused yet,
+	 * so we can safely take mapping->i_mmap_lock.
+	 */
+	BUG_ON(!PageLocked(page));
+
+	spin_lock(&mapping->i_mmap_lock);
+
+	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
+		mapped += page_mapped_one(page, vma);
+		if (mapped)
+			break;
+	}
+
+	spin_unlock(&mapping->i_mmap_lock);
+	return mapped;
+}
+
+/*
+ * update_page_mapped - update the mapped bit in page->flags
+ * @page: the page to test
+ */
+int update_page_mapped(struct page *page)
+{
+	int mappings = 0;
+
+	BUG_ON(!PageLocked(page));
+
+	if (PageMapped(page)) {
+		if (page->mapping) {
+			if (PageAnon(page))
+				mappings = page_mapped_anon(page);
+			else
+				mappings = page_mapped_file(page);
+		}
+
+		if (mappings == 0) {
+			ClearPageMapped(page);
+			dec_page_state(nr_mapped);
+		}
+	}
+
+	return PageMapped(page);
+}
+
+
 /*
  * Subfunctions of try_to_unmap: try_to_unmap_one called
  * repeatedly from either try_to_unmap_anon or try_to_unmap_file.
@@ -657,7 +749,7 @@ static int try_to_unmap_anon(struct page
 
 	list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
 		ret = try_to_unmap_one(page, vma);
-		if (ret == SWAP_FAIL || !page_mapped(page))
+		if (ret == SWAP_FAIL)
 			break;
 	}
 	spin_unlock(&anon_vma->lock);
@@ -684,10 +776,13 @@ static int try_to_unmap_file(struct page
 	unsigned long max_nl_cursor = 0;
 	unsigned long max_nl_size = 0;
 
+	if (!mapping)
+		return ret;
+
 	spin_lock(&mapping->i_mmap_lock);
 	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
 		ret = try_to_unmap_one(page, vma);
-		if (ret == SWAP_FAIL || !page_mapped(page))
+		if (ret == SWAP_FAIL)
 			goto out;
 	}
 
@@ -776,7 +871,7 @@ int try_to_unmap(struct page *page)
 	else
 		ret = try_to_unmap_file(page);
 
-	if (!page_mapped(page))
+	if (!update_page_mapped(page))
 		ret = SWAP_SUCCESS;
 	return ret;
 }
--- from-0002/mm/swap.c
+++ to-work/mm/swap.c	2005-12-08 15:00:40.000000000 +0900
@@ -177,6 +177,11 @@ void fastcall __page_cache_release(struc
 	unsigned long flags;
 	struct zone *zone = page_zone(page);
 
+	if (PageMapped(page)) {
+		dec_page_state(nr_mapped);
+		ClearPageMapped(page);
+	}
+
 	spin_lock_irqsave(&zone->lru_lock, flags);
 	if (TestClearPageLRU(page))
 		del_page_from_lru(zone, page);
@@ -215,6 +220,11 @@ void release_pages(struct page **pages, 
 		if (!put_page_testzero(page))
 			continue;
 
+		if (PageMapped(page)) {
+			dec_page_state(nr_mapped);
+			ClearPageMapped(page);
+		}
+
 		pagezone = page_zone(page);
 		if (pagezone != zone) {
 			if (zone)
--- from-0002/mm/truncate.c
+++ to-work/mm/truncate.c	2005-12-08 15:00:40.000000000 +0900
@@ -12,6 +12,7 @@
 #include <linux/module.h>
 #include <linux/pagemap.h>
 #include <linux/pagevec.h>
+#include <linux/rmap.h>
 #include <linux/buffer_head.h>	/* grr. try_to_release_page,
 				   do_invalidatepage */
 
@@ -276,7 +277,7 @@ int invalidate_inode_pages2_range(struct
 				break;
 			}
 			wait_on_page_writeback(page);
-			while (page_mapped(page)) {
+			while (update_page_mapped(page)) {
 				if (!did_range_unmap) {
 					/*
 					 * Zap the rest of the file in one hit.


* [PATCH 05/07] Remove reset_page_mapcount
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
                   ` (3 preceding siblings ...)
  2005-12-08 11:27 ` [PATCH 04/07] Replace mapcount with PG_mapped Magnus Damm
@ 2005-12-08 11:27 ` Magnus Damm
  2005-12-08 11:27 ` [PATCH 06/07] Remove page_remove_rmap Magnus Damm
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

Remove reset_page_mapcount.

This patch simply removes reset_page_mapcount(). It is not needed anymore.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
---

 include/linux/mm.h |   10 ----------
 mm/page_alloc.c    |    5 ++---
 2 files changed, 2 insertions(+), 13 deletions(-)

--- from-0006/include/linux/mm.h
+++ to-work/include/linux/mm.h	2005-12-08 18:03:42.000000000 +0900
@@ -573,16 +573,6 @@ static inline pgoff_t page_index(struct 
 }
 
 /*
- * The atomic page->_mapcount, like _count, starts from -1:
- * so that transitions both from it and to it can be tracked,
- * using atomic_inc_and_test and atomic_add_negative(-1).
- */
-static inline void reset_page_mapcount(struct page *page)
-{
-	ClearPageMapped(page);
-}
-
-/*
  * Return true if this page is mapped into pagetables.
  */
 static inline int page_mapped(struct page *page)
--- from-0005/mm/page_alloc.c
+++ to-work/mm/page_alloc.c	2005-12-08 18:06:43.000000000 +0900
@@ -141,9 +141,9 @@ static void bad_page(const char *functio
 			1 << PG_reclaim |
 			1 << PG_slab    |
 			1 << PG_swapcache |
-			1 << PG_writeback );
+			1 << PG_writeback |
+			1 << PG_mapped );
 	set_page_count(page, 0);
-	reset_page_mapcount(page);
 	page->mapping = NULL;
 	add_taint(TAINT_BAD_PAGE);
 }
@@ -1716,7 +1716,6 @@ void __devinit memmap_init_zone(unsigned
 		page = pfn_to_page(pfn);
 		set_page_links(page, zone, nid, pfn);
 		set_page_count(page, 1);
-		reset_page_mapcount(page);
 		SetPageReserved(page);
 		INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL


* [PATCH 06/07] Remove page_remove_rmap
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
                   ` (4 preceding siblings ...)
  2005-12-08 11:27 ` [PATCH 05/07] Remove reset_page_mapcount Magnus Damm
@ 2005-12-08 11:27 ` Magnus Damm
  2005-12-08 11:27 ` [PATCH 07/07] Remove page_dup_rmap Magnus Damm
  2005-12-08 14:16 ` [PATCH 00/07][RFC] Remove mapcount from struct page Hugh Dickins
  7 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

Remove page_remove_rmap.

This patch simply removes page_remove_rmap(). It is not needed anymore.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
---

 include/linux/rmap.h |    1 -
 mm/fremap.c          |    1 -
 mm/memory.c          |    2 --
 mm/rmap.c            |    2 --
 4 files changed, 6 deletions(-)

--- from-0006/include/linux/rmap.h
+++ to-work/include/linux/rmap.h	2005-12-08 18:09:00.000000000 +0900
@@ -75,7 +75,6 @@ void __anon_vma_link(struct vm_area_stru
 void page_add_anon_rmap(struct page *, struct vm_area_struct *, unsigned long);
 void page_add_file_rmap(struct page *);
 
-static inline void page_remove_rmap(struct page *page) {}
 static inline void page_dup_rmap(struct page *page) {}
 
 int update_page_mapped(struct page *);
--- from-0006/mm/fremap.c
+++ to-work/mm/fremap.c	2005-12-08 18:10:07.000000000 +0900
@@ -33,7 +33,6 @@ static int zap_pte(struct mm_struct *mm,
 		if (page) {
 			if (pte_dirty(pte))
 				set_page_dirty(page);
-			page_remove_rmap(page);
 			page_cache_release(page);
 		}
 	} else {
--- from-0002/mm/memory.c
+++ to-work/mm/memory.c	2005-12-08 18:10:17.000000000 +0900
@@ -649,7 +649,6 @@ static unsigned long zap_pte_range(struc
 					mark_page_accessed(page);
 				file_rss--;
 			}
-			page_remove_rmap(page);
 			tlb_remove_page(tlb, page);
 			continue;
 		}
@@ -1514,7 +1513,6 @@ gotten:
 	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (likely(pte_same(*page_table, orig_pte))) {
 		if (old_page) {
-			page_remove_rmap(old_page);
 			if (!PageAnon(old_page)) {
 				dec_mm_counter(mm, file_rss);
 				inc_mm_counter(mm, anon_rss);
--- from-0006/mm/rmap.c
+++ to-work/mm/rmap.c	2005-12-08 18:10:28.000000000 +0900
@@ -640,7 +640,6 @@ static int try_to_unmap_one(struct page 
 	} else
 		dec_mm_counter(mm, file_rss);
 
-	page_remove_rmap(page);
 	page_cache_release(page);
 
 out_unmap:
@@ -730,7 +729,6 @@ static void try_to_unmap_cluster(unsigne
 		if (pte_dirty(pteval))
 			set_page_dirty(page);
 
-		page_remove_rmap(page);
 		page_cache_release(page);
 		dec_mm_counter(mm, file_rss);
 	}


* [PATCH 07/07] Remove page_dup_rmap
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
                   ` (5 preceding siblings ...)
  2005-12-08 11:27 ` [PATCH 06/07] Remove page_remove_rmap Magnus Damm
@ 2005-12-08 11:27 ` Magnus Damm
  2005-12-08 14:16 ` [PATCH 00/07][RFC] Remove mapcount from struct page Hugh Dickins
  7 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-08 11:27 UTC (permalink / raw)
  To: linux-mm, linux-kernel; +Cc: Magnus Damm, andrea

Remove page_dup_rmap.

This patch simply removes page_dup_rmap(). It is not needed anymore.

Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
---

 include/linux/rmap.h |    2 --
 mm/memory.c          |    1 -
 2 files changed, 3 deletions(-)

--- from-0008/include/linux/rmap.h
+++ to-work/include/linux/rmap.h	2005-12-08 18:14:37.000000000 +0900
@@ -75,8 +75,6 @@ void __anon_vma_link(struct vm_area_stru
 void page_add_anon_rmap(struct page *, struct vm_area_struct *, unsigned long);
 void page_add_file_rmap(struct page *);
 
-static inline void page_dup_rmap(struct page *page) {}
-
 int update_page_mapped(struct page *);
 
 /*
--- from-0008/mm/memory.c
+++ to-work/mm/memory.c	2005-12-08 18:17:52.000000000 +0900
@@ -453,7 +453,6 @@ copy_one_pte(struct mm_struct *dst_mm, s
 	page = vm_normal_page(vma, addr, pte);
 	if (page) {
 		get_page(page);
-		page_dup_rmap(page);
 		rss[!!PageAnon(page)]++;
 	}
 


* Re: [PATCH 00/07][RFC] Remove mapcount from struct page
  2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
                   ` (6 preceding siblings ...)
  2005-12-08 11:27 ` [PATCH 07/07] Remove page_dup_rmap Magnus Damm
@ 2005-12-08 14:16 ` Hugh Dickins
  2005-12-09  2:48   ` Magnus Damm
  7 siblings, 1 reply; 10+ messages in thread
From: Hugh Dickins @ 2005-12-08 14:16 UTC (permalink / raw)
  To: Magnus Damm; +Cc: linux-mm, linux-kernel, andrea

On Thu, 8 Dec 2005, Magnus Damm wrote:
> This patchset tries to remove page->_mapcount.

Interesting.  I share your feeling that it ought to be possible to
get along without page->_mapcount, but I've not succeeded yet.  And
perhaps the system without page->_mapcount would perform worse.

Unfortunately, I don't have time to study your patches at the moment,
nor get into a discussion on them.  Sorry if that sounds dismissive:
not my intention, I hope others will take up the discussion instead.

But it looked to me as if you've done the easy part without doing the
hard part yet: vmscanning can get along very well with an approximate
idea of page_mapped, but can_share_swap_page really needs to know.

At present you're just saying "no" there, which appears safe but
slow; but there's a get_user_pages fork case where it's very bad
for it to say "no" when it should say "yes".  See try_to_unmap_one
comment on get_user_pages in 2.6.12 mm/rmap.c.

It looked as if you were doing a separate scan to update PG_mapped,
which would better be incorporated in the page_referenced scan.
I found locking to be a problem.  lock_page is held at many of
the right points, but not all, and may be bad to extend its use.

Your patches looked over-split to me (a rare criticism!): you don't
need a separate patch to delete each little thing that's no longer
used, nor a separate patch to introduce each new definition before
it's used.

Hugh


* Re: [PATCH 00/07][RFC] Remove mapcount from struct page
  2005-12-08 14:16 ` [PATCH 00/07][RFC] Remove mapcount from struct page Hugh Dickins
@ 2005-12-09  2:48   ` Magnus Damm
  0 siblings, 0 replies; 10+ messages in thread
From: Magnus Damm @ 2005-12-09  2:48 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: linux-mm, linux-kernel, andrea

On Thu, 2005-12-08 at 14:16 +0000, Hugh Dickins wrote:
> On Thu, 8 Dec 2005, Magnus Damm wrote:
> > This patchset tries to remove page->_mapcount.
> 
> Interesting.  I share your feeling that it ought to be possible to
> get along without page->_mapcount, but I've not succeeded yet.  And
> perhaps the system without page->_mapcount would perform worse.
> 
> Unfortunately, I don't have time to study your patches at the moment,
> nor get into a discussion on them.  Sorry if that sounds dismissive:
> not my intention, I hope others will take up the discussion instead.

Your comments so far are very valuable to me. Thank you.

> But it looked to me as if you've done the easy part without doing the
> hard part yet: vmscanning can get along very well with an approximate
> idea of page_mapped, but can_share_swap_page really needs to know.
> 
> At present you're just saying "no" there, which appears safe but
> slow; but there's a get_user_pages fork case where it's very bad
> for it to say "no" when it should say "yes".  See try_to_unmap_one
> comment on get_user_pages in 2.6.12 mm/rmap.c.

Ah, I thought it was safe to always say no. At the moment I have no good
idea of how to solve the get_user_pages() fork case, so any suggestions
are very welcome. =)

> It looked as if you were doing a separate scan to update PG_mapped,
> which would better be incorporated in the page_referenced scan.

My first non-public version did just that, but then I decided to implement
the scan separately. Mainly because the anonymous page_referenced() scan can
be done without PG_locked held, whereas in my case the page lock is always
needed to protect against a racing page_add_*_rmap(). And I could not find
any reason to actually count the number of mappings, except for
can_share_swap_page(), which I thought was safe to change into the constant
0, so the idea was to improve performance by returning as soon as the first
mapping is found.

> I found locking to be a problem.  lock_page is held at many of
> the right points, but not all, and may be bad to extend its use.

Yes. Locking is tricky. I studied where page_add_*_rmap() is called and
figured out that only one extra lock_page() was needed. In all other cases
the page is either newly allocated, the zero page, or already locked.

This extra lock probably results in worse scalability on large machines.
But OTOH the patch saves memory, and for smaller systems such as laptops
scalability might not be such a big issue.

> Your patches looked over-split to me (a rare criticism!): you don't
> need a separate patch to delete each little thing that's no longer
> used, nor a separate patch to introduce each new definition before
> it's used.

Indeed. Looking at them today, I totally agree with you. My plan was to be
able to test each broken-out patch separately, to locate bugs and
performance bottlenecks. But I could still do that and reduce the number of
patches to 4 or so.

Many thanks,

/ magnus


end of thread

Thread overview: 10 messages
2005-12-08 11:26 [PATCH 00/07][RFC] Remove mapcount from struct page Magnus Damm
2005-12-08 11:27 ` [PATCH 01/07] Remove page_mapcount Magnus Damm
2005-12-08 11:27 ` [PATCH 02/07] Add PG_mapped Magnus Damm
2005-12-08 11:27 ` [PATCH 03/07] Add anon_vma use count Magnus Damm
2005-12-08 11:27 ` [PATCH 04/07] Replace mapcount with PG_mapped Magnus Damm
2005-12-08 11:27 ` [PATCH 05/07] Remove reset_page_mapcount Magnus Damm
2005-12-08 11:27 ` [PATCH 06/07] Remove page_remove_rmap Magnus Damm
2005-12-08 11:27 ` [PATCH 07/07] Remove page_dup_rmap Magnus Damm
2005-12-08 14:16 ` [PATCH 00/07][RFC] Remove mapcount from struct page Hugh Dickins
2005-12-09  2:48   ` Magnus Damm
