* [PATCH -mm 12/24] Unevictable LRU Infrastructure
From: Rik van Riel @ 2008-06-11 18:42 UTC
To: linux-kernel
Cc: Andrew Morton, Lee Schermerhorn, Kosaki Motohiro, linux-mm, Eric Whitney
[-- Attachment #1: vmscan-noreclaim-lru-infrastructure.patch --]
[-- Type: text/plain, Size: 33938 bytes --]
From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
When the system contains lots of mlocked or otherwise unevictable
pages, the pageout code (kswapd) can spend lots of time scanning
over these pages. Worse still, the presence of lots of unevictable
pages can confuse kswapd into thinking that more aggressive pageout
modes are required, resulting in all kinds of bad behaviour.
This patch provides the infrastructure to manage pages excluded from
reclaim--i.e., hidden from vmscan. It is based on a patch by Larry
Woodman of Red Hat, reworked to maintain "unevictable" pages on a
separate per-zone LRU list, to "hide" them from vmscan.
Kosaki Motohiro added support for the memory controller's unevictable
LRU list.
Pages on the unevictable list have both PG_unevictable and PG_lru set.
Thus, PG_unevictable is analogous to and mutually exclusive with
PG_active--it specifies which LRU list the page is on.
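For reference, the flag-to-list mapping this implies looks like the
following (this mirrors the page_lru() hunk in mm_inline.h further
down in this patch):

static inline enum lru_list page_lru(struct page *page)
{
	enum lru_list lru = LRU_BASE;

	if (PageUnevictable(page))
		lru = LRU_UNEVICTABLE;	/* PG_unevictable selects the unevictable list */
	else {
		if (PageActive(page))	/* PG_active selects an active list */
			lru += LRU_ACTIVE;
		lru += page_is_file_cache(page);	/* anon vs. file */
	}
	return lru;
}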
The unevictable infrastructure is enabled by a new mm Kconfig option
[CONFIG_]UNEVICTABLE_LRU.
A new function 'page_evictable(page, vma)' in vmscan.c tests whether
or not a page may be evictable. Subsequent patches will add the various
!evictable tests. We'll want to keep these tests light-weight for
use in shrink_active_list() and, possibly, the fault path.
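For example, the lightweight check in shrink_active_list() ends up
looking like this (see the mm/vmscan.c hunks below); a NULL vma means
only page state is consulted:

	if (unlikely(!page_evictable(page, NULL))) {
		/* not evictable: move it straight to the unevictable list */
		cull_unevictable_page(page);
		continue;
	}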
To avoid races between tasks putting pages [back] onto an LRU list and
tasks that might be moving the page from non-evictable to evictable
state, one should test evictability under page lock and place
non-evictable pages directly on the unevictable list before dropping the
lock. Otherwise, we risk "stranding" evictable pages on the unevictable
list. It's OK to use the pagevec caches for evictable pages. The new
function 'putback_lru_page()'--inverse to 'isolate_lru_page()'--handles
this transition, including potential page truncation while the page is
unlocked.
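A sketch of the resulting caller protocol--this is exactly what the new
cull_unevictable_page() helper in mm/vmscan.c below does:

	lock_page(page);
	/*
	 * putback_lru_page() returns 1 if the page is still locked,
	 * 0 if it found the page truncated and already unlocked it.
	 */
	if (putback_lru_page(page))
		unlock_page(page);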
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
include/linux/memcontrol.h | 1
include/linux/mm_inline.h | 23 ++++--
include/linux/mmzone.h | 24 ++++++-
include/linux/page-flags.h | 13 +++
include/linux/pagevec.h | 1
include/linux/swap.h | 12 +++
mm/Kconfig | 10 +++
mm/internal.h | 26 +++++++
mm/memcontrol.c | 73 +++++++++++++---------
mm/mempolicy.c | 2
mm/migrate.c | 68 +++++++++++++-------
mm/page_alloc.c | 9 ++
mm/swap.c | 40 ++++++++++--
mm/vmscan.c | 148 ++++++++++++++++++++++++++++++++++++++++-----
14 files changed, 370 insertions(+), 80 deletions(-)
Index: linux-2.6.26-rc5-mm2/mm/Kconfig
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/Kconfig 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/Kconfig 2008-06-10 16:25:49.000000000 -0400
@@ -205,3 +205,13 @@ config NR_QUICK
config VIRT_TO_BUS
def_bool y
depends on !ARCH_NO_VIRT_TO_BUS
+
+config UNEVICTABLE_LRU
+ bool "Add LRU list to track non-evictable pages"
+ default y
+ help
+ Keeps unevictable pages off of the active and inactive pageout
+ lists, so kswapd will not waste CPU time or have its balancing
+ algorithms thrown off by scanning these pages. Selecting this
+ will use one page flag and increase the code size a little,
+ say Y unless you know what you are doing.
Index: linux-2.6.26-rc5-mm2/include/linux/page-flags.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/page-flags.h 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/page-flags.h 2008-06-10 22:11:01.000000000 -0400
@@ -94,6 +94,9 @@ enum pageflags {
PG_reclaim, /* To be reclaimed asap */
PG_buddy, /* Page is free, on buddy lists */
PG_swapbacked, /* Page is backed by RAM/swap */
+#ifdef CONFIG_UNEVICTABLE_LRU
+ PG_unevictable, /* Page is "unevictable" */
+#endif
#ifdef CONFIG_IA64_UNCACHED_ALLOCATOR
PG_uncached, /* Page has been mapped as uncached */
#endif
@@ -182,6 +185,7 @@ PAGEFLAG(Referenced, referenced) TESTCLE
PAGEFLAG(Dirty, dirty) TESTSCFLAG(Dirty, dirty) __CLEARPAGEFLAG(Dirty, dirty)
PAGEFLAG(LRU, lru) __CLEARPAGEFLAG(LRU, lru)
PAGEFLAG(Active, active) __CLEARPAGEFLAG(Active, active)
+ TESTCLEARFLAG(Active, active)
__PAGEFLAG(Slab, slab)
PAGEFLAG(Checked, checked) /* Used by some filesystems */
PAGEFLAG(Pinned, pinned) TESTSCFLAG(Pinned, pinned) /* Xen */
@@ -225,6 +229,15 @@ PAGEFLAG(SwapCache, swapcache)
PAGEFLAG_FALSE(SwapCache)
#endif
+#ifdef CONFIG_UNEVICTABLE_LRU
+PAGEFLAG(Unevictable, unevictable) __CLEARPAGEFLAG(Unevictable, unevictable)
+ TESTCLEARFLAG(Unevictable, unevictable)
+#else
+PAGEFLAG_FALSE(Unevictable) TESTCLEARFLAG_FALSE(Unevictable)
+ SETPAGEFLAG_NOOP(Unevictable) CLEARPAGEFLAG_NOOP(Unevictable)
+ __CLEARPAGEFLAG_NOOP(Unevictable)
+#endif
+
#ifdef CONFIG_IA64_UNCACHED_ALLOCATOR
PAGEFLAG(Uncached, uncached)
#else
Index: linux-2.6.26-rc5-mm2/include/linux/mmzone.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/mmzone.h 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/mmzone.h 2008-06-10 22:11:01.000000000 -0400
@@ -86,6 +86,11 @@ enum zone_stat_item {
NR_ACTIVE_ANON, /* " " " " " */
NR_INACTIVE_FILE, /* " " " " " */
NR_ACTIVE_FILE, /* " " " " " */
+#ifdef CONFIG_UNEVICTABLE_LRU
+ NR_UNEVICTABLE, /* " " " " " */
+#else
+ NR_UNEVICTABLE = NR_ACTIVE_FILE, /* avoid compiler errors in dead code */
+#endif
NR_ANON_PAGES, /* Mapped anonymous pages */
NR_FILE_MAPPED, /* pagecache pages mapped into pagetables.
only modified from process context */
@@ -128,10 +133,18 @@ enum lru_list {
LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
- NR_LRU_LISTS };
+#ifdef CONFIG_UNEVICTABLE_LRU
+ LRU_UNEVICTABLE,
+#else
+ LRU_UNEVICTABLE = LRU_ACTIVE_FILE, /* avoid compiler errors in dead code */
+#endif
+ NR_LRU_LISTS
+};
#define for_each_lru(l) for (l = 0; l < NR_LRU_LISTS; l++)
+#define for_each_evictable_lru(l) for (l = 0; l <= LRU_ACTIVE_FILE; l++)
+
static inline int is_file_lru(enum lru_list l)
{
return (l == LRU_INACTIVE_FILE || l == LRU_ACTIVE_FILE);
@@ -142,6 +155,15 @@ static inline int is_active_lru(enum lru
return (l == LRU_ACTIVE_ANON || l == LRU_ACTIVE_FILE);
}
+static inline int is_unevictable_lru(enum lru_list l)
+{
+#ifdef CONFIG_UNEVICTABLE_LRU
+ return (l == LRU_UNEVICTABLE);
+#else
+ return 0;
+#endif
+}
+
struct per_cpu_pages {
int count; /* number of pages in the list */
int high; /* high watermark, emptying needed */
Index: linux-2.6.26-rc5-mm2/mm/page_alloc.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/page_alloc.c 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/page_alloc.c 2008-06-10 22:11:01.000000000 -0400
@@ -241,6 +241,9 @@ static void bad_page(struct page *page)
1 << PG_private |
1 << PG_locked |
1 << PG_active |
+#ifdef CONFIG_UNEVICTABLE_LRU
+ 1 << PG_unevictable |
+#endif
1 << PG_dirty |
1 << PG_reclaim |
1 << PG_slab |
@@ -474,6 +477,9 @@ static inline int free_pages_check(struc
1 << PG_swapcache |
1 << PG_writeback |
1 << PG_reserved |
+#ifdef CONFIG_UNEVICTABLE_LRU
+ 1 << PG_unevictable |
+#endif
1 << PG_buddy ))))
bad_page(page);
if (PageDirty(page))
@@ -625,6 +631,9 @@ static int prep_new_page(struct page *pa
1 << PG_private |
1 << PG_locked |
1 << PG_active |
+#ifdef CONFIG_UNEVICTABLE_LRU
+ 1 << PG_unevictable |
+#endif
1 << PG_dirty |
1 << PG_slab |
1 << PG_swapcache |
Index: linux-2.6.26-rc5-mm2/include/linux/mm_inline.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/mm_inline.h 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/mm_inline.h 2008-06-10 16:25:49.000000000 -0400
@@ -91,11 +91,16 @@ del_page_from_lru(struct zone *zone, str
enum lru_list l = LRU_BASE;
list_del(&page->lru);
- if (PageActive(page)) {
- __ClearPageActive(page);
- l += LRU_ACTIVE;
+ if (PageUnevictable(page)) {
+ __ClearPageUnevictable(page);
+ l = LRU_UNEVICTABLE;
+ } else {
+ if (PageActive(page)) {
+ __ClearPageActive(page);
+ l += LRU_ACTIVE;
+ }
+ l += page_is_file_cache(page);
}
- l += page_is_file_cache(page);
__dec_zone_state(zone, NR_LRU_BASE + l);
}
@@ -110,9 +115,13 @@ static inline enum lru_list page_lru(str
{
enum lru_list lru = LRU_BASE;
- if (PageActive(page))
- lru += LRU_ACTIVE;
- lru += page_is_file_cache(page);
+ if (PageUnevictable(page))
+ lru = LRU_UNEVICTABLE;
+ else {
+ if (PageActive(page))
+ lru += LRU_ACTIVE;
+ lru += page_is_file_cache(page);
+ }
return lru;
}
Index: linux-2.6.26-rc5-mm2/include/linux/swap.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/swap.h 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/swap.h 2008-06-10 22:11:01.000000000 -0400
@@ -180,6 +180,8 @@ extern int lru_add_drain_all(void);
extern void rotate_reclaimable_page(struct page *page);
extern void swap_setup(void);
+extern void add_page_to_unevictable_list(struct page *page);
+
/**
* lru_cache_add: add a page to the page lists
* @page: the page to add
@@ -228,6 +230,16 @@ static inline int zone_reclaim(struct zo
}
#endif
+#ifdef CONFIG_UNEVICTABLE_LRU
+extern int page_evictable(struct page *page, struct vm_area_struct *vma);
+#else
+static inline int page_evictable(struct page *page,
+ struct vm_area_struct *vma)
+{
+ return 1;
+}
+#endif
+
extern int kswapd_run(int nid);
#ifdef CONFIG_MMU
Index: linux-2.6.26-rc5-mm2/include/linux/pagevec.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/pagevec.h 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/pagevec.h 2008-06-10 16:25:49.000000000 -0400
@@ -101,7 +101,6 @@ static inline void __pagevec_lru_add_act
____pagevec_lru_add(pvec, LRU_ACTIVE_FILE);
}
-
static inline void pagevec_lru_add_file(struct pagevec *pvec)
{
if (pagevec_count(pvec))
Index: linux-2.6.26-rc5-mm2/mm/swap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/swap.c 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/swap.c 2008-06-10 22:11:01.000000000 -0400
@@ -136,7 +136,7 @@ static void pagevec_move_tail(struct pag
void rotate_reclaimable_page(struct page *page)
{
if (!PageLocked(page) && !PageDirty(page) && !PageActive(page) &&
- PageLRU(page)) {
+ !PageUnevictable(page) && PageLRU(page)) {
struct pagevec *pvec;
unsigned long flags;
@@ -157,7 +157,7 @@ void activate_page(struct page *page)
struct zone *zone = page_zone(page);
spin_lock_irq(&zone->lru_lock);
- if (PageLRU(page) && !PageActive(page)) {
+ if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
int file = page_is_file_cache(page);
int lru = LRU_BASE + file;
del_page_from_lru_list(zone, page, lru);
@@ -166,7 +166,7 @@ void activate_page(struct page *page)
lru += LRU_ACTIVE;
add_page_to_lru_list(zone, page, lru);
__count_vm_event(PGACTIVATE);
- mem_cgroup_move_lists(page, true);
+ mem_cgroup_move_lists(page, lru);
zone->recent_rotated[!!file]++;
zone->recent_scanned[!!file]++;
@@ -183,7 +183,8 @@ void activate_page(struct page *page)
*/
void mark_page_accessed(struct page *page)
{
- if (!PageActive(page) && PageReferenced(page) && PageLRU(page)) {
+ if (!PageActive(page) && !PageUnevictable(page) &&
+ PageReferenced(page) && PageLRU(page)) {
activate_page(page);
ClearPageReferenced(page);
} else if (!PageReferenced(page)) {
@@ -211,13 +212,38 @@ void __lru_cache_add(struct page *page,
void lru_cache_add_lru(struct page *page, enum lru_list lru)
{
if (PageActive(page)) {
+ VM_BUG_ON(PageUnevictable(page));
ClearPageActive(page);
+ } else if (PageUnevictable(page)) {
+ VM_BUG_ON(PageActive(page));
+ ClearPageUnevictable(page);
}
- VM_BUG_ON(PageLRU(page) || PageActive(page));
+ VM_BUG_ON(PageLRU(page) || PageActive(page) || PageUnevictable(page));
__lru_cache_add(page, lru);
}
+/**
+ * add_page_to_unevictable_list - add a page to the unevictable list
+ * @page: the page to be added to the unevictable list
+ *
+ * Add page directly to its zone's unevictable list. To avoid races with
+ * tasks that might be making the page evictable, through eg. munlock,
+ * munmap or exit, while it's not on the lru, we want to add the page
+ * while it's locked or otherwise "invisible" to other tasks. This is
+ * difficult to do when using the pagevec cache, so bypass that.
+ */
+void add_page_to_unevictable_list(struct page *page)
+{
+ struct zone *zone = page_zone(page);
+
+ spin_lock_irq(&zone->lru_lock);
+ SetPageUnevictable(page);
+ SetPageLRU(page);
+ add_page_to_lru_list(zone, page, LRU_UNEVICTABLE);
+ spin_unlock_irq(&zone->lru_lock);
+}
+
/*
* Drain pages out of the cpu's pagevecs.
* Either "cpu" is the current CPU, and preemption has already been
@@ -315,6 +341,7 @@ void release_pages(struct page **pages,
if (PageLRU(page)) {
struct zone *pagezone = page_zone(page);
+
if (pagezone != zone) {
if (zone)
spin_unlock_irqrestore(&zone->lru_lock,
@@ -391,6 +418,7 @@ void ____pagevec_lru_add(struct pagevec
{
int i;
struct zone *zone = NULL;
+ VM_BUG_ON(is_unevictable_lru(lru));
for (i = 0; i < pagevec_count(pvec); i++) {
struct page *page = pvec->pages[i];
@@ -402,6 +430,8 @@ void ____pagevec_lru_add(struct pagevec
zone = pagezone;
spin_lock_irq(&zone->lru_lock);
}
+ VM_BUG_ON(PageActive(page));
+ VM_BUG_ON(PageUnevictable(page));
VM_BUG_ON(PageLRU(page));
SetPageLRU(page);
if (is_active_lru(lru))
Index: linux-2.6.26-rc5-mm2/mm/migrate.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/migrate.c 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/migrate.c 2008-06-10 22:11:01.000000000 -0400
@@ -53,14 +53,9 @@ int migrate_prep(void)
return 0;
}
-static inline void move_to_lru(struct page *page)
-{
- lru_cache_add_lru(page, page_lru(page));
- put_page(page);
-}
-
/*
- * Add isolated pages on the list back to the LRU.
+ * Add isolated pages on the list back to the LRU under page lock
+ * to avoid leaking evictable pages back onto unevictable list.
*
* returns the number of pages put back.
*/
@@ -72,7 +67,9 @@ int putback_lru_pages(struct list_head *
list_for_each_entry_safe(page, page2, l, lru) {
list_del(&page->lru);
- move_to_lru(page);
+ lock_page(page);
+ if (putback_lru_page(page))
+ unlock_page(page);
count++;
}
return count;
@@ -338,8 +335,11 @@ static void migrate_page_copy(struct pag
SetPageReferenced(newpage);
if (PageUptodate(page))
SetPageUptodate(newpage);
- if (PageActive(page))
+ if (TestClearPageActive(page)) {
+ VM_BUG_ON(PageUnevictable(page));
SetPageActive(newpage);
+ } else
+ unevictable_migrate_page(newpage, page);
if (PageChecked(page))
SetPageChecked(newpage);
if (PageMappedToDisk(page))
@@ -368,7 +368,6 @@ static void migrate_page_copy(struct pag
mem_cgroup_uncharge_page(page);
}
#endif
- ClearPageActive(page);
ClearPagePrivate(page);
set_page_private(page, 0);
page->mapping = NULL;
@@ -547,10 +546,15 @@ static int fallback_migrate_page(struct
*
* The new page will have replaced the old page if this function
* is successful.
+ *
+ * Return value:
+ * < 0 - error code
+ * == 0 - success
*/
static int move_to_new_page(struct page *newpage, struct page *page)
{
struct address_space *mapping;
+ int unlock = 1;
int rc;
/*
@@ -585,10 +589,16 @@ static int move_to_new_page(struct page
if (!rc) {
remove_migration_ptes(page, newpage);
+ /*
+ * Put back on LRU while holding page locked to
+ * handle potential race with, e.g., munlock()
+ */
+ unlock = putback_lru_page(newpage);
} else
newpage->mapping = NULL;
- unlock_page(newpage);
+ if (unlock)
+ unlock_page(newpage);
return rc;
}
@@ -605,18 +615,19 @@ static int unmap_and_move(new_page_t get
struct page *newpage = get_new_page(page, private, &result);
int rcu_locked = 0;
int charge = 0;
+ int unlock = 1;
if (!newpage)
return -ENOMEM;
if (page_count(page) == 1)
/* page was freed from under us. So we are done. */
- goto move_newpage;
+ goto end_migration;
charge = mem_cgroup_prepare_migration(page, newpage);
if (charge == -ENOMEM) {
rc = -ENOMEM;
- goto move_newpage;
+ goto end_migration;
}
/* prepare cgroup just returns 0 or -ENOMEM */
BUG_ON(charge);
@@ -624,7 +635,7 @@ static int unmap_and_move(new_page_t get
rc = -EAGAIN;
if (TestSetPageLocked(page)) {
if (!force)
- goto move_newpage;
+ goto end_migration;
lock_page(page);
}
@@ -686,8 +697,6 @@ rcu_unlock:
unlock:
- unlock_page(page);
-
if (rc != -EAGAIN) {
/*
* A page that has been migrated has all references
@@ -696,17 +705,30 @@ unlock:
* restored.
*/
list_del(&page->lru);
- move_to_lru(page);
+ if (!page->mapping) {
+ VM_BUG_ON(page_count(page) != 1);
+ unlock_page(page);
+ put_page(page); /* just free the old page */
+ goto end_migration;
+ } else
+ unlock = putback_lru_page(page);
}
-move_newpage:
+ if (unlock)
+ unlock_page(page);
+
+end_migration:
if (!charge)
mem_cgroup_end_migration(newpage);
- /*
- * Move the new page to the LRU. If migration was not successful
- * then this will free the page.
- */
- move_to_lru(newpage);
+
+ if (!newpage->mapping) {
+ /*
+ * Migration failed or was never attempted.
+ * Free the newpage.
+ */
+ VM_BUG_ON(page_count(newpage) != 1);
+ put_page(newpage);
+ }
if (result) {
if (rc)
*result = rc;
Index: linux-2.6.26-rc5-mm2/mm/vmscan.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/vmscan.c 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/vmscan.c 2008-06-10 22:11:01.000000000 -0400
@@ -452,6 +452,73 @@ cannot_free:
return 0;
}
+/**
+ * putback_lru_page - put previously isolated page onto appropriate LRU list
+ * @page: page to be put back to appropriate lru list
+ *
+ * Add previously isolated @page to appropriate LRU list.
+ * Page may still be unevictable for other reasons.
+ *
+ * lru_lock must not be held, interrupts must be enabled.
+ * Must be called with page locked.
+ *
+ * return 1 if page still locked [not truncated], else 0
+ */
+int putback_lru_page(struct page *page)
+{
+ int lru;
+ int ret = 1;
+
+ VM_BUG_ON(!PageLocked(page));
+ VM_BUG_ON(PageLRU(page));
+
+ lru = !!TestClearPageActive(page);
+ ClearPageUnevictable(page); /* for page_evictable() */
+
+ if (unlikely(!page->mapping)) {
+ /*
+ * page truncated. drop lock as put_page() will
+ * free the page.
+ */
+ VM_BUG_ON(page_count(page) != 1);
+ unlock_page(page);
+ ret = 0;
+ } else if (page_evictable(page, NULL)) {
+ /*
+ * For evictable pages, we can use the cache.
+ * In event of a race, worst case is we end up with an
+ * unevictable page on [in]active list.
+ * We know how to handle that.
+ */
+ lru += page_is_file_cache(page);
+ lru_cache_add_lru(page, lru);
+ mem_cgroup_move_lists(page, lru);
+ } else {
+ /*
+ * Put unevictable pages directly on zone's unevictable
+ * list.
+ */
+ add_page_to_unevictable_list(page);
+ mem_cgroup_move_lists(page, LRU_UNEVICTABLE);
+ }
+
+ put_page(page); /* drop ref from isolate */
+ return ret; /* ret => "page still locked" */
+}
+
+/*
+ * Cull page that shrink_*_list() has detected to be unevictable
+ * under page lock to close races with other tasks that might be making
+ * the page evictable. Avoid stranding an evictable page on the
+ * unevictable list.
+ */
+static void cull_unevictable_page(struct page *page)
+{
+ lock_page(page);
+ if (putback_lru_page(page))
+ unlock_page(page);
+}
+
/*
* shrink_page_list() returns the number of reclaimed pages
*/
@@ -485,6 +552,12 @@ static unsigned long shrink_page_list(st
sc->nr_scanned++;
+ if (unlikely(!page_evictable(page, NULL))) {
+ if (putback_lru_page(page))
+ unlock_page(page);
+ continue;
+ }
+
if (!sc->may_swap && page_mapped(page))
goto keep_locked;
@@ -584,7 +657,7 @@ static unsigned long shrink_page_list(st
* possible for a page to have PageDirty set, but it is actually
* clean (all its buffers are clean). This happens if the
* buffers were written out directly, with submit_bh(). ext3
- * will do this, as well as the blockdev mapping.
+ * will do this, as well as the blockdev mapping.
* try_to_release_page() will discover that cleanness and will
* drop the buffers and mark the page clean - it can be freed.
*
@@ -616,6 +689,7 @@ activate_locked:
/* Not a candidate for swapping, so reclaim swap space. */
if (PageSwapCache(page) && vm_swap_full())
remove_exclusive_swap_page_ref(page);
+ VM_BUG_ON(PageActive(page));
SetPageActive(page);
pgactivate++;
keep_locked:
@@ -665,6 +739,14 @@ int __isolate_lru_page(struct page *page
if (mode != ISOLATE_BOTH && (!page_is_file_cache(page) != !file))
return ret;
+ /*
+ * When this function is being called for lumpy reclaim, we
+ * initially look into all LRU pages, active, inactive and
+ * unevictable; only give shrink_page_list evictable pages.
+ */
+ if (PageUnevictable(page))
+ return ret;
+
ret = -EBUSY;
if (likely(get_page_unless_zero(page))) {
/*
@@ -776,7 +858,7 @@ static unsigned long isolate_lru_pages(u
/* else it is being freed elsewhere */
list_move(&cursor_page->lru, src);
default:
- break;
+ break; /* ! on LRU or wrong list */
}
}
}
@@ -836,8 +918,9 @@ static unsigned long clear_active_flags(
* Returns -EBUSY if the page was not on an LRU list.
*
* The returned page will have PageLRU() cleared. If it was found on
- * the active list, it will have PageActive set. That flag may need
- * to be cleared by the caller before letting the page go.
+ * the active list, it will have PageActive set. If it was found on
+ * the unevictable list, it will have the PageUnevictable bit set. That flag
+ * may need to be cleared by the caller before letting the page go.
*
* The vmstat statistic corresponding to the list on which the page was
* found will be decremented.
@@ -858,11 +941,10 @@ int isolate_lru_page(struct page *page)
spin_lock_irq(&zone->lru_lock);
if (PageLRU(page) && get_page_unless_zero(page)) {
- int lru = LRU_BASE;
+ int lru = page_lru(page);
ret = 0;
ClearPageLRU(page);
- lru += page_is_file_cache(page) + !!PageActive(page);
del_page_from_lru_list(zone, page, lru);
}
spin_unlock_irq(&zone->lru_lock);
@@ -974,11 +1056,20 @@ static unsigned long shrink_inactive_lis
* Put back any unfreeable pages.
*/
while (!list_empty(&page_list)) {
+ int lru;
page = lru_to_page(&page_list);
VM_BUG_ON(PageLRU(page));
- SetPageLRU(page);
list_del(&page->lru);
- add_page_to_lru_list(zone, page, page_lru(page));
+ if (unlikely(!page_evictable(page, NULL))) {
+ spin_unlock_irq(&zone->lru_lock);
+ cull_unevictable_page(page);
+ spin_lock_irq(&zone->lru_lock);
+ continue;
+ }
+ SetPageLRU(page);
+ lru = page_lru(page);
+ add_page_to_lru_list(zone, page, lru);
+ mem_cgroup_move_lists(page, lru);
if (PageActive(page) && scan_global_lru(sc)) {
int file = !!page_is_file_cache(page);
zone->recent_rotated[file]++;
@@ -1073,6 +1164,12 @@ static void shrink_active_list(unsigned
cond_resched();
page = lru_to_page(&l_hold);
list_del(&page->lru);
+
+ if (unlikely(!page_evictable(page, NULL))) {
+ cull_unevictable_page(page);
+ continue;
+ }
+
if (page_referenced(page, 0, sc->mem_cgroup)) {
pgmoved++;
if (file) {
@@ -1118,7 +1215,7 @@ static void shrink_active_list(unsigned
ClearPageActive(page);
list_move(&page->lru, &zone->lru[lru].list);
- mem_cgroup_move_lists(page, false);
+ mem_cgroup_move_lists(page, lru);
pgmoved++;
if (!pagevec_add(&pvec, page)) {
__mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
@@ -1149,7 +1246,7 @@ static void shrink_active_list(unsigned
VM_BUG_ON(!PageActive(page));
list_move(&page->lru, &zone->lru[lru].list);
- mem_cgroup_move_lists(page, true);
+ mem_cgroup_move_lists(page, lru);
pgmoved++;
if (!pagevec_add(&pvec, page)) {
__mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
@@ -1289,7 +1386,7 @@ static unsigned long shrink_zone(int pri
get_scan_ratio(zone, sc, percent);
- for_each_lru(l) {
+ for_each_evictable_lru(l) {
if (scan_global_lru(sc)) {
int file = is_file_lru(l);
int scan;
@@ -1319,7 +1416,7 @@ static unsigned long shrink_zone(int pri
while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
nr[LRU_INACTIVE_FILE]) {
- for_each_lru(l) {
+ for_each_evictable_lru(l) {
if (nr[l]) {
nr_to_scan = min(nr[l],
(unsigned long)sc->swap_cluster_max);
@@ -1874,8 +1971,8 @@ static unsigned long shrink_all_zones(un
if (zone_is_all_unreclaimable(zone) && prio != DEF_PRIORITY)
continue;
- for_each_lru(l) {
- /* For pass = 0 we don't shrink the active list */
+ for_each_evictable_lru(l) {
+ /* For pass = 0, we don't shrink the active list */
if (pass == 0 &&
(l == LRU_ACTIVE || l == LRU_ACTIVE_FILE))
continue;
@@ -2212,3 +2309,26 @@ int zone_reclaim(struct zone *zone, gfp_
return ret;
}
#endif
+
+#ifdef CONFIG_UNEVICTABLE_LRU
+/*
+ * page_evictable - test whether a page is evictable
+ * @page: the page to test
+ * @vma: the VMA in which the page is or will be mapped, may be NULL
+ *
+ * Test whether page is evictable--i.e., should be placed on active/inactive
+ * lists vs unevictable list.
+ *
+ * Reasons page might not be evictable:
+ * TODO - later patches
+ */
+int page_evictable(struct page *page, struct vm_area_struct *vma)
+{
+
+ VM_BUG_ON(PageUnevictable(page));
+
+ /* TODO: test page [!]evictable conditions */
+
+ return 1;
+}
+#endif
Index: linux-2.6.26-rc5-mm2/mm/mempolicy.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/mempolicy.c 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/mempolicy.c 2008-06-10 16:25:49.000000000 -0400
@@ -2199,7 +2199,7 @@ static void gather_stats(struct page *pa
if (PageSwapCache(page))
md->swapcache++;
- if (PageActive(page))
+ if (PageActive(page) || PageUnevictable(page))
md->active++;
if (PageWriteback(page))
Index: linux-2.6.26-rc5-mm2/mm/internal.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/internal.h 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/internal.h 2008-06-10 22:11:01.000000000 -0400
@@ -39,8 +39,15 @@ static inline void __put_page(struct pag
atomic_dec(&page->_count);
}
+/*
+ * in mm/vmscan.c:
+ */
extern int isolate_lru_page(struct page *page);
+extern int putback_lru_page(struct page *page);
+/*
+ * in mm/page_alloc.c
+ */
extern void __free_pages_bootmem(struct page *page, unsigned int order);
/*
@@ -54,6 +61,25 @@ static inline unsigned long page_order(s
return page_private(page);
}
+#ifdef CONFIG_UNEVICTABLE_LRU
+/*
+ * unevictable_migrate_page() called only from migrate_page_copy() to
+ * migrate unevictable flag to new page.
+ * Note that the old page has been isolated from the LRU lists at this
+ * point so we don't need to worry about LRU statistics.
+ */
+static inline void unevictable_migrate_page(struct page *new, struct page *old)
+{
+ if (TestClearPageUnevictable(old))
+ SetPageUnevictable(new);
+}
+#else
+static inline void unevictable_migrate_page(struct page *new, struct page *old)
+{
+}
+#endif
+
+
/*
* FLATMEM and DISCONTIGMEM configurations use alloc_bootmem_node,
* so all functions starting at paging_init should be marked __init
Index: linux-2.6.26-rc5-mm2/mm/memcontrol.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/memcontrol.c 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/memcontrol.c 2008-06-10 22:11:01.000000000 -0400
@@ -160,9 +160,10 @@ struct page_cgroup {
struct mem_cgroup *mem_cgroup;
int flags;
};
-#define PAGE_CGROUP_FLAG_CACHE (0x1) /* charged as cache */
-#define PAGE_CGROUP_FLAG_ACTIVE (0x2) /* page is active in this cgroup */
-#define PAGE_CGROUP_FLAG_FILE (0x4) /* page is file system backed */
+#define PAGE_CGROUP_FLAG_CACHE (0x1) /* charged as cache */
+#define PAGE_CGROUP_FLAG_ACTIVE (0x2) /* page is active in this cgroup */
+#define PAGE_CGROUP_FLAG_FILE (0x4) /* page is file system backed */
+#define PAGE_CGROUP_FLAG_UNEVICTABLE (0x8) /* page is unevictableable */
static int page_cgroup_nid(struct page_cgroup *pc)
{
@@ -283,10 +284,14 @@ static void __mem_cgroup_remove_list(str
{
int lru = LRU_BASE;
- if (pc->flags & PAGE_CGROUP_FLAG_ACTIVE)
- lru += LRU_ACTIVE;
- if (pc->flags & PAGE_CGROUP_FLAG_FILE)
- lru += LRU_FILE;
+ if (pc->flags & PAGE_CGROUP_FLAG_UNEVICTABLE)
+ lru = LRU_UNEVICTABLE;
+ else {
+ if (pc->flags & PAGE_CGROUP_FLAG_ACTIVE)
+ lru += LRU_ACTIVE;
+ if (pc->flags & PAGE_CGROUP_FLAG_FILE)
+ lru += LRU_FILE;
+ }
MEM_CGROUP_ZSTAT(mz, lru) -= 1;
@@ -299,10 +304,14 @@ static void __mem_cgroup_add_list(struct
{
int lru = LRU_BASE;
- if (pc->flags & PAGE_CGROUP_FLAG_ACTIVE)
- lru += LRU_ACTIVE;
- if (pc->flags & PAGE_CGROUP_FLAG_FILE)
- lru += LRU_FILE;
+ if (pc->flags & PAGE_CGROUP_FLAG_UNEVICTABLE)
+ lru = LRU_UNEVICTABLE;
+ else {
+ if (pc->flags & PAGE_CGROUP_FLAG_ACTIVE)
+ lru += LRU_ACTIVE;
+ if (pc->flags & PAGE_CGROUP_FLAG_FILE)
+ lru += LRU_FILE;
+ }
MEM_CGROUP_ZSTAT(mz, lru) += 1;
list_add(&pc->lru, &mz->lists[lru]);
@@ -310,21 +319,31 @@ static void __mem_cgroup_add_list(struct
mem_cgroup_charge_statistics(pc->mem_cgroup, pc->flags, true);
}
-static void __mem_cgroup_move_lists(struct page_cgroup *pc, bool active)
+static void __mem_cgroup_move_lists(struct page_cgroup *pc, enum lru_list lru)
{
struct mem_cgroup_per_zone *mz = page_cgroup_zoneinfo(pc);
- int from = pc->flags & PAGE_CGROUP_FLAG_ACTIVE;
- int file = pc->flags & PAGE_CGROUP_FLAG_FILE;
- int lru = LRU_FILE * !!file + !!from;
+ int active = pc->flags & PAGE_CGROUP_FLAG_ACTIVE;
+ int file = pc->flags & PAGE_CGROUP_FLAG_FILE;
+ int unevictable = pc->flags & PAGE_CGROUP_FLAG_UNEVICTABLE;
+ enum lru_list from = unevictable ? LRU_UNEVICTABLE :
+ (LRU_FILE * !!file + !!active);
- MEM_CGROUP_ZSTAT(mz, lru) -= 1;
+ if (lru == from)
+ return;
- if (active)
- pc->flags |= PAGE_CGROUP_FLAG_ACTIVE;
- else
+ MEM_CGROUP_ZSTAT(mz, from) -= 1;
+
+ if (is_unevictable_lru(lru)) {
pc->flags &= ~PAGE_CGROUP_FLAG_ACTIVE;
+ pc->flags |= PAGE_CGROUP_FLAG_UNEVICTABLE;
+ } else {
+ if (is_active_lru(lru))
+ pc->flags |= PAGE_CGROUP_FLAG_ACTIVE;
+ else
+ pc->flags &= ~PAGE_CGROUP_FLAG_ACTIVE;
+ pc->flags &= ~PAGE_CGROUP_FLAG_UNEVICTABLE;
+ }
- lru = LRU_FILE * !!file + !!active;
MEM_CGROUP_ZSTAT(mz, lru) += 1;
list_move(&pc->lru, &mz->lists[lru]);
}
@@ -342,7 +361,7 @@ int task_in_mem_cgroup(struct task_struc
/*
* This routine assumes that the appropriate zone's lru lock is already held
*/
-void mem_cgroup_move_lists(struct page *page, bool active)
+void mem_cgroup_move_lists(struct page *page, enum lru_list lru)
{
struct page_cgroup *pc;
struct mem_cgroup_per_zone *mz;
@@ -362,7 +381,7 @@ void mem_cgroup_move_lists(struct page *
if (pc) {
mz = page_cgroup_zoneinfo(pc);
spin_lock_irqsave(&mz->lru_lock, flags);
- __mem_cgroup_move_lists(pc, active);
+ __mem_cgroup_move_lists(pc, lru);
spin_unlock_irqrestore(&mz->lru_lock, flags);
}
unlock_page_cgroup(page);
@@ -460,12 +479,10 @@ unsigned long mem_cgroup_isolate_pages(u
/*
* TODO: play better with lumpy reclaim, grabbing anything.
*/
- if (PageActive(page) && !active) {
- __mem_cgroup_move_lists(pc, true);
- continue;
- }
- if (!PageActive(page) && active) {
- __mem_cgroup_move_lists(pc, false);
+ if (PageUnevictable(page) ||
+ (PageActive(page) && !active) ||
+ (!PageActive(page) && active)) {
+ __mem_cgroup_move_lists(pc, page_lru(page));
continue;
}
Index: linux-2.6.26-rc5-mm2/include/linux/memcontrol.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/memcontrol.h 2008-06-10 16:23:31.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/memcontrol.h 2008-06-10 22:11:08.000000000 -0400
@@ -34,9 +34,9 @@ extern int mem_cgroup_charge(struct page
gfp_t gfp_mask);
extern int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask);
+extern void mem_cgroup_move_lists(struct page *page, enum lru_list lru);
extern void mem_cgroup_uncharge_page(struct page *page);
extern void mem_cgroup_uncharge_cache_page(struct page *page);
-extern void mem_cgroup_move_lists(struct page *page, bool active);
extern int mem_cgroup_shrink_usage(struct mm_struct *mm, gfp_t gfp_mask);
extern unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
--
All Rights Reversed
* [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable
From: Rik van Riel @ 2008-06-11 18:42 UTC
To: linux-kernel
Cc: Andrew Morton, Lee Schermerhorn, Kosaki Motohiro, linux-mm, Eric Whitney
[-- Attachment #1: vmscan-ramfs-and-ram-disk-pages-are-non-reclaimable.patch --]
[-- Type: text/plain, Size: 4864 bytes --]
From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Christoph Lameter pointed out that ram disk pages also clutter the
LRU lists. When vmscan finds them dirty and tries to clean them,
the ram disk writeback function just redirties the page so that it
goes back onto the active list. Round and round she goes...
Define a new address_space flag [it shares the address_space flags
member with the mapping's gfp mask] to indicate that all pages in the
address space are unevictable. This provides for efficient testing
of ramdisk pages in page_evictable().
Also provide wrapper functions to set/test the unevictable state, to
minimize #ifdefs in the ramdisk driver and any other users of this
facility.
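A minimal usage sketch of the wrappers (this is what the ramfs, brd and
vmscan hunks below boil down to):

	/* at inode creation or device open time: */
	mapping_set_unevictable(inode->i_mapping);

	/* in page_evictable(), to cull such pages: */
	if (mapping_unevictable(page_mapping(page)))
		return 0;	/* not evictable */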
Set the unevictable state on address_space structures for new
ramdisk inodes. Test the unevictable state in page_evictable()
to cull unevictable pages.
Similarly, ramfs pages are unevictable. Set the 'unevictable'
address_space flag for new ramfs inodes.
These changes depend on [CONFIG_]UNEVICTABLE_LRU.
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
drivers/block/brd.c | 13 +++++++++++++
fs/ramfs/inode.c | 1 +
include/linux/pagemap.h | 22 ++++++++++++++++++++++
mm/vmscan.c | 5 +++++
4 files changed, 41 insertions(+)
Index: linux-2.6.26-rc5-mm2/include/linux/pagemap.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/pagemap.h 2008-06-10 10:46:23.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/pagemap.h 2008-06-10 16:47:23.000000000 -0400
@@ -30,6 +30,28 @@ static inline void mapping_set_error(str
}
}
+#ifdef CONFIG_UNEVICTABLE_LRU
+#define AS_UNEVICTABLE (__GFP_BITS_SHIFT + 2) /* e.g., ramdisk, SHM_LOCK */
+
+static inline void mapping_set_unevictable(struct address_space *mapping)
+{
+ set_bit(AS_UNEVICTABLE, &mapping->flags);
+}
+
+static inline int mapping_unevictable(struct address_space *mapping)
+{
+ if (mapping && (mapping->flags & AS_UNEVICTABLE))
+ return 1;
+ return 0;
+}
+#else
+static inline void mapping_set_unevictable(struct address_space *mapping) { }
+static inline int mapping_unevictable(struct address_space *mapping)
+{
+ return 0;
+}
+#endif
+
static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
{
return (__force gfp_t)mapping->flags & __GFP_BITS_MASK;
Index: linux-2.6.26-rc5-mm2/mm/vmscan.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/vmscan.c 2008-06-10 16:25:49.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/vmscan.c 2008-06-10 16:47:23.000000000 -0400
@@ -2320,6 +2320,8 @@ int zone_reclaim(struct zone *zone, gfp_
* lists vs unevictable list.
*
* Reasons page might not be evictable:
+ * (1) page's mapping marked unevictable
+ *
* TODO - later patches
*/
int page_evictable(struct page *page, struct vm_area_struct *vma)
@@ -2327,6 +2329,9 @@ int page_evictable(struct page *page, st
VM_BUG_ON(PageUnevictable(page));
+ if (mapping_unevictable(page_mapping(page)))
+ return 0;
+
/* TODO: test page [!]evictable conditions */
return 1;
Index: linux-2.6.26-rc5-mm2/fs/ramfs/inode.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/fs/ramfs/inode.c 2008-06-10 10:25:06.000000000 -0400
+++ linux-2.6.26-rc5-mm2/fs/ramfs/inode.c 2008-06-10 16:47:23.000000000 -0400
@@ -61,6 +61,7 @@ struct inode *ramfs_get_inode(struct sup
inode->i_mapping->a_ops = &ramfs_aops;
inode->i_mapping->backing_dev_info = &ramfs_backing_dev_info;
mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+ mapping_set_unevictable(inode->i_mapping);
inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
switch (mode & S_IFMT) {
default:
Index: linux-2.6.26-rc5-mm2/drivers/block/brd.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/drivers/block/brd.c 2008-06-10 10:46:18.000000000 -0400
+++ linux-2.6.26-rc5-mm2/drivers/block/brd.c 2008-06-10 16:47:23.000000000 -0400
@@ -374,8 +374,21 @@ static int brd_ioctl(struct inode *inode
return error;
}
+/*
+ * brd_open():
+ * Just mark the mapping as containing unevictable pages
+ */
+static int brd_open(struct inode *inode, struct file *filp)
+{
+ struct address_space *mapping = inode->i_mapping;
+
+ mapping_set_unevictable(mapping);
+ return 0;
+}
+
static struct block_device_operations brd_fops = {
.owner = THIS_MODULE,
+ .open = brd_open,
.ioctl = brd_ioctl,
#ifdef CONFIG_BLK_DEV_XIP
.direct_access = brd_direct_access,
--
All Rights Reversed
* [PATCH -mm 16/24] mlock: mlocked pages are unevictable
From: Rik van Riel, Nick Piggin @ 2008-06-11 18:42 UTC
To: linux-kernel
Cc: Andrew Morton, Lee Schermerhorn, Kosaki Motohiro, linux-mm,
Eric Whitney, Nick Piggin
[-- Attachment #1: vmscan-mlocked-pages-are-non-reclaimable.patch --]
[-- Type: text/plain, Size: 44331 bytes --]
Make sure that mlocked pages also live on the unevictable
LRU, so kswapd will not scan them over and over again.
This is achieved through various strategies:
1) add yet another page flag--PG_mlocked--to indicate that
the page is mlocked, for efficient testing in vmscan and,
optionally, the fault path. This allows early culling of
unevictable pages, preventing them from getting to
page_referenced()/try_to_unmap(). Also allows separate
accounting of mlock'd pages, as Nick's original patch
did.
Note: Nick's original mlock patch used a PG_mlocked
flag. I had removed this in favor of the PG_unevictable
flag + an mlock_count [new page struct member]. I
restored the PG_mlocked flag to eliminate the new
count field.
2) add the mlock/unevictable infrastructure to mm/mlock.c,
with internal APIs in mm/internal.h. This is a rework
of Nick's original patch to these files, taking into
account that mlocked pages are now kept on unevictable
LRU list.
3) update vmscan.c:page_evictable() to check PageMlocked()
and, if a vma is passed in, its vm_flags (see the sketch
after this list). Note that the vma will only be passed
in for new pages in the fault path, and then only if the
"cull unevictable pages in fault path" patch is included.
4) add try_to_unlock() to rmap.c to walk a page's rmap and
ClearPageMlocked() if no other vmas have it mlocked.
Reuses as much of try_to_unmap() as possible. This
effectively replaces the use of one of the lru list links
as an mlock count. If this mechanism lets pages in mlocked
vmas leak through w/o PG_mlocked set [I don't know that it
does], we should catch them later in try_to_unmap(). One
hopes this will be rare, as it will be relatively expensive.
5) Kosaki: added munlock page table walk to avoid using
get_user_pages() for unlock. get_user_pages() is unreliable
for some vma protections.
Lee: modified to wait for in-flight migration to complete
to close munlock/migration race that could strand pages.
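As a rough sketch of how strategies (1) and (3) combine in
page_evictable()--the exact hunk is further down in this patch and not
quoted here, so treat this as illustrative only--using the
is_mlocked_vma() helper added to mm/internal.h below:

int page_evictable(struct page *page, struct vm_area_struct *vma)
{
	if (mapping_unevictable(page_mapping(page)))
		return 0;

	/*
	 * Mlocked pages, and new pages being faulted into a VM_LOCKED
	 * vma, are unevictable.
	 */
	if (PageMlocked(page) || (vma && is_mlocked_vma(vma, page)))
		return 0;

	return 1;
}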
Original mm/internal.h, mm/rmap.c and mm/mlock.c changes:
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
---
V8:
+ more refinement of rmap interaction, including attempt to
handle mlocked pages in non-linear mappings.
+ cleanup of lockdep reported errors.
+ enhancement of munlock page table walker to detect and
handle pages under migration [migration ptes].
V6:
+ Kosaki-san and Rik van Riel: added check for "page mapped
in vma" to try_to_unlock() processing in try_to_unmap_anon().
+ Kosaki-san added munlock page table walker to avoid use of
get_user_pages() for munlock. get_user_pages() proved to be
unreliable for some types of vmas.
+ added filtering of "special" vmas. Some [_IO||_PFN] we skip
altogether. Others, we just "make_pages_present" to simulate
old behavior--i.e., populate page tables. Clear/don't set
VM_LOCKED in non-mlockable vmas so that we don't try to unlock
at exit/unmap time.
+ rework PG_mlock page flag definitions for new page flags
macros.
+ Clear PageMlocked when COWing a page into a VM_LOCKED vma
so we don't leave an mlocked page in another non-mlocked
vma. If the other vma[s] had the page mlocked, we'll re-mlock
it if/when we try to reclaim it. This is less expensive than
walking the rmap in the COW/fault path.
+ in vmscan:shrink_page_list(), avoid adding anon page to
the swap cache if it's in a VM_LOCKED vma, even tho'
PG_mlocked might not be set. Call try_to_unlock() to
determine this. As a result, we'll never try to unmap
an mlocked anon page.
+ in support of the above change, updated try_to_unlock()
to use same logic as try_to_unmap() when it encounters a
VM_LOCKED vma--call mlock_vma_page() directly. Added
stub try_to_unlock() for vmscan when NORECLAIM_MLOCK
not configured.
V4 -> V5:
+ fixed problem with placement of #ifdef CONFIG_NORECLAIM_MLOCK
in prep_new_page() [Thanks, minchan Kim!].
V3 -> V4:
+ Added #ifdef CONFIG_NORECLAIM_MLOCK, #endif around use of
PG_mlocked in free_page_check(), et al. Not defined for
32-bit builds.
V2 -> V3:
+ rebase to 23-mm1 atop RvR's split lru series
+ fix page flags macros for *PageMlocked() when not configured.
+ ensure lru_add_drain_all() runs on all cpus when NORECLAIM_MLOCK
configured. Was just for NUMA.
V1 -> V2:
+ moved this patch [and related patches] up to right after
ramdisk/ramfs and SHM_LOCKed patches.
+ add [back] missing put_page() in putback_lru_page().
This solved page leakage as seen by stats in previous
version.
+ fix up munlock_vma_page() to isolate page from lru
before calling try_to_unlock(). Think I detected a
race here.
+ use TestClearPageMlock() on old page in migrate.c's
migrate_page_copy() to clean up old page.
+ live dangerously: remove TestSetPageLocked() in
is_mlocked_vma()--should only be called on new pages in
the fault path--iff we chose to cull there [later patch].
+ Add PG_mlocked to free_pages_check() etc to detect mlock
state mismanagement.
NOTE: temporarily [???] commented out--tripping over it
under load. Why?
Rework of Nick Piggin's "mm: move mlocked pages off the LRU" patch
-- part 1 of 2.
include/linux/mm.h | 5
include/linux/page-flags.h | 11 +
include/linux/rmap.h | 14 +
mm/internal.h | 63 +++++++
mm/memory.c | 19 ++
mm/migrate.c | 2
mm/mlock.c | 383 ++++++++++++++++++++++++++++++++++++++++++---
mm/mmap.c | 2
mm/page_alloc.c | 9 -
mm/rmap.c | 255 +++++++++++++++++++++++++----
mm/swap.c | 2
mm/vmscan.c | 36 +++-
12 files changed, 732 insertions(+), 69 deletions(-)
Index: linux-2.6.26-rc5-mm2/mm/internal.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/internal.h 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/internal.h 2008-06-10 22:27:14.000000000 -0400
@@ -61,6 +61,10 @@ static inline unsigned long page_order(s
return page_private(page);
}
+extern int mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end);
+extern void munlock_vma_pages_all(struct vm_area_struct *vma);
+
#ifdef CONFIG_UNEVICTABLE_LRU
/*
* unevictable_migrate_page() called only from migrate_page_copy() to
@@ -79,6 +83,65 @@ static inline void unevictable_migrate_p
}
#endif
+#ifdef CONFIG_UNEVICTABLE_LRU
+/*
+ * Called only in fault path via page_evictable() for a new page
+ * to determine if it's being mapped into a LOCKED vma.
+ * If so, mark page as mlocked.
+ */
+static inline int is_mlocked_vma(struct vm_area_struct *vma, struct page *page)
+{
+ VM_BUG_ON(PageLRU(page));
+
+ if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
+ return 0;
+
+ SetPageMlocked(page);
+ return 1;
+}
+
+/*
+ * must be called with vma's mmap_sem held for read, and page locked.
+ */
+extern void mlock_vma_page(struct page *page);
+
+/*
+ * Clear the page's PageMlocked(). This can be useful in a situation where
+ * we want to unconditionally remove a page from the pagecache -- e.g.,
+ * on truncation or freeing.
+ *
+ * It is legal to call this function for any page, mlocked or not.
+ * If called for a page that is still mapped by mlocked vmas, all we do
+ * is revert to lazy LRU behaviour -- semantics are not broken.
+ */
+extern void __clear_page_mlock(struct page *page);
+static inline void clear_page_mlock(struct page *page)
+{
+ if (unlikely(TestClearPageMlocked(page)))
+ __clear_page_mlock(page);
+}
+
+/*
+ * mlock_migrate_page - called only from migrate_page_copy() to
+ * migrate the Mlocked page flag
+ */
+static inline void mlock_migrate_page(struct page *newpage, struct page *page)
+{
+ if (TestClearPageMlocked(page))
+ SetPageMlocked(newpage);
+}
+
+
+#else /* CONFIG_UNEVICTABLE_LRU */
+static inline int is_mlocked_vma(struct vm_area_struct *v, struct page *p)
+{
+ return 0;
+}
+static inline void clear_page_mlock(struct page *page) { }
+static inline void mlock_vma_page(struct page *page) { }
+static inline void mlock_migrate_page(struct page *new, struct page *old) { }
+
+#endif /* CONFIG_UNEVICTABLE_LRU */
/*
* FLATMEM and DISCONTIGMEM configurations use alloc_bootmem_node,
Index: linux-2.6.26-rc5-mm2/mm/mlock.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/mlock.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/mlock.c 2008-06-10 22:28:01.000000000 -0400
@@ -8,10 +8,18 @@
#include <linux/capability.h>
#include <linux/mman.h>
#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
+#include <linux/pagemap.h>
#include <linux/mempolicy.h>
#include <linux/syscalls.h>
#include <linux/sched.h>
#include <linux/module.h>
+#include <linux/rmap.h>
+#include <linux/mmzone.h>
+#include <linux/hugetlb.h>
+
+#include "internal.h"
int can_do_mlock(void)
{
@@ -23,17 +31,350 @@ int can_do_mlock(void)
}
EXPORT_SYMBOL(can_do_mlock);
+#ifdef CONFIG_UNEVICTABLE_LRU
+/*
+ * Mlocked pages are marked with PageMlocked() flag for efficient testing
+ * in vmscan and, possibly, the fault path; and to support semi-accurate
+ * statistics.
+ *
+ * An mlocked page [PageMlocked(page)] is unevictable. As such, it will
+ * be placed on the LRU "unevictable" list, rather than the [in]active lists.
+ * The unevictable list is an LRU sibling list to the [in]active lists.
+ * PageUnevictable is set to indicate the unevictable state.
+ *
+ * When lazy mlocking via vmscan, it is important to ensure that the
+ * vma's VM_LOCKED status is not concurrently being modified, otherwise we
+ * may have mlocked a page that is being munlocked. So lazy mlock must take
+ * the mmap_sem for read, and verify that the vma really is locked
+ * (see mm/rmap.c).
+ */
+
+/*
+ * LRU accounting for clear_page_mlock()
+ */
+void __clear_page_mlock(struct page *page)
+{
+ VM_BUG_ON(!PageLocked(page)); /* for LRU isolate/putback */
+
+ if (!isolate_lru_page(page)) {
+ putback_lru_page(page);
+ } else {
+ /*
+ * Page not on the LRU yet. Flush all pagevecs and retry.
+ */
+ lru_add_drain_all();
+ if (!isolate_lru_page(page))
+ putback_lru_page(page);
+ }
+}
+
+/*
+ * Mark page as mlocked if not already.
+ * If page on LRU, isolate and putback to move to unevictable list.
+ */
+void mlock_vma_page(struct page *page)
+{
+ BUG_ON(!PageLocked(page));
+
+ if (!TestSetPageMlocked(page) && !isolate_lru_page(page))
+ putback_lru_page(page);
+}
+
+/*
+ * called from munlock()/munmap() path with page supposedly on the LRU.
+ *
+ * Note: unlike mlock_vma_page(), we can't just clear the PageMlocked
+ * [in try_to_munlock()] and then attempt to isolate the page. We must
+ * isolate the page to keep others from messing with its unevictable
+ * and mlocked state while trying to munlock. However, we pre-clear the
+ * mlocked state anyway as we might lose the isolation race and we might
+ * not get another chance to clear PageMlocked. If we successfully
+ * isolate the page and try_to_munlock() detects other VM_LOCKED vmas
+ * mapping the page, it will restore the PageMlocked state, unless the page
+ * is mapped in a non-linear vma. So, we go ahead and SetPageMlocked(),
+ * perhaps redundantly.
+ * If we lose the isolation race, and the page is mapped by other VM_LOCKED
+ * vmas, we'll detect this in vmscan--via try_to_munlock() or try_to_unmap()
+ * either of which will restore the PageMlocked state by calling
+ * mlock_vma_page() above, if it can grab the vma's mmap sem.
+ */
+static void munlock_vma_page(struct page *page)
+{
+ BUG_ON(!PageLocked(page));
+
+ if (TestClearPageMlocked(page) && !isolate_lru_page(page)) {
+ try_to_munlock(page);
+ putback_lru_page(page);
+ }
+}
+
+/*
+ * mlock a range of pages in the vma.
+ *
+ * This takes care of making the pages present too.
+ *
+ * vma->vm_mm->mmap_sem must be held for write.
+ */
+static int __mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long addr = start;
+ struct page *pages[16]; /* 16 gives a reasonable batch */
+ int write = !!(vma->vm_flags & VM_WRITE);
+ int nr_pages = (end - start) / PAGE_SIZE;
+ int ret;
+
+ VM_BUG_ON(start & ~PAGE_MASK || end & ~PAGE_MASK);
+ VM_BUG_ON(start < vma->vm_start || end > vma->vm_end);
+ VM_BUG_ON(!rwsem_is_locked(&vma->vm_mm->mmap_sem));
+
+ lru_add_drain_all(); /* push cached pages to LRU */
+
+ while (nr_pages > 0) {
+ int i;
+
+ cond_resched();
+
+ /*
+ * get_user_pages makes pages present if we are
+ * setting mlock.
+ */
+ ret = get_user_pages(current, mm, addr,
+ min_t(int, nr_pages, ARRAY_SIZE(pages)),
+ write, 0, pages, NULL);
+ /*
+ * This can happen for, e.g., VM_NONLINEAR regions before
+ * a page has been allocated and mapped at a given offset,
+ * or for addresses that map beyond end of a file.
+ * We'll mlock the the pages if/when they get faulted in.
+ */
+ if (ret < 0)
+ break;
+ if (ret == 0) {
+ /*
+ * We know the vma is there, so the only time
+ * we cannot get a single page should be an
+ * error (ret < 0) case.
+ */
+ WARN_ON(1);
+ break;
+ }
+
+ lru_add_drain(); /* push cached pages to LRU */
+
+ for (i = 0; i < ret; i++) {
+ struct page *page = pages[i];
+
+ /*
+ * page might be truncated or migrated out from under
+ * us. Check after acquiring page lock.
+ */
+ lock_page(page);
+ if (page->mapping)
+ mlock_vma_page(page);
+ unlock_page(page);
+ put_page(page); /* ref from get_user_pages() */
+
+ /*
+ * here we assume that get_user_pages() has given us
+ * a list of virtually contiguous pages.
+ */
+ addr += PAGE_SIZE; /* for next get_user_pages() */
+ nr_pages--;
+ }
+ }
+
+ lru_add_drain_all(); /* to update stats */
+
+ return 0; /* count entire vma as locked_vm */
+}
+
+/*
+ * private structure for munlock page table walk
+ */
+struct munlock_page_walk {
+ struct vm_area_struct *vma;
+ pmd_t *pmd; /* for migration_entry_wait() */
+};
+
+/*
+ * munlock normal pages for present ptes
+ */
+static int __munlock_pte_handler(pte_t *ptep, unsigned long addr,
+ unsigned long end, void *private)
+{
+ struct munlock_page_walk *mpw = private;
+ swp_entry_t entry;
+ struct page *page;
+ pte_t pte;
+
+retry:
+ pte = *ptep;
+ /*
+ * If it's a swap pte, we might be racing with page migration.
+ */
+ if (unlikely(!pte_present(pte))) {
+ if (!is_swap_pte(pte))
+ goto out;
+ entry = pte_to_swp_entry(pte);
+ if (is_migration_entry(entry)) {
+ migration_entry_wait(mpw->vma->vm_mm, mpw->pmd, addr);
+ goto retry;
+ }
+ goto out;
+ }
+
+ page = vm_normal_page(mpw->vma, addr, pte);
+ if (!page)
+ goto out;
+
+ lock_page(page);
+ if (!page->mapping) {
+ unlock_page(page);
+ goto retry;
+ }
+ munlock_vma_page(page);
+ unlock_page(page);
+
+out:
+ return 0;
+}
+
+/*
+ * Save pmd for pte handler for waiting on migration entries
+ */
+static int __munlock_pmd_handler(pmd_t *pmd, unsigned long addr,
+ unsigned long end, void *private)
+{
+ struct munlock_page_walk *mpw = private;
+
+ mpw->pmd = pmd;
+ return 0;
+}
+
+static struct mm_walk munlock_page_walk = {
+ .pmd_entry = __munlock_pmd_handler,
+ .pte_entry = __munlock_pte_handler,
+};
+
+/*
+ * munlock a range of pages in the vma using standard page table walk.
+ *
+ * vma->vm_mm->mmap_sem must be held for write.
+ */
+static void __munlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ struct munlock_page_walk mpw;
+
+ VM_BUG_ON(start & ~PAGE_MASK || end & ~PAGE_MASK);
+ VM_BUG_ON(!rwsem_is_locked(&vma->vm_mm->mmap_sem));
+ VM_BUG_ON(start < vma->vm_start);
+ VM_BUG_ON(end > vma->vm_end);
+
+ lru_add_drain_all(); /* push cached pages to LRU */
+ mpw.vma = vma;
+ walk_page_range(mm, start, end, &munlock_page_walk, &mpw);
+ lru_add_drain_all(); /* to update stats */
+}
+
+#else /* CONFIG_UNEVICTABLE_LRU */
+
+/*
+ * Just make pages present if VM_LOCKED. No-op if unlocking.
+ */
+static int __mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ if (vma->vm_flags & VM_LOCKED)
+ make_pages_present(start, end);
+ return 0;
+}
+
+/*
+ * munlock a range of pages in the vma -- no-op.
+ */
+static void __munlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+}
+#endif /* CONFIG_UNEVICTABLE_LRU */
+
+/*
+ * mlock all pages in this vma range. For mmap()/mremap()/...
+ */
+int mlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ int nr_pages = (end - start) / PAGE_SIZE;
+ BUG_ON(!(vma->vm_flags & VM_LOCKED));
+
+ /*
+ * filter unlockable vmas
+ */
+ if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+ goto no_mlock;
+
+ if (!((vma->vm_flags & (VM_DONTEXPAND | VM_RESERVED)) ||
+ is_vm_hugetlb_page(vma) ||
+ vma == get_gate_vma(current)))
+ return __mlock_vma_pages_range(vma, start, end);
+
+ /*
+ * User mapped kernel pages or huge pages:
+ * make these pages present to populate the ptes, but
+ * fall thru' to reset VM_LOCKED--no need to unlock, and
+ * return nr_pages so these don't get counted against task's
+ * locked limit. huge pages are already counted against
+ * locked vm limit.
+ */
+ make_pages_present(start, end);
+
+no_mlock:
+ vma->vm_flags &= ~VM_LOCKED; /* and don't come back! */
+ return nr_pages; /* pages NOT mlocked */
+}
+
+
+/*
+ * munlock all pages in vma. For munmap() and exit().
+ */
+void munlock_vma_pages_all(struct vm_area_struct *vma)
+{
+ vma->vm_flags &= ~VM_LOCKED;
+ __munlock_vma_pages_range(vma, vma->vm_start, vma->vm_end);
+}
+
+/*
+ * mlock_fixup - handle mlock[all]/munlock[all] requests.
+ *
+ * Filters out "special" vmas -- VM_LOCKED never gets set for these, and
+ * munlock is a no-op. However, for some special vmas, we go ahead and
+ * populate the ptes via make_pages_present().
+ *
+ * For vmas that pass the filters, merge/split as appropriate.
+ */
static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
unsigned long start, unsigned long end, unsigned int newflags)
{
- struct mm_struct * mm = vma->vm_mm;
+ struct mm_struct *mm = vma->vm_mm;
pgoff_t pgoff;
- int pages;
+ int nr_pages;
int ret = 0;
+ int lock = newflags & VM_LOCKED;
- if (newflags == vma->vm_flags) {
- *prev = vma;
- goto out;
+ if (newflags == vma->vm_flags ||
+ (vma->vm_flags & (VM_IO | VM_PFNMAP)))
+ goto out; /* don't set VM_LOCKED, don't count */
+
+ if ((vma->vm_flags & (VM_DONTEXPAND | VM_RESERVED)) ||
+ is_vm_hugetlb_page(vma) ||
+ vma == get_gate_vma(current)) {
+ if (lock)
+ make_pages_present(start, end);
+ goto out; /* don't set VM_LOCKED, don't count */
}
pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
@@ -44,8 +385,6 @@ static int mlock_fixup(struct vm_area_st
goto success;
}
- *prev = vma;
-
if (start != vma->vm_start) {
ret = split_vma(mm, vma, start, 1);
if (ret)
@@ -60,24 +399,31 @@ static int mlock_fixup(struct vm_area_st
success:
/*
+ * Keep track of amount of locked VM.
+ */
+ nr_pages = (end - start) >> PAGE_SHIFT;
+ if (!lock)
+ nr_pages = -nr_pages;
+ mm->locked_vm += nr_pages;
+
+ /*
* vm_flags is protected by the mmap_sem held in write mode.
* It's okay if try_to_unmap_one unmaps a page just after we
- * set VM_LOCKED, make_pages_present below will bring it back.
+ * set VM_LOCKED, __mlock_vma_pages_range will bring it back.
*/
vma->vm_flags = newflags;
- /*
- * Keep track of amount of locked VM.
- */
- pages = (end - start) >> PAGE_SHIFT;
- if (newflags & VM_LOCKED) {
- pages = -pages;
- if (!(newflags & VM_IO))
- ret = make_pages_present(start, end);
- }
+ if (lock) {
+ ret = __mlock_vma_pages_range(vma, start, end);
+ if (ret > 0) {
+ mm->locked_vm -= ret;
+ ret = 0;
+ }
+ } else
+ __munlock_vma_pages_range(vma, start, end);
- mm->locked_vm -= pages;
out:
+ *prev = vma;
if (ret == -ENOMEM)
ret = -EAGAIN;
return ret;
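
For reference, once the hunk above is applied the tail of mlock_fixup() reads
roughly as follows (a sketch reconstructed from the interleaved -/+ lines,
shown only to make the result easier to follow; not authoritative):

	success:
		/*
		 * Keep track of amount of locked VM.
		 */
		nr_pages = (end - start) >> PAGE_SHIFT;
		if (!lock)
			nr_pages = -nr_pages;
		mm->locked_vm += nr_pages;

		/*
		 * vm_flags is protected by the mmap_sem held in write mode.
		 */
		vma->vm_flags = newflags;

		if (lock) {
			ret = __mlock_vma_pages_range(vma, start, end);
			if (ret > 0) {
				mm->locked_vm -= ret;	/* adjust by the count reported back */
				ret = 0;
			}
		} else
			__munlock_vma_pages_range(vma, start, end);

	out:
		*prev = vma;
		if (ret == -ENOMEM)
			ret = -EAGAIN;
		return ret;
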
Index: linux-2.6.26-rc5-mm2/mm/vmscan.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/vmscan.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/vmscan.c 2008-06-10 22:27:14.000000000 -0400
@@ -552,11 +552,8 @@ static unsigned long shrink_page_list(st
sc->nr_scanned++;
- if (unlikely(!page_evictable(page, NULL))) {
- if (putback_lru_page(page))
- unlock_page(page);
- continue;
- }
+ if (unlikely(!page_evictable(page, NULL)))
+ goto cull_mlocked;
if (!sc->may_swap && page_mapped(page))
goto keep_locked;
@@ -594,9 +591,19 @@ static unsigned long shrink_page_list(st
* Anonymous process memory has backing store?
* Try to allocate it some swap space here.
*/
- if (PageAnon(page) && !PageSwapCache(page))
+ if (PageAnon(page) && !PageSwapCache(page)) {
+ switch (try_to_munlock(page)) {
+ case SWAP_FAIL: /* shouldn't happen */
+ case SWAP_AGAIN:
+ goto keep_locked;
+ case SWAP_MLOCK:
+ goto cull_mlocked;
+ case SWAP_SUCCESS:
+ ; /* fall thru'; add to swap cache */
+ }
if (!add_to_swap(page, GFP_ATOMIC))
goto activate_locked;
+ }
#endif /* CONFIG_SWAP */
mapping = page_mapping(page);
@@ -611,6 +618,8 @@ static unsigned long shrink_page_list(st
goto activate_locked;
case SWAP_AGAIN:
goto keep_locked;
+ case SWAP_MLOCK:
+ goto cull_mlocked;
case SWAP_SUCCESS:
; /* try to free the page below */
}
@@ -685,6 +694,11 @@ free_it:
__pagevec_release_nonlru(&freed_pvec);
continue;
+cull_mlocked:
+ if (putback_lru_page(page))
+ unlock_page(page);
+ continue;
+
activate_locked:
/* Not a candidate for swapping, so reclaim swap space. */
if (PageSwapCache(page) && vm_swap_full())
@@ -696,7 +710,7 @@ keep_locked:
unlock_page(page);
keep:
list_add(&page->lru, &ret_pages);
- VM_BUG_ON(PageLRU(page));
+ VM_BUG_ON(PageLRU(page) || PageUnevictable(page));
}
list_splice(&ret_pages, page_list);
if (pagevec_count(&freed_pvec))
@@ -2317,12 +2331,13 @@ int zone_reclaim(struct zone *zone, gfp_
* @vma: the VMA in which the page is or will be mapped, may be NULL
*
* Test whether page is evictable--i.e., should be placed on active/inactive
- * lists vs unevictable list.
+ * lists vs unevictable list. The vma argument is !NULL when called from the
+ * fault path to determine how to instantiate a new page.
*
* Reasons page might not be evictable:
* (1) page's mapping marked unevictable
+ * (2) page is part of an mlocked VMA
*
- * TODO - later patches
*/
int page_evictable(struct page *page, struct vm_area_struct *vma)
{
@@ -2332,7 +2347,8 @@ int page_evictable(struct page *page, st
if (mapping_unevictable(page_mapping(page)))
return 0;
- /* TODO: test page [!]evictable conditions */
+ if (PageMlocked(page) || (vma && is_mlocked_vma(vma, page)))
+ return 0;
return 1;
}
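
With the hunks above applied, page_evictable() reduces to roughly the
following (sketch only; is_mlocked_vma() is the mm/internal.h helper
referenced above):

	int page_evictable(struct page *page, struct vm_area_struct *vma)
	{
		if (mapping_unevictable(page_mapping(page)))
			return 0;	/* reason (1): mapping marked unevictable */

		if (PageMlocked(page) || (vma && is_mlocked_vma(vma, page)))
			return 0;	/* reason (2): page is part of an mlocked VMA */

		return 1;
	}
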
Index: linux-2.6.26-rc5-mm2/include/linux/page-flags.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/page-flags.h 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/page-flags.h 2008-06-10 22:20:57.000000000 -0400
@@ -96,6 +96,7 @@ enum pageflags {
PG_swapbacked, /* Page is backed by RAM/swap */
#ifdef CONFIG_UNEVICTABLE_LRU
PG_unevictable, /* Page is "unevictable" */
+ PG_mlocked, /* Page is vma mlocked */
#endif
#ifdef CONFIG_IA64_UNCACHED_ALLOCATOR
PG_uncached, /* Page has been mapped as uncached */
@@ -232,7 +233,17 @@ PAGEFLAG_FALSE(SwapCache)
#ifdef CONFIG_UNEVICTABLE_LRU
PAGEFLAG(Unevictable, unevictable) __CLEARPAGEFLAG(Unevictable, unevictable)
TESTCLEARFLAG(Unevictable, unevictable)
+
+#define MLOCK_PAGES 1
+PAGEFLAG(Mlocked, mlocked) __CLEARPAGEFLAG(Mlocked, mlocked)
+ TESTSCFLAG(Mlocked, mlocked)
+
#else
+
+#define MLOCK_PAGES 0
+PAGEFLAG_FALSE(Mlocked)
+ SETPAGEFLAG_NOOP(Mlocked) TESTCLEARFLAG_FALSE(Mlocked)
+
PAGEFLAG_FALSE(Unevictable) TESTCLEARFLAG_FALSE(Unevictable)
SETPAGEFLAG_NOOP(Unevictable) CLEARPAGEFLAG_NOOP(Unevictable)
__CLEARPAGEFLAG_NOOP(Unevictable)
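
For reference, assuming the standard page-flags macro conventions,
PAGEFLAG(Mlocked, mlocked) provides PageMlocked()/SetPageMlocked()/
ClearPageMlocked(), TESTSCFLAG() adds TestSetPageMlocked() and
TestClearPageMlocked(), and the MLOCK_PAGES constant plus the
PAGEFLAG_FALSE/SETPAGEFLAG_NOOP stubs let callers write, e.g.:

	if (MLOCK_PAGES && TestClearPageMlocked(page)) {
		/* page really was mlocked; caller-specific handling goes here */
	}

without any #ifdef, since the whole branch is compiled away when
CONFIG_UNEVICTABLE_LRU is not set.
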
Index: linux-2.6.26-rc5-mm2/include/linux/rmap.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/rmap.h 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/rmap.h 2008-06-10 22:27:14.000000000 -0400
@@ -109,6 +109,19 @@ unsigned long page_address_in_vma(struct
*/
int page_mkclean(struct page *);
+#ifdef CONFIG_UNEVICTABLE_LRU
+/*
+ * called in munlock()/munmap() path to check for other vmas holding
+ * the page mlocked.
+ */
+int try_to_munlock(struct page *);
+#else
+static inline int try_to_munlock(struct page *page)
+{
+ return 0; /* a.k.a. SWAP_SUCCESS */
+}
+#endif
+
#else /* !CONFIG_MMU */
#define anon_vma_init() do {} while (0)
@@ -132,5 +145,6 @@ static inline int page_mkclean(struct pa
#define SWAP_SUCCESS 0
#define SWAP_AGAIN 1
#define SWAP_FAIL 2
+#define SWAP_MLOCK 3
#endif /* _LINUX_RMAP_H */
Index: linux-2.6.26-rc5-mm2/mm/rmap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/rmap.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/rmap.c 2008-06-10 22:27:14.000000000 -0400
@@ -52,6 +52,8 @@
#include <asm/tlbflush.h>
+#include "internal.h"
+
struct kmem_cache *anon_vma_cachep;
/* This must be called under the mmap_sem. */
@@ -263,6 +265,32 @@ pte_t *page_check_address(struct page *p
return NULL;
}
+/**
+ * page_mapped_in_vma - check whether a page is really mapped in a VMA
+ * @page: the page to test
+ * @vma: the VMA to test
+ *
+ * Returns 1 if the page is mapped into the page tables of the VMA, 0
+ * if the page is not mapped into the page tables of this VMA. Only
+ * valid for normal file or anonymous VMAs.
+ */
+static int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+{
+ unsigned long address;
+ pte_t *pte;
+ spinlock_t *ptl;
+
+ address = vma_address(page, vma);
+ if (address == -EFAULT) /* out of vma range */
+ return 0;
+ pte = page_check_address(page, vma->vm_mm, address, &ptl);
+ if (!pte) /* the page is not in this mm */
+ return 0;
+ pte_unmap_unlock(pte, ptl);
+
+ return 1;
+}
+
/*
* Subfunctions of page_referenced: page_referenced_one called
* repeatedly from either page_referenced_anon or page_referenced_file.
@@ -284,10 +312,17 @@ static int page_referenced_one(struct pa
if (!pte)
goto out;
+ /*
+ * Don't want to elevate referenced for mlocked page that gets this far,
+ * in order that it progresses to try_to_unmap and is moved to the
+ * unevictable list.
+ */
if (vma->vm_flags & VM_LOCKED) {
- referenced++;
*mapcount = 1; /* break early from loop */
- } else if (ptep_clear_flush_young(vma, address, pte))
+ goto out_unmap;
+ }
+
+ if (ptep_clear_flush_young(vma, address, pte))
referenced++;
/* Pretend the page is referenced if the task has the
@@ -296,6 +331,7 @@ static int page_referenced_one(struct pa
rwsem_is_locked(&mm->mmap_sem))
referenced++;
+out_unmap:
(*mapcount)--;
pte_unmap_unlock(pte, ptl);
out:
@@ -385,11 +421,6 @@ static int page_referenced_file(struct p
*/
if (mem_cont && !mm_match_cgroup(vma->vm_mm, mem_cont))
continue;
- if ((vma->vm_flags & (VM_LOCKED|VM_MAYSHARE))
- == (VM_LOCKED|VM_MAYSHARE)) {
- referenced++;
- break;
- }
referenced += page_referenced_one(page, vma, &mapcount);
if (!mapcount)
break;
@@ -704,10 +735,15 @@ static int try_to_unmap_one(struct page
* If it's recently referenced (perhaps page_referenced
* skipped over this mm) then we should reactivate it.
*/
- if (!migration && ((vma->vm_flags & VM_LOCKED) ||
- (ptep_clear_flush_young(vma, address, pte)))) {
- ret = SWAP_FAIL;
- goto out_unmap;
+ if (!migration) {
+ if (vma->vm_flags & VM_LOCKED) {
+ ret = SWAP_MLOCK;
+ goto out_unmap;
+ }
+ if (ptep_clear_flush_young(vma, address, pte)) {
+ ret = SWAP_FAIL;
+ goto out_unmap;
+ }
}
/* Nuke the page table entry. */
@@ -789,12 +825,17 @@ out:
* For very sparsely populated VMAs this is a little inefficient - chances are
* there there won't be many ptes located within the scan cluster. In this case
* maybe we could scan further - to the end of the pte page, perhaps.
+ *
+ * Mlocked pages: check VM_LOCKED under mmap_sem held for read, if we can
+ * acquire it without blocking. If vma locked, mlock the pages in the cluster,
+ * rather than unmapping them. If we encounter the "check_page" that vmscan is
+ * trying to unmap, return SWAP_MLOCK, else default SWAP_AGAIN.
*/
#define CLUSTER_SIZE min(32*PAGE_SIZE, PMD_SIZE)
#define CLUSTER_MASK (~(CLUSTER_SIZE - 1))
-static void try_to_unmap_cluster(unsigned long cursor,
- unsigned int *mapcount, struct vm_area_struct *vma)
+static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
+ struct vm_area_struct *vma, struct page *check_page)
{
struct mm_struct *mm = vma->vm_mm;
pgd_t *pgd;
@@ -806,6 +847,8 @@ static void try_to_unmap_cluster(unsigne
struct page *page;
unsigned long address;
unsigned long end;
+ int ret = SWAP_AGAIN;
+ int locked_vma = 0;
address = (vma->vm_start + cursor) & CLUSTER_MASK;
end = address + CLUSTER_SIZE;
@@ -816,15 +859,26 @@ static void try_to_unmap_cluster(unsigne
pgd = pgd_offset(mm, address);
if (!pgd_present(*pgd))
- return;
+ return ret;
pud = pud_offset(pgd, address);
if (!pud_present(*pud))
- return;
+ return ret;
pmd = pmd_offset(pud, address);
if (!pmd_present(*pmd))
- return;
+ return ret;
+
+ /*
+ * MLOCK_PAGES => feature is configured.
+ * if we can acquire the mmap_sem for read, and vma is VM_LOCKED,
+ * keep the sem while scanning the cluster for mlocking pages.
+ */
+ if (MLOCK_PAGES && down_read_trylock(&vma->vm_mm->mmap_sem)) {
+ locked_vma = (vma->vm_flags & VM_LOCKED);
+ if (!locked_vma)
+ up_read(&vma->vm_mm->mmap_sem); /* don't need it */
+ }
pte = pte_offset_map_lock(mm, pmd, address, &ptl);
@@ -837,6 +891,13 @@ static void try_to_unmap_cluster(unsigne
page = vm_normal_page(vma, address, *pte);
BUG_ON(!page || PageAnon(page));
+ if (locked_vma) {
+ mlock_vma_page(page); /* no-op if already mlocked */
+ if (page == check_page)
+ ret = SWAP_MLOCK;
+ continue; /* don't unmap */
+ }
+
if (ptep_clear_flush_young(vma, address, pte))
continue;
@@ -858,39 +919,104 @@ static void try_to_unmap_cluster(unsigne
(*mapcount)--;
}
pte_unmap_unlock(pte - 1, ptl);
+ if (locked_vma)
+ up_read(&vma->vm_mm->mmap_sem);
+ return ret;
}
-static int try_to_unmap_anon(struct page *page, int migration)
+/*
+ * common handling for pages mapped in VM_LOCKED vmas
+ */
+static int try_to_mlock_page(struct page *page, struct vm_area_struct *vma)
+{
+ int mlocked = 0;
+
+ if (down_read_trylock(&vma->vm_mm->mmap_sem)) {
+ if (vma->vm_flags & VM_LOCKED) {
+ mlock_vma_page(page);
+ mlocked++; /* really mlocked the page */
+ }
+ up_read(&vma->vm_mm->mmap_sem);
+ }
+ return mlocked;
+}
+
+/**
+ * try_to_unmap_anon - unmap or unlock anonymous page using the object-based
+ * rmap method
+ * @page: the page to unmap/unlock
+ * @unlock: request for unlock rather than unmap [unlikely]
+ * @migration: unmapping for migration - ignored if @unlock
+ *
+ * Find all the mappings of a page using the mapping pointer and the vma chains
+ * contained in the anon_vma struct it points to.
+ *
+ * This function is only called from try_to_unmap/try_to_munlock for
+ * anonymous pages.
+ * When called from try_to_munlock(), the mmap_sem of the mm containing the vma
+ * where the page was found will be held for write. So, we won't recheck
+ * vm_flags for that VMA. That should be OK, because that vma shouldn't be
+ * VM_LOCKED.
+ */
+static int try_to_unmap_anon(struct page *page, int unlock, int migration)
{
struct anon_vma *anon_vma;
struct vm_area_struct *vma;
+ unsigned int mlocked = 0;
int ret = SWAP_AGAIN;
+ if (MLOCK_PAGES && unlikely(unlock))
+ ret = SWAP_SUCCESS; /* default for try_to_munlock() */
+
anon_vma = page_lock_anon_vma(page);
if (!anon_vma)
return ret;
list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
- ret = try_to_unmap_one(page, vma, migration);
- if (ret == SWAP_FAIL || !page_mapped(page))
- break;
+ if (MLOCK_PAGES && unlikely(unlock)) {
+ if (!((vma->vm_flags & VM_LOCKED) &&
+ page_mapped_in_vma(page, vma)))
+ continue; /* must visit all unlocked vmas */
+ ret = SWAP_MLOCK; /* saw at least one mlocked vma */
+ } else {
+ ret = try_to_unmap_one(page, vma, migration);
+ if (ret == SWAP_FAIL || !page_mapped(page))
+ break;
+ }
+ if (ret == SWAP_MLOCK) {
+ mlocked = try_to_mlock_page(page, vma);
+ if (mlocked)
+ break; /* stop if actually mlocked page */
+ }
}
page_unlock_anon_vma(anon_vma);
+
+ if (mlocked)
+ ret = SWAP_MLOCK; /* actually mlocked the page */
+ else if (ret == SWAP_MLOCK)
+ ret = SWAP_AGAIN; /* saw VM_LOCKED vma */
+
return ret;
}
/**
- * try_to_unmap_file - unmap file page using the object-based rmap method
- * @page: the page to unmap
- * @migration: migration flag
+ * try_to_unmap_file - unmap/unlock file page using the object-based rmap method
+ * @page: the page to unmap/unlock
+ * @unlock: request for unlock rather than unmap [unlikely]
+ * @migration: unmapping for migration - ignored if @unlock
*
* Find all the mappings of a page using the mapping pointer and the vma chains
* contained in the address_space struct it points to.
*
- * This function is only called from try_to_unmap for object-based pages.
+ * This function is only called from try_to_unmap/try_to_munlock for
+ * object-based pages.
+ * When called from try_to_munlock(), the mmap_sem of the mm containing the vma
+ * where the page was found will be held for write. So, we won't recheck
+ * vm_flags for that VMA. That should be OK, because that vma shouldn't be
+ * VM_LOCKED.
*/
-static int try_to_unmap_file(struct page *page, int migration)
+static int try_to_unmap_file(struct page *page, int unlock, int migration)
{
struct address_space *mapping = page->mapping;
pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
@@ -901,20 +1027,44 @@ static int try_to_unmap_file(struct page
unsigned long max_nl_cursor = 0;
unsigned long max_nl_size = 0;
unsigned int mapcount;
+ unsigned int mlocked = 0;
+
+ if (MLOCK_PAGES && unlikely(unlock))
+ ret = SWAP_SUCCESS; /* default for try_to_munlock() */
spin_lock(&mapping->i_mmap_lock);
vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
- ret = try_to_unmap_one(page, vma, migration);
- if (ret == SWAP_FAIL || !page_mapped(page))
- goto out;
+ if (MLOCK_PAGES && unlikely(unlock)) {
+ if (!(vma->vm_flags & VM_LOCKED))
+ continue; /* must visit all vmas */
+ ret = SWAP_MLOCK;
+ } else {
+ ret = try_to_unmap_one(page, vma, migration);
+ if (ret == SWAP_FAIL || !page_mapped(page))
+ goto out;
+ }
+ if (ret == SWAP_MLOCK) {
+ mlocked = try_to_mlock_page(page, vma);
+ if (mlocked)
+ break; /* stop if actually mlocked page */
+ }
}
+ if (mlocked)
+ goto out;
+
if (list_empty(&mapping->i_mmap_nonlinear))
goto out;
list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
shared.vm_set.list) {
- if ((vma->vm_flags & VM_LOCKED) && !migration)
+ if (MLOCK_PAGES && unlikely(unlock)) {
+ if (!(vma->vm_flags & VM_LOCKED))
+ continue; /* must visit all vmas */
+ ret = SWAP_MLOCK; /* leave mlocked == 0 */
+ goto out; /* no need to look further */
+ }
+ if (!MLOCK_PAGES && !migration && (vma->vm_flags & VM_LOCKED))
continue;
cursor = (unsigned long) vma->vm_private_data;
if (cursor > max_nl_cursor)
@@ -924,7 +1074,7 @@ static int try_to_unmap_file(struct page
max_nl_size = cursor;
}
- if (max_nl_size == 0) { /* any nonlinears locked or reserved */
+ if (max_nl_size == 0) { /* all nonlinears locked or reserved ? */
ret = SWAP_FAIL;
goto out;
}
@@ -948,12 +1098,16 @@ static int try_to_unmap_file(struct page
do {
list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
shared.vm_set.list) {
- if ((vma->vm_flags & VM_LOCKED) && !migration)
+ if (!MLOCK_PAGES && !migration &&
+ (vma->vm_flags & VM_LOCKED))
continue;
cursor = (unsigned long) vma->vm_private_data;
while ( cursor < max_nl_cursor &&
cursor < vma->vm_end - vma->vm_start) {
- try_to_unmap_cluster(cursor, &mapcount, vma);
+ ret = try_to_unmap_cluster(cursor, &mapcount,
+ vma, page);
+ if (ret == SWAP_MLOCK)
+ mlocked = 2; /* to return below */
cursor += CLUSTER_SIZE;
vma->vm_private_data = (void *) cursor;
if ((int)mapcount <= 0)
@@ -974,6 +1128,10 @@ static int try_to_unmap_file(struct page
vma->vm_private_data = NULL;
out:
spin_unlock(&mapping->i_mmap_lock);
+ if (mlocked)
+ ret = SWAP_MLOCK; /* actually mlocked the page */
+ else if (ret == SWAP_MLOCK)
+ ret = SWAP_AGAIN; /* saw VM_LOCKED vma */
return ret;
}
@@ -989,6 +1147,7 @@ out:
* SWAP_SUCCESS - we succeeded in removing all mappings
* SWAP_AGAIN - we missed a mapping, try again later
* SWAP_FAIL - the page is unswappable
+ * SWAP_MLOCK - page is mlocked.
*/
int try_to_unmap(struct page *page, int migration)
{
@@ -997,12 +1156,36 @@ int try_to_unmap(struct page *page, int
BUG_ON(!PageLocked(page));
if (PageAnon(page))
- ret = try_to_unmap_anon(page, migration);
+ ret = try_to_unmap_anon(page, 0, migration);
else
- ret = try_to_unmap_file(page, migration);
-
- if (!page_mapped(page))
+ ret = try_to_unmap_file(page, 0, migration);
+ if (ret != SWAP_MLOCK && !page_mapped(page))
ret = SWAP_SUCCESS;
return ret;
}
+#ifdef CONFIG_UNEVICTABLE_LRU
+/**
+ * try_to_munlock - try to munlock a page
+ * @page: the page to be munlocked
+ *
+ * Called from munlock code. Checks all of the VMAs mapping the page
+ * to make sure nobody else has this page mlocked. The page will be
+ * returned with PG_mlocked cleared if no other vmas have it mlocked.
+ *
+ * Return values are:
+ *
+ * SWAP_SUCCESS - no vma's holding page mlocked.
+ * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem
+ * SWAP_MLOCK - page is now mlocked.
+ */
+int try_to_munlock(struct page *page)
+{
+ VM_BUG_ON(!PageLocked(page) || PageLRU(page));
+
+ if (PageAnon(page))
+ return try_to_unmap_anon(page, 1, 0);
+ else
+ return try_to_unmap_file(page, 1, 0);
+}
+#endif
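
To make the return-value contract above concrete, the caller added to
shrink_page_list() (see the mm/vmscan.c hunk earlier in this patch) reacts
roughly like this (sketch):

	switch (try_to_munlock(page)) {
	case SWAP_FAIL:			/* shouldn't happen */
	case SWAP_AGAIN:		/* mlocked vma seen, but mmap_sem unavailable */
		goto keep_locked;
	case SWAP_MLOCK:		/* still mlocked: putback_lru_page() will move
					 * the page to the unevictable list */
		goto cull_mlocked;
	case SWAP_SUCCESS:
		break;			/* no vma holds the page mlocked; carry on */
	}
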
Index: linux-2.6.26-rc5-mm2/mm/migrate.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/migrate.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/migrate.c 2008-06-10 22:20:57.000000000 -0400
@@ -357,6 +357,8 @@ static void migrate_page_copy(struct pag
__set_page_dirty_nobuffers(newpage);
}
+ mlock_migrate_page(newpage, page);
+
#ifdef CONFIG_SWAP
if (PageSwapCache(page)) {
ClearPageSwapCache(page);
Index: linux-2.6.26-rc5-mm2/mm/page_alloc.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/page_alloc.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/page_alloc.c 2008-06-10 22:27:14.000000000 -0400
@@ -243,6 +243,7 @@ static void bad_page(struct page *page)
1 << PG_active |
#ifdef CONFIG_UNEVICTABLE_LRU
1 << PG_unevictable |
+ 1 << PG_mlocked |
#endif
1 << PG_dirty |
1 << PG_reclaim |
@@ -480,6 +481,7 @@ static inline int free_pages_check(struc
#ifdef CONFIG_UNEVICTABLE_LRU
1 << PG_unevictable |
#endif
+ 1 << PG_mlocked |
1 << PG_buddy ))))
bad_page(page);
if (PageDirty(page))
@@ -633,6 +635,7 @@ static int prep_new_page(struct page *pa
1 << PG_active |
#ifdef CONFIG_UNEVICTABLE_LRU
1 << PG_unevictable |
+ 1 << PG_mlocked |
#endif
1 << PG_dirty |
1 << PG_slab |
@@ -652,7 +655,11 @@ static int prep_new_page(struct page *pa
page->flags &= ~(1 << PG_uptodate | 1 << PG_error | 1 << PG_reclaim |
1 << PG_referenced | 1 << PG_arch_1 |
- 1 << PG_owner_priv_1 | 1 << PG_mappedtodisk);
+ 1 << PG_owner_priv_1 | 1 << PG_mappedtodisk
+#ifdef CONFIG_UNEVICTABLE_LRU
+ | 1 << PG_mlocked
+#endif
+ );
set_page_private(page, 0);
set_page_refcounted(page);
Index: linux-2.6.26-rc5-mm2/mm/swap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/swap.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/swap.c 2008-06-10 22:27:14.000000000 -0400
@@ -278,7 +278,7 @@ void lru_add_drain(void)
put_cpu();
}
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
lru_add_drain();
Index: linux-2.6.26-rc5-mm2/mm/memory.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/memory.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/memory.c 2008-06-10 22:27:14.000000000 -0400
@@ -63,6 +63,8 @@
#include "internal.h"
+#include "internal.h"
+
#ifndef CONFIG_NEED_MULTIPLE_NODES
/* use the per-pgdat data instead for discontigmem - mbligh */
unsigned long max_mapnr;
@@ -1773,6 +1775,15 @@ gotten:
new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
if (!new_page)
goto oom;
+ /*
+ * Don't let another task, with possibly unlocked vma,
+ * keep the mlocked page.
+ */
+ if (vma->vm_flags & VM_LOCKED) {
+ lock_page(old_page); /* for LRU manipulation */
+ clear_page_mlock(old_page);
+ unlock_page(old_page);
+ }
cow_user_page(new_page, old_page, address, vma);
__SetPageUptodate(new_page);
@@ -2215,7 +2226,7 @@ static int do_swap_page(struct mm_struct
page_add_anon_rmap(page, vma, address);
swap_free(entry);
- if (vm_swap_full())
+ if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
remove_exclusive_swap_page(page);
unlock_page(page);
@@ -2355,6 +2366,12 @@ static int __do_fault(struct mm_struct *
ret = VM_FAULT_OOM;
goto out;
}
+ /*
+ * Don't let another task, with possibly unlocked vma,
+ * keep the mlocked page.
+ */
+ if (vma->vm_flags & VM_LOCKED)
+ clear_page_mlock(vmf.page);
copy_user_highpage(page, vmf.page, address, vma);
__SetPageUptodate(page);
} else {
Index: linux-2.6.26-rc5-mm2/mm/mmap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/mmap.c 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/mmap.c 2008-06-10 22:27:14.000000000 -0400
@@ -661,8 +661,6 @@ again: remove_next = 1 + (end > next->
* If the vma has a ->close operation then the driver probably needs to release
* per-vma resources, so we don't attempt to merge those.
*/
-#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_RESERVED | VM_PFNMAP)
-
static inline int is_mergeable_vma(struct vm_area_struct *vma,
struct file *file, unsigned long vm_flags)
{
Index: linux-2.6.26-rc5-mm2/include/linux/mm.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/mm.h 2008-06-10 22:20:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/mm.h 2008-06-10 22:20:57.000000000 -0400
@@ -127,6 +127,11 @@ extern unsigned int kobjsize(const void
#define VM_RandomReadHint(v) ((v)->vm_flags & VM_RAND_READ)
/*
+ * special vmas that are non-mergable, non-mlock()able
+ */
+#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_RESERVED | VM_PFNMAP)
+
+/*
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
*/
--
All Rights Reversed
* [PATCH -mm 18/24] mmap: handle mlocked pages during map, remap, unmap
[not found] <20080611184214.605110868@redhat.com>
` (2 preceding siblings ...)
2008-06-11 18:42 ` [PATCH -mm 16/24] mlock: mlocked " Rik van Riel, Nick Piggin
@ 2008-06-11 18:42 ` Rik van Riel
2008-06-11 18:42 ` [PATCH -mm 20/24] swap: cull unevictable pages in fault path Rik van Riel, Lee Schermerhorn
` (3 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Rik van Riel @ 2008-06-11 18:42 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Lee Schermerhorn, Kosaki Motohiro, linux-mm,
Eric Whitney, Nick Piggin
[-- Attachment #1: vmscan-handle-mlocked-pages-during-map-remap-unmap.patch --]
[-- Type: text/plain, Size: 11522 bytes --]
Originally
From: Nick Piggin <npiggin@suse.de>
Remove mlocked pages from the LRU using "unevictable infrastructure"
during mmap(), munmap(), mremap() and truncate(). Try to move back
to normal LRU lists on munmap() when last mlocked mapping removed.
Remove PageMlocked() status when page truncated from file.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
V6:
+ munlock page in range of VM_LOCKED vma being covered by
remap_file_pages(), as this is an implied unmap of the
range.
+ in support of special vma filtering, don't account for
non-mlockable vmas as locked_vm.
V2 -> V3:
+ rebase to 23-mm1 atop RvR's split lru series [no changes]
V1 -> V2:
+ modified mmap.c:mmap_region() to return error if mlock_vma_pages_range()
does. This can only occur if the vma gets removed/changed while
we're switching mmap_sem lock modes. Most callers don't care, but
sys_remap_file_pages() appears to.
Rework of Nick Piggin's "mm: move mlocked pages off the LRU" patch
-- part 2 of 2.
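
For reference, the calling convention the mmap.c and mremap.c hunks below
depend on (a sketch mirroring the mmap_region() change; not a new
interface): mlock_vma_pages_range() returns 0 when the range was mlocked,
a positive count of pages that were deliberately not mlocked (so they are
not charged to locked_vm), or a negative value if the vma went away while
mmap_sem was dropped and reacquired.

	if (vm_flags & VM_LOCKED) {
		/* may downgrade, drop and reacquire mmap_sem internally */
		int nr_pages = mlock_vma_pages_range(vma, addr, addr + len);
		if (nr_pages < 0)
			return nr_pages;	/* vma gone! propagate the error */
		mm->locked_vm += (len >> PAGE_SHIFT) - nr_pages;
	}
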
mm/fremap.c | 26 +++++++++++++++++---
mm/internal.h | 7 ++++-
mm/mlock.c | 10 ++++---
mm/mmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++--------------
mm/mremap.c | 8 +++---
mm/truncate.c | 4 +++
6 files changed, 99 insertions(+), 29 deletions(-)
Index: linux-2.6.26-rc5-mm2/mm/mmap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/mmap.c 2008-06-10 22:27:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/mmap.c 2008-06-10 22:28:19.000000000 -0400
@@ -969,6 +969,7 @@ unsigned long do_mmap_pgoff(struct file
return -EPERM;
vm_flags |= VM_LOCKED;
}
+
/* mlock MCL_FUTURE? */
if (vm_flags & VM_LOCKED) {
unsigned long locked, lock_limit;
@@ -1132,10 +1133,12 @@ munmap_back:
* The VM_SHARED test is necessary because shmem_zero_setup
* will create the file object for a shared anonymous map below.
*/
- if (!file && !(vm_flags & VM_SHARED) &&
- vma_merge(mm, prev, addr, addr + len, vm_flags,
- NULL, NULL, pgoff, NULL))
- goto out;
+ if (!file && !(vm_flags & VM_SHARED)) {
+ vma = vma_merge(mm, prev, addr, addr + len, vm_flags,
+ NULL, NULL, pgoff, NULL);
+ if (vma)
+ goto out;
+ }
/*
* Determine the object being mapped and call the appropriate
@@ -1217,10 +1220,14 @@ out:
mm->total_vm += len >> PAGE_SHIFT;
vm_stat_account(mm, vm_flags, file, len >> PAGE_SHIFT);
if (vm_flags & VM_LOCKED) {
- mm->locked_vm += len >> PAGE_SHIFT;
- make_pages_present(addr, addr + len);
- }
- if ((flags & MAP_POPULATE) && !(flags & MAP_NONBLOCK))
+ /*
+ * makes pages present; downgrades, drops, reacquires mmap_sem
+ */
+ int nr_pages = mlock_vma_pages_range(vma, addr, addr + len);
+ if (nr_pages < 0)
+ return nr_pages; /* vma gone! */
+ mm->locked_vm += (len >> PAGE_SHIFT) - nr_pages;
+ } else if ((flags & MAP_POPULATE) && !(flags & MAP_NONBLOCK))
make_pages_present(addr, addr + len);
return addr;
@@ -1693,8 +1700,11 @@ find_extend_vma(struct mm_struct *mm, un
return vma;
if (!prev || expand_stack(prev, addr))
return NULL;
- if (prev->vm_flags & VM_LOCKED)
- make_pages_present(addr, prev->vm_end);
+ if (prev->vm_flags & VM_LOCKED) {
+ int nr_pages = mlock_vma_pages_range(prev, addr, prev->vm_end);
+ if (nr_pages < 0)
+ return NULL; /* vma gone! */
+ }
return prev;
}
#else
@@ -1720,8 +1730,11 @@ find_extend_vma(struct mm_struct * mm, u
start = vma->vm_start;
if (expand_stack(vma, addr))
return NULL;
- if (vma->vm_flags & VM_LOCKED)
- make_pages_present(addr, start);
+ if (vma->vm_flags & VM_LOCKED) {
+ int nr_pages = mlock_vma_pages_range(vma, addr, start);
+ if (nr_pages < 0)
+ return NULL; /* vma gone! */
+ }
return vma;
}
#endif
@@ -1908,6 +1921,18 @@ int do_munmap(struct mm_struct *mm, unsi
vma = prev? prev->vm_next: mm->mmap;
/*
+ * unlock any mlock()ed ranges before detaching vmas
+ */
+ if (mm->locked_vm) {
+ struct vm_area_struct *tmp = vma;
+ while (tmp && tmp->vm_start < end) {
+ if (tmp->vm_flags & VM_LOCKED)
+ munlock_vma_pages_all(tmp);
+ tmp = tmp->vm_next;
+ }
+ }
+
+ /*
* Remove the vma's, and unmap the actual pages
*/
detach_vmas_to_be_unmapped(mm, vma, prev, end);
@@ -2019,8 +2044,9 @@ unsigned long do_brk(unsigned long addr,
return -ENOMEM;
/* Can we just expand an old private anonymous mapping? */
- if (vma_merge(mm, prev, addr, addr + len, flags,
- NULL, NULL, pgoff, NULL))
+ vma = vma_merge(mm, prev, addr, addr + len, flags,
+ NULL, NULL, pgoff, NULL);
+ if (vma)
goto out;
/*
@@ -2042,8 +2068,9 @@ unsigned long do_brk(unsigned long addr,
out:
mm->total_vm += len >> PAGE_SHIFT;
if (flags & VM_LOCKED) {
- mm->locked_vm += len >> PAGE_SHIFT;
- make_pages_present(addr, addr + len);
+ int nr_pages = mlock_vma_pages_range(vma, addr, addr + len);
+ if (nr_pages >= 0)
+ mm->locked_vm += (len >> PAGE_SHIFT) - nr_pages;
}
return addr;
}
@@ -2054,13 +2081,25 @@ EXPORT_SYMBOL(do_brk);
void exit_mmap(struct mm_struct *mm)
{
struct mmu_gather *tlb;
- struct vm_area_struct *vma = mm->mmap;
+ struct vm_area_struct *vma;
unsigned long nr_accounted = 0;
unsigned long end;
/* mm's last user has gone, and its about to be pulled down */
arch_exit_mmap(mm);
+ if (mm->locked_vm) {
+ vma = mm->mmap;
+ while (vma) {
+ if (vma->vm_flags & VM_LOCKED)
+ munlock_vma_pages_all(vma);
+ vma = vma->vm_next;
+ }
+ }
+
+ vma = mm->mmap;
+
+
lru_add_drain();
flush_cache_mm(mm);
tlb = tlb_gather_mmu(mm, 1);
Index: linux-2.6.26-rc5-mm2/mm/mremap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/mremap.c 2008-06-10 22:27:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/mremap.c 2008-06-10 22:28:19.000000000 -0400
@@ -23,6 +23,8 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
+#include "internal.h"
+
static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr)
{
pgd_t *pgd;
@@ -232,8 +234,8 @@ static unsigned long move_vma(struct vm_
if (vm_flags & VM_LOCKED) {
mm->locked_vm += new_len >> PAGE_SHIFT;
if (new_len > old_len)
- make_pages_present(new_addr + old_len,
- new_addr + new_len);
+ mlock_vma_pages_range(new_vma, new_addr + old_len,
+ new_addr + new_len);
}
return new_addr;
@@ -373,7 +375,7 @@ unsigned long do_mremap(unsigned long ad
vm_stat_account(mm, vma->vm_flags, vma->vm_file, pages);
if (vma->vm_flags & VM_LOCKED) {
mm->locked_vm += pages;
- make_pages_present(addr + old_len,
+ mlock_vma_pages_range(vma, addr + old_len,
addr + new_len);
}
ret = addr;
Index: linux-2.6.26-rc5-mm2/mm/truncate.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/truncate.c 2008-06-10 22:27:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/truncate.c 2008-06-10 22:28:19.000000000 -0400
@@ -18,6 +18,7 @@
#include <linux/task_io_accounting_ops.h>
#include <linux/buffer_head.h> /* grr. try_to_release_page,
do_invalidatepage */
+#include "internal.h"
/**
@@ -104,6 +105,7 @@ truncate_complete_page(struct address_sp
cancel_dirty_page(page, PAGE_CACHE_SIZE);
remove_from_page_cache(page);
+ clear_page_mlock(page);
ClearPageUptodate(page);
ClearPageMappedToDisk(page);
page_cache_release(page); /* pagecache ref */
@@ -128,6 +130,7 @@ invalidate_complete_page(struct address_
if (PagePrivate(page) && !try_to_release_page(page, 0))
return 0;
+ clear_page_mlock(page);
ret = remove_mapping(mapping, page);
return ret;
@@ -353,6 +356,7 @@ invalidate_complete_page2(struct address
if (PageDirty(page))
goto failed;
+ clear_page_mlock(page);
BUG_ON(PagePrivate(page));
__remove_from_page_cache(page);
write_unlock_irq(&mapping->tree_lock);
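
A note on the clear_page_mlock() calls above: a page leaving the page cache
via truncation or invalidation can lose its last mlocked mapping without
ever passing through munlock(), and the free_pages_check()/bad_page() masks
added in the earlier mlock patch treat a stray PG_mlocked on a freed page as
a bug, so the flag (and with it the page's unevictable status) is dropped
here before the page is released.
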
Index: linux-2.6.26-rc5-mm2/mm/mlock.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/mlock.c 2008-06-10 22:28:05.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/mlock.c 2008-06-10 22:28:19.000000000 -0400
@@ -270,7 +270,8 @@ static void __munlock_vma_pages_range(st
struct munlock_page_walk mpw;
VM_BUG_ON(start & ~PAGE_MASK || end & ~PAGE_MASK);
- VM_BUG_ON(!rwsem_is_locked(&vma->vm_mm->mmap_sem));
+ VM_BUG_ON((!rwsem_is_locked(&vma->vm_mm->mmap_sem)) &&
+ (atomic_read(&mm->mm_users) != 0));
VM_BUG_ON(start < vma->vm_start);
VM_BUG_ON(end > vma->vm_end);
@@ -351,12 +352,13 @@ no_mlock:
/*
- * munlock all pages in vma. For munmap() and exit().
+ * munlock all pages in the vma range. For mremap(), munmap() and exit().
*/
-void munlock_vma_pages_all(struct vm_area_struct *vma)
+void munlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
{
vma->vm_flags &= ~VM_LOCKED;
- __munlock_vma_pages_range(vma, vma->vm_start, vma->vm_end);
+ __munlock_vma_pages_range(vma, start, end);
}
/*
Index: linux-2.6.26-rc5-mm2/mm/fremap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/fremap.c 2008-06-10 22:27:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/fremap.c 2008-06-10 22:28:19.000000000 -0400
@@ -20,6 +20,8 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
+#include "internal.h"
+
static void zap_pte(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
{
@@ -214,13 +216,29 @@ asmlinkage long sys_remap_file_pages(uns
spin_unlock(&mapping->i_mmap_lock);
}
+ if (vma->vm_flags & VM_LOCKED) {
+ /*
+ * drop PG_Mlocked flag for over-mapped range
+ */
+ unsigned int saved_flags = vma->vm_flags;
+ munlock_vma_pages_range(vma, start, start + size);
+ vma->vm_flags = saved_flags;
+ }
+
err = populate_range(mm, vma, start, size, pgoff);
if (!err && !(flags & MAP_NONBLOCK)) {
- if (unlikely(has_write_lock)) {
- downgrade_write(&mm->mmap_sem);
- has_write_lock = 0;
+ if (vma->vm_flags & VM_LOCKED) {
+ /*
+ * might be mapping previously unmapped range of file
+ */
+ mlock_vma_pages_range(vma, start, start + size);
+ } else {
+ if (unlikely(has_write_lock)) {
+ downgrade_write(&mm->mmap_sem);
+ has_write_lock = 0;
+ }
+ make_pages_present(start, start+size);
}
- make_pages_present(start, start+size);
}
/*
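
The vm_flags save/restore above is deliberate: as the mm/mlock.c hunk above
shows, munlock_vma_pages_range() clears VM_LOCKED on the vma, but
remap_file_pages() is only implicitly unmapping the over-mapped range (the
vma itself stays mlocked), so the flag is put back before the range is
repopulated and re-mlocked.
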
Index: linux-2.6.26-rc5-mm2/mm/internal.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/internal.h 2008-06-10 22:27:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/internal.h 2008-06-10 22:28:19.000000000 -0400
@@ -63,7 +63,12 @@ static inline unsigned long page_order(s
extern int mlock_vma_pages_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
-extern void munlock_vma_pages_all(struct vm_area_struct *vma);
+extern void munlock_vma_pages_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end);
+static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
+{
+ munlock_vma_pages_range(vma, vma->vm_start, vma->vm_end);
+}
#ifdef CONFIG_UNEVICTABLE_LRU
/*
--
All Rights Reversed
* [PATCH -mm 20/24] swap: cull unevictable pages in fault path
[not found] <20080611184214.605110868@redhat.com>
` (3 preceding siblings ...)
2008-06-11 18:42 ` [PATCH -mm 18/24] mmap: handle mlocked pages during map, remap, unmap Rik van Riel
@ 2008-06-11 18:42 ` Rik van Riel, Lee Schermerhorn
2008-06-11 18:42 ` [PATCH -mm 22/24] vmscan: unevictable LRU scan sysctl Rik van Riel, Lee Schermerhorn
` (2 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Rik van Riel, Lee Schermerhorn @ 2008-06-11 18:42 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Lee Schermerhorn, Kosaki Motohiro, linux-mm, Eric Whitney
[-- Attachment #1: vmscan-cull-non-reclaimable-pages-in-fault-path.patch --]
[-- Type: text/plain, Size: 6358 bytes --]
In the fault paths that install new anonymous pages, check whether
the page is evictable or not using lru_cache_add_active_or_unevictable().
If the page is evictable, just add it to the active lru list [via
the pagevec cache], else add it to the unevictable list.
This "proactive" culling in the fault path mimics the handling of
mlocked pages in Nick Piggin's series to keep mlocked pages off
the lru lists.
Notes:
1) This patch is optional--e.g., if one is concerned about the
additional test in the fault path. We can defer the moving of
nonreclaimable pages until when vmscan [shrink_*_list()]
encounters them. Vmscan will only need to handle such pages
once, but if there are a lot of them it could impact system
performance.
2) The 'vma' argument to page_evictable() is required to notice that
we're faulting a page into an mlock()ed vma w/o having to scan the
page's rmap in the fault path. Culling mlock()ed anon pages is
currently the only reason for this patch.
3) We can't cull swap pages in read_swap_cache_async() because the
vma argument doesn't necessarily correspond to the swap cache
offset passed in by swapin_readahead(). This could [did!] result
in mlocking pages in non-VM_LOCKED vmas if [when] we tried to
cull in this path.
4) Move set_pte_at() to after where we add page to lru to keep it
hidden from other tasks that might walk the page table.
We already do it in this order in do_anonymous_page(). And,
these are COW'd anon pages. Is this safe?
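
Concretely, the ordering that the memory.c hunks below converge on is
(a sketch; the surrounding code differs slightly per fault path):

	SetPageSwapBacked(page);
	/* evictable pages go to the active anon LRU (via the pagevec cache);
	 * unevictable ones go straight onto the zone's unevictable list */
	lru_cache_add_active_or_unevictable(page, vma);
	page_add_new_anon_rmap(page, vma, address);
	/* only now expose the pte to other tasks walking the page table */
	set_pte_at(mm, address, page_table, entry);
	update_mmu_cache(vma, address, entry);
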
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
---
V2 -> V3:
+ rebase to 23-mm1 atop RvR's split lru series.
V1 -> V2:
+ no changes
include/linux/swap.h | 2 ++
mm/memory.c | 20 ++++++++++++--------
mm/swap.c | 21 +++++++++++++++++++++
3 files changed, 35 insertions(+), 8 deletions(-)
Index: linux-2.6.26-rc5-mm2/mm/memory.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/memory.c 2008-06-10 22:21:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/memory.c 2008-06-10 22:21:16.000000000 -0400
@@ -1813,12 +1813,15 @@ gotten:
* thread doing COW.
*/
ptep_clear_flush(vma, address, page_table);
- set_pte_at(mm, address, page_table, entry);
- update_mmu_cache(vma, address, entry);
+
SetPageSwapBacked(new_page);
- lru_cache_add_active_anon(new_page);
+ lru_cache_add_active_or_unevictable(new_page, vma);
page_add_new_anon_rmap(new_page, vma, address);
+//TODO: is this safe? do_anonymous_page() does it this way.
+ set_pte_at(mm, address, page_table, entry);
+ update_mmu_cache(vma, address, entry);
+
/* Free the old page.. */
new_page = old_page;
ret |= VM_FAULT_WRITE;
@@ -2285,7 +2288,7 @@ static int do_anonymous_page(struct mm_s
goto release;
inc_mm_counter(mm, anon_rss);
SetPageSwapBacked(page);
- lru_cache_add_active_anon(page);
+ lru_cache_add_active_or_unevictable(page, vma);
page_add_new_anon_rmap(page, vma, address);
set_pte_at(mm, address, page_table, entry);
@@ -2429,12 +2432,11 @@ static int __do_fault(struct mm_struct *
entry = mk_pte(page, vma->vm_page_prot);
if (flags & FAULT_FLAG_WRITE)
entry = maybe_mkwrite(pte_mkdirty(entry), vma);
- set_pte_at(mm, address, page_table, entry);
if (anon) {
- inc_mm_counter(mm, anon_rss);
+ inc_mm_counter(mm, anon_rss);
SetPageSwapBacked(page);
- lru_cache_add_active_anon(page);
- page_add_new_anon_rmap(page, vma, address);
+ lru_cache_add_active_or_unevictable(page, vma);
+ page_add_new_anon_rmap(page, vma, address);
} else {
inc_mm_counter(mm, file_rss);
page_add_file_rmap(page);
@@ -2443,6 +2445,8 @@ static int __do_fault(struct mm_struct *
get_page(dirty_page);
}
}
+//TODO: is this safe? do_anonymous_page() does it this way.
+ set_pte_at(mm, address, page_table, entry);
/* no need to invalidate: a not-present page won't be cached */
update_mmu_cache(vma, address, entry);
Index: linux-2.6.26-rc5-mm2/include/linux/swap.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/swap.h 2008-06-10 22:21:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/swap.h 2008-06-10 22:24:12.000000000 -0400
@@ -173,6 +173,8 @@ extern unsigned int nr_free_pagecache_pa
/* linux/mm/swap.c */
extern void __lru_cache_add(struct page *, enum lru_list lru);
extern void lru_cache_add_lru(struct page *, enum lru_list lru);
+extern void lru_cache_add_active_or_unevictable(struct page *,
+ struct vm_area_struct *);
extern void activate_page(struct page *);
extern void mark_page_accessed(struct page *);
extern void lru_add_drain(void);
Index: linux-2.6.26-rc5-mm2/mm/swap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/swap.c 2008-06-10 22:21:14.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/swap.c 2008-06-10 22:21:16.000000000 -0400
@@ -31,6 +31,8 @@
#include <linux/backing-dev.h>
#include <linux/memcontrol.h>
+#include "internal.h"
+
/* How many pages do we try to swap or page in/out together? */
int page_cluster;
@@ -244,6 +246,25 @@ void add_page_to_unevictable_list(struct
spin_unlock_irq(&zone->lru_lock);
}
+/**
+ * lru_cache_add_active_or_unevictable
+ * @page: the page to be added to LRU
+ * @vma: vma in which page is mapped for determining reclaimability
+ *
+ * place @page on active or unevictable LRU list, depending on
+ * page_evictable(). Note that if the page is not evictable,
+ * it goes directly back onto its zone's unevictable list. It does
+ * NOT use a per cpu pagevec.
+ */
+void lru_cache_add_active_or_unevictable(struct page *page,
+ struct vm_area_struct *vma)
+{
+ if (page_evictable(page, vma))
+ lru_cache_add_lru(page, page_lru(page));
+ else
+ add_page_to_unevictable_list(page);
+}
+
/*
* Drain pages out of the cpu's pagevecs.
* Either "cpu" is the current CPU, and preemption has already been
--
All Rights Reversed
* [PATCH -mm 22/24] vmscan: unevictable LRU scan sysctl
[not found] <20080611184214.605110868@redhat.com>
` (4 preceding siblings ...)
2008-06-11 18:42 ` [PATCH -mm 20/24] swap: cull unevictable pages in fault path Rik van Riel, Lee Schermerhorn
@ 2008-06-11 18:42 ` Rik van Riel, Lee Schermerhorn
2008-06-11 18:42 ` [PATCH -mm 24/24] doc: unevictable LRU and mlocked pages documentation Rik van Riel
[not found] ` <20080611184339.159161465@redhat.com>
7 siblings, 0 replies; 13+ messages in thread
From: Rik van Riel, Lee Schermerhorn @ 2008-06-11 18:42 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Lee Schermerhorn, Kosaki Motohiro, linux-mm, Eric Whitney
[-- Attachment #1: mm-only-vmscan-noreclaim-lru-scan-sysctl.patch --]
[-- Type: text/plain, Size: 11753 bytes --]
This patch adds a function to scan individual or all zones' unevictable
lists and move any pages that have become evictable onto the respective
zone's inactive list, where shrink_inactive_list() will deal with them.
Adds sysctl to scan all nodes, and per node attributes to individual
nodes' zones.
Kosaki:
If an evictable page is found on the unevictable lru when
/proc/sys/vm/scan_unevictable_pages is written, print the filename and
file offset of these pages.
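
Usage note: with the handlers below in place, writing a non-zero value to
/proc/sys/vm/scan_unevictable_pages rescans all zones in the system, while
writing to a node's /sys/devices/system/node/node<N>/scan_unevictable_pages
attribute rescans only that node's populated zones; reads of either always
report 0.
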
---
TODO: DEBUGGING ONLY: NOT FOR UPSTREAM MERGE
V6:
+ moved to end of series as optional debug patch
V2 -> V3:
+ rebase to 23-mm1 atop RvR's split LRU series
New in V2
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
drivers/base/node.c | 5 +
include/linux/rmap.h | 3
include/linux/swap.h | 15 ++++
kernel/sysctl.c | 10 +++
mm/rmap.c | 4 -
mm/vmscan.c | 163 +++++++++++++++++++++++++++++++++++++++++++++++++++
6 files changed, 198 insertions(+), 2 deletions(-)
Index: linux-2.6.26-rc5-mm2/include/linux/swap.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/swap.h 2008-06-10 22:38:56.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/swap.h 2008-06-10 22:38:58.000000000 -0400
@@ -7,6 +7,7 @@
#include <linux/list.h>
#include <linux/memcontrol.h>
#include <linux/sched.h>
+#include <linux/node.h>
#include <asm/atomic.h>
#include <asm/page.h>
@@ -235,15 +236,29 @@ static inline int zone_reclaim(struct zo
#ifdef CONFIG_UNEVICTABLE_LRU
extern int page_evictable(struct page *page, struct vm_area_struct *vma);
extern void scan_mapping_unevictable_pages(struct address_space *);
+
+extern unsigned long scan_unevictable_pages;
+extern int scan_unevictable_handler(struct ctl_table *, int, struct file *,
+ void __user *, size_t *, loff_t *);
+extern int scan_unevictable_register_node(struct node *node);
+extern void scan_unevictable_unregister_node(struct node *node);
#else
static inline int page_evictable(struct page *page,
struct vm_area_struct *vma)
{
return 1;
}
+
static inline void scan_mapping_unevictable_pages(struct address_space *mapping)
{
}
+
+static inline int scan_unevictable_register_node(struct node *node)
+{
+ return 0;
+}
+
+static inline void scan_unevictable_unregister_node(struct node *node) { }
#endif
extern int kswapd_run(int nid);
Index: linux-2.6.26-rc5-mm2/mm/vmscan.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/vmscan.c 2008-06-10 22:38:57.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/vmscan.c 2008-06-10 23:02:07.000000000 -0400
@@ -39,6 +39,7 @@
#include <linux/freezer.h>
#include <linux/memcontrol.h>
#include <linux/delayacct.h>
+#include <linux/sysctl.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -2362,6 +2363,39 @@ int page_evictable(struct page *page, st
return 1;
}
+static void show_page_path(struct page *page)
+{
+ char buf[256];
+ if (page_is_file_cache(page)) {
+ struct address_space *mapping = page->mapping;
+ struct dentry *dentry;
+ pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+
+ spin_lock(&mapping->i_mmap_lock);
+ dentry = d_find_alias(mapping->host);
+ printk(KERN_INFO "rescued: %s %lu\n",
+ dentry_path(dentry, buf, 256), pgoff);
+ spin_unlock(&mapping->i_mmap_lock);
+ } else {
+#ifdef CONFIG_MM_OWNER
+ struct anon_vma *anon_vma;
+ struct vm_area_struct *vma;
+
+ anon_vma = page_lock_anon_vma(page);
+ if (!anon_vma)
+ return;
+
+ list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
+ printk(KERN_INFO "rescued: anon %s\n",
+ vma->vm_mm->owner->comm);
+ break;
+ }
+ page_unlock_anon_vma(anon_vma);
+#endif
+ }
+}
+
+
/**
* check_move_unevictable_page - check page for evictability and move to appropriate zone lru list
* @page: page to check evictability and move to appropriate lru list
@@ -2379,6 +2413,9 @@ static void check_move_unevictable_page(
ClearPageUnevictable(page); /* for page_evictable() */
if (page_evictable(page, NULL)) {
enum lru_list l = LRU_INACTIVE_ANON + page_is_file_cache(page);
+
+ show_page_path(page);
+
__dec_zone_state(zone, NR_UNEVICTABLE);
list_move(&page->lru, &zone->lru[l].list);
__inc_zone_state(zone, NR_INACTIVE_ANON + l);
@@ -2459,4 +2496,130 @@ void scan_mapping_unevictable_pages(stru
}
}
+
+/**
+ * scan_zone_unevictable_pages - check unevictable list for evictable pages
+ * @zone - zone of which to scan the unevictable list
+ *
+ * Scan @zone's unevictable LRU lists to check for pages that have become
+ * evictable. Move those that have to @zone's inactive list where they
+ * become candidates for reclaim, unless shrink_inactive_zone() decides
+ * to reactivate them. Pages that are still unevictable are rotated
+ * back onto @zone's unevictable list.
+ */
+#define SCAN_UNEVICTABLE_BATCH_SIZE 16UL /* arbitrary lock hold batch size */
+void scan_zone_unevictable_pages(struct zone *zone)
+{
+ struct list_head *l_unevictable = &zone->lru[LRU_UNEVICTABLE].list;
+ unsigned long scan;
+ unsigned long nr_to_scan = zone_page_state(zone, NR_UNEVICTABLE);
+
+ while (nr_to_scan > 0) {
+ unsigned long batch_size = min(nr_to_scan,
+ SCAN_UNEVICTABLE_BATCH_SIZE);
+
+ spin_lock_irq(&zone->lru_lock);
+ for (scan = 0; scan < batch_size; scan++) {
+ struct page *page = lru_to_page(l_unevictable);
+
+ if (TestSetPageLocked(page))
+ continue;
+
+ prefetchw_prev_lru_page(page, l_unevictable, flags);
+
+ if (likely(PageLRU(page) && PageUnevictable(page)))
+ check_move_unevictable_page(page, zone);
+
+ unlock_page(page);
+ }
+ spin_unlock_irq(&zone->lru_lock);
+
+ nr_to_scan -= batch_size;
+ }
+}
+
+
+/**
+ * scan_all_zones_unevictable_pages - scan all unevictable lists for evictable pages
+ *
+ * A really big hammer: scan all zones' unevictable LRU lists to check for
+ * pages that have become evictable. Move those back to the zones'
+ * inactive list where they become candidates for reclaim.
+ * This occurs when, e.g., we have unswappable pages on the unevictable lists,
+ * and we add swap to the system. As such, it runs in the context of a task
+ * that has possibly/probably made some previously unevictable pages
+ * evictable.
+ */
+void scan_all_zones_unevictable_pages(void)
+{
+ struct zone *zone;
+
+ for_each_zone(zone) {
+ scan_zone_unevictable_pages(zone);
+ }
+}
+
+/*
+ * scan_unevictable_pages [vm] sysctl handler. On demand re-scan of
+ * all nodes' unevictable lists for evictable pages
+ */
+unsigned long scan_unevictable_pages;
+
+int scan_unevictable_handler(struct ctl_table *table, int write,
+ struct file *file, void __user *buffer,
+ size_t *length, loff_t *ppos)
+{
+ proc_doulongvec_minmax(table, write, file, buffer, length, ppos);
+
+ if (write && *(unsigned long *)table->data)
+ scan_all_zones_unevictable_pages();
+
+ scan_unevictable_pages = 0;
+ return 0;
+}
+
+/*
+ * per node 'scan_unevictable_pages' attribute. On demand re-scan of
+ * a specified node's per zone unevictable lists for evictable pages.
+ */
+
+static ssize_t read_scan_unevictable_node(struct sys_device *dev, char *buf)
+{
+ return sprintf(buf, "0\n"); /* always zero; should fit... */
+}
+
+static ssize_t write_scan_unevictable_node(struct sys_device *dev,
+ const char *buf, size_t count)
+{
+ struct zone *node_zones = NODE_DATA(dev->id)->node_zones;
+ struct zone *zone;
+ unsigned long res;
+ unsigned long req = strict_strtoul(buf, 10, &res);
+
+ if (!req)
+ return 1; /* zero is no-op */
+
+ for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
+ if (!populated_zone(zone))
+ continue;
+ scan_zone_unevictable_pages(zone);
+ }
+ return 1;
+}
+
+
+static SYSDEV_ATTR(scan_unevictable_pages, S_IRUGO | S_IWUSR,
+ read_scan_unevictable_node,
+ write_scan_unevictable_node);
+
+int scan_unevictable_register_node(struct node *node)
+{
+ return sysdev_create_file(&node->sysdev, &attr_scan_unevictable_pages);
+}
+
+void scan_unevictable_unregister_node(struct node *node)
+{
+ sysdev_remove_file(&node->sysdev, &attr_scan_unevictable_pages);
+}
+
#endif
Index: linux-2.6.26-rc5-mm2/kernel/sysctl.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/kernel/sysctl.c 2008-06-10 22:36:48.000000000 -0400
+++ linux-2.6.26-rc5-mm2/kernel/sysctl.c 2008-06-10 22:38:58.000000000 -0400
@@ -1141,6 +1141,16 @@ static struct ctl_table vm_table[] = {
.extra2 = &one,
},
#endif
+#ifdef CONFIG_UNEVICTABLE_LRU
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "scan_unevictable_pages",
+ .data = &scan_unevictable_pages,
+ .maxlen = sizeof(scan_unevictable_pages),
+ .mode = 0644,
+ .proc_handler = &scan_unevictable_handler,
+ },
+#endif
/*
* NOTE: do not add new entries to this table unless you have read
* Documentation/sysctl/ctl_unnumbered.txt
Index: linux-2.6.26-rc5-mm2/drivers/base/node.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/drivers/base/node.c 2008-06-10 22:38:44.000000000 -0400
+++ linux-2.6.26-rc5-mm2/drivers/base/node.c 2008-06-10 22:38:58.000000000 -0400
@@ -13,6 +13,7 @@
#include <linux/nodemask.h>
#include <linux/cpu.h>
#include <linux/device.h>
+#include <linux/swap.h>
static struct sysdev_class node_class = {
.name = "node",
@@ -186,6 +187,8 @@ int register_node(struct node *node, int
sysdev_create_file(&node->sysdev, &attr_meminfo);
sysdev_create_file(&node->sysdev, &attr_numastat);
sysdev_create_file(&node->sysdev, &attr_distance);
+
+ scan_unevictable_register_node(node);
}
return error;
}
@@ -205,6 +208,8 @@ void unregister_node(struct node *node)
sysdev_remove_file(&node->sysdev, &attr_numastat);
sysdev_remove_file(&node->sysdev, &attr_distance);
+ scan_unevictable_unregister_node(node);
+
sysdev_unregister(&node->sysdev);
}
Index: linux-2.6.26-rc5-mm2/include/linux/rmap.h
===================================================================
--- linux-2.6.26-rc5-mm2.orig/include/linux/rmap.h 2008-06-10 22:37:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/include/linux/rmap.h 2008-06-10 22:38:58.000000000 -0400
@@ -67,6 +67,9 @@ void anon_vma_unlink(struct vm_area_stru
void anon_vma_link(struct vm_area_struct *);
void __anon_vma_link(struct vm_area_struct *);
+extern struct anon_vma *page_lock_anon_vma(struct page *page);
+extern void page_unlock_anon_vma(struct anon_vma *anon_vma);
+
/*
* rmap interfaces called when adding or removing pte of page
*/
Index: linux-2.6.26-rc5-mm2/mm/rmap.c
===================================================================
--- linux-2.6.26-rc5-mm2.orig/mm/rmap.c 2008-06-10 22:37:26.000000000 -0400
+++ linux-2.6.26-rc5-mm2/mm/rmap.c 2008-06-10 22:38:58.000000000 -0400
@@ -158,7 +158,7 @@ void __init anon_vma_init(void)
* Getting a lock on a stable anon_vma from a page off the LRU is
* tricky: page_lock_anon_vma rely on RCU to guard against the races.
*/
-static struct anon_vma *page_lock_anon_vma(struct page *page)
+struct anon_vma *page_lock_anon_vma(struct page *page)
{
struct anon_vma *anon_vma;
unsigned long anon_mapping;
@@ -178,7 +178,7 @@ out:
return NULL;
}
-static void page_unlock_anon_vma(struct anon_vma *anon_vma)
+void page_unlock_anon_vma(struct anon_vma *anon_vma)
{
spin_unlock(&anon_vma->lock);
rcu_read_unlock();
--
All Rights Reversed
* [PATCH -mm 24/24] doc: unevictable LRU and mlocked pages documentation
[not found] <20080611184214.605110868@redhat.com>
` (5 preceding siblings ...)
2008-06-11 18:42 ` [PATCH -mm 22/24] vmscan: unevictable LRU scan sysctl Rik van Riel, Lee Schermerhorn
@ 2008-06-11 18:42 ` Rik van Riel
[not found] ` <20080611184339.159161465@redhat.com>
7 siblings, 0 replies; 13+ messages in thread
From: Rik van Riel @ 2008-06-11 18:42 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Lee Schermerhorn, Kosaki Motohiro, linux-mm, Eric Whitney
[-- Attachment #1: vmscan-noreclaim-lru-and-mlocked-pages-documentation.patch --]
[-- Type: text/plain, Size: 36266 bytes --]
From: Lee Schermerhorn <lee.schermerhorn@hp.com>
Documentation for unevictable lru list and its usage.
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
---
Documentation/vm/unevictable-lru.txt | 609 +++++++++++++++++++++++++++++++++++
1 file changed, 609 insertions(+)
Index: linux-2.6.26-rc5-mm2/Documentation/vm/unevictable-lru.txt
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.26-rc5-mm2/Documentation/vm/unevictable-lru.txt 2008-06-10 22:05:06.000000000 -0400
@@ -0,0 +1,609 @@
+
+This document describes the Linux memory management "Unevictable LRU"
+infrastructure and the use of this infrastructure to manage several types
+of "unevictable" pages. The document attempts to provide the overall
+rationale behind this mechanism and the rationale for some of the design
+decisions that drove the implementation. The latter design rationale is
+discussed in the context of an implementation description. Admittedly, one
+can obtain the implementation details--the "what does it do?"--by reading the
+code. One hopes that the descriptions below add value by providing the answer
+to "why does it do that?".
+
+Unevictable LRU Infrastructure:
+
+The Unevictable LRU adds an additional LRU list to track unevictable pages
+and to hide these pages from vmscan. This mechanism is based on a patch by
+Larry Woodman of Red Hat to address several scalability problems with page
+reclaim in Linux. The problems have been observed at customer sites on large
+memory x86_64 systems. For example, a non-NUMA x86_64 platform with 128GB
+of main memory will have over 32 million 4k pages in a single zone. When a
+large fraction of these pages are not evictable for any reason [see below],
+vmscan will spend a lot of time scanning the LRU lists looking for the small
+fraction of pages that are evictable. This can result in a situation where
+all cpus are spending 100% of their time in vmscan for hours or days on end,
+with the system completely unresponsive.
+
+The Unevictable LRU infrastructure addresses the following classes of
+unevictable pages:
+
++ page owned by ram disks or ramfs
++ page mapped into SHM_LOCKed shared memory regions
++ page mapped into VM_LOCKED [mlock()ed] vmas
+
+The infrastructure might be able to handle other conditions that make pages
+unevictable, either by definition or by circumstance, in the future.
+
+
+The Unevictable LRU List
+
+The Unevictable LRU infrastructure consists of an additional, per-zone, LRU list
+called the "unevictable" list and an associated page flag, PG_unevictable, to
+indicate that the page is being managed on the unevictable list. The PG_unevictable
+flag is analogous to, and mutually exclusive with, the PG_active flag in that
+it indicates on which LRU list a page resides when PG_lru is set. The
+unevictable LRU list is source configurable based on the UNEVICTABLE_LRU Kconfig
+option.
+
+Why maintain unevictable pages on an additional LRU list? The Linux memory
+management subsystem has well established protocols for managing pages on the
+LRU. Vmscan is based on LRU lists. LRU lists exist per zone, and we want to
+maintain pages relative to their "home zone". All of these make the use of
+an additional list, parallel to the LRU active and inactive lists, a natural
+mechanism to employ. Note, however, that the unevictable list does not
+differentiate between file backed and swap backed [anon] pages. This
+differentiation is only important while the pages are, in fact, evictable.
+
+The unevictable LRU list benefits from the "arrayification" of the per-zone
+LRU lists and statistics originally proposed and posted by Christoph Lameter.
+
+Note that the unevictable list does not use the lru pagevec mechanism. Rather,
+unevictable pages are placed directly on the page's zone's unevictable
+list under the zone lru_lock. The reason for this is to prevent stranding
+of pages on the unevictable list when one task has the page isolated from the
+lru and other tasks are changing the "evictability" state of the page.
+
+
+Unevictable LRU and Memory Controller Interaction
+
+The memory controller data structure automatically gets a per zone unevictable
+lru list as a result of the "arrayification" of the per-zone LRU lists. The
+memory controller tracks the movement of pages to and from the unevictable list.
+When a memory control group comes under memory pressure, the controller will
+not attempt to reclaim pages on the unevictable list. This has a couple of
+effects. Because the pages are "hidden" from reclaim on the unevictable list,
+the reclaim process can be more efficient, dealing only with pages that have
+a chance of being reclaimed. On the other hand, if too many of the pages
+charged to the control group are unevictable, the evictable portion of the
+working set of the tasks in the control group may not fit into the available
+memory. This can cause the control group to thrash or to oom-kill tasks.
+
+
+Unevictable LRU: Detecting Unevictable Pages
+
+The function page_evictable(page, vma) in vmscan.c determines whether a
+page is evictable or not. For ramfs and ram disk [brd] pages and pages in
+SHM_LOCKed regions, page_evictable() tests a new address space flag,
+AS_UNEVICTABLE, in the page's address space using a wrapper function.
+Wrapper functions are used to set, clear and test the flag to reduce the
+requirement for #ifdef's throughout the source code. AS_UNEVICTABLE is set on
+ramfs inode/mapping when it is created and on ram disk inode/mappings at open
+time. This flag remains for the life of the inode.
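+
+As a rough sketch--the exact helpers live in the header changes of this
+series, so treat the details as illustrative--the wrappers reduce to simple
+bit operations on the mapping's flags word:
+
+	static inline void mapping_set_unevictable(struct address_space *mapping)
+	{
+		set_bit(AS_UNEVICTABLE, &mapping->flags);
+	}
+
+	static inline int mapping_unevictable(struct address_space *mapping)
+	{
+		if (mapping)
+			return test_bit(AS_UNEVICTABLE, &mapping->flags);
+		return 0;
+	}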
+
+For shared memory regions, AS_UNEVICTABLE is set when an application successfully
+SHM_LOCKs the region and is removed when the region is SHM_UNLOCKed. Note that
+shmctl(SHM_LOCK, ...) does not populate the page tables for the region as does,
+for example, mlock(). So, we make no special effort to push any pages in the
+SHM_LOCKed region to the unevictable list. Vmscan will do this when/if it
+encounters the pages during reclaim. On SHM_UNLOCK, shmctl() scans the pages
+in the region and "rescues" them from the unevictable list if no other condition
+keeps them unevictable. If a SHM_LOCKed region is destroyed, the pages
+are also "rescued" from the unevictable list in the process of freeing them.
+
+page_evictable() detects mlock()ed pages by testing an additional page flag,
+PG_mlocked via the PageMlocked() wrapper. If the page is NOT mlocked, and a
+non-NULL vma is supplied, page_evictable() will check whether the vma is
+VM_LOCKED via is_mlocked_vma(). is_mlocked_vma() will SetPageMlocked() and
+update the appropriate statistics if the vma is VM_LOCKED. This method allows
+efficient "culling" of pages in the fault path that are being faulted in to
+VM_LOCKED vmas.
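+
+Putting the pieces above together, a simplified page_evictable()--details may
+differ from the actual patch--looks like:
+
+	int page_evictable(struct page *page, struct vm_area_struct *vma)
+	{
+		if (mapping_unevictable(page_mapping(page)))
+			return 0;	/* ramfs, ram disk, SHM_LOCKed shm */
+		if (PageMlocked(page) || (vma && is_mlocked_vma(vma, page)))
+			return 0;	/* mlocked */
+		return 1;
+	}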
+
+
+Unevictable Pages and Vmscan [shrink_*_list()]
+
+If unevictable pages are culled in the fault path, or moved to the
+unevictable list at mlock() or mmap() time, vmscan will never encounter the pages
+until they have become evictable again, for example, via munlock() and have
+been "rescued" from the unevictable list. However, there may be situations where
+we decide, for the sake of expediency, to leave an unevictable page on one of
+the regular active/inactive LRU lists for vmscan to deal with. Vmscan checks
+for such pages in all of the shrink_{active|inactive|page}_list() functions and
+will "cull" such pages that it encounters--that is, it diverts those pages to
+the unevictable list for the zone being scanned.
+
+There may be situations where a page is mapped into a VM_LOCKED vma, but the
+page is not marked as PageMlocked. Such pages will make it all the way to
+shrink_page_list() where they will be detected when vmscan walks the reverse
+map in try_to_unmap(). If try_to_unmap() returns SWAP_MLOCK, shrink_page_list()
+will cull the page at that point.
+
+Note that for anonymous pages, shrink_page_list() attempts to add the page to
+the swap cache before it tries to unmap the page. To avoid this unnecessary
+consumption of swap space, shrink_page_list() calls try_to_munlock() to check
+whether any VM_LOCKED vmas map the page without attempting to unmap the page.
+If try_to_munlock() returns SWAP_MLOCK, shrink_page_list() will cull the page
+without consuming swap space. try_to_munlock() will be described below.
+
+
+Mlocked Pages: Prior Work
+
+The "Unevictable Mlocked Pages" infrastructure is based on work originally posted
+by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU". Nick
+posted his patch as an alternative to a patch posted by Christoph Lameter to
+achieve the same objective--hiding mlocked pages from vmscan. In Nick's patch,
+he used one of the struct page lru list link fields as a count of VM_LOCKED
+vmas that map the page. This use of the link field for a count prevented the
+management of the pages on an LRU list. When Nick's patch was integrated with
+the Unevictable LRU work, the count was replaced by walking the reverse map to
+determine whether any VM_LOCKED vmas mapped the page. More on this below.
+The primary reason for wanting to keep mlocked pages on an LRU list is that
+mlocked pages are migratable, and the LRU list is used to arbitrate tasks
+attempting to migrate the same page. Whichever task succeeds in "isolating"
+the page from the LRU performs the migration.
+
+
+Mlocked Pages: Basic Management
+
+Mlocked pages--pages mapped into a VM_LOCKED vma--represent one class of
+unevictable pages. When such a page has been "noticed" by the memory
+management subsystem, the page is marked with the PG_mlocked [PageMlocked()]
+flag. A PageMlocked() page will be placed on the unevictable LRU list when
+it is added to the LRU. Pages can be "noticed" by memory management in
+several places:
+
+1) in the mlock()/mlockall() system call handlers.
+2) in the mmap() system call handler when mmap()ing a region with the
+ MAP_LOCKED flag, or mmap()ing a region in a task that has called
+ mlockall() with the MCL_FUTURE flag. Both of these conditions result
+ in the VM_LOCKED flag being set for the vma.
+3) in the fault path, if mlocked pages are "culled" in the fault path,
+ and when a VM_LOCKED stack segment is expanded.
+4) as mentioned above, in vmscan:shrink_page_list() when attempting to
+ reclaim a page in a VM_LOCKED vma--via try_to_unmap() or try_to_munlock().
+
+Mlocked pages become unlocked and rescued from the unevictable list when:
+
+1) mapped in a range unlocked via the munlock()/munlockall() system calls.
+2) munmapped() out of the last VM_LOCKED vma that maps the page, including
+ unmapping at task exit.
+3) when the page is truncated from the last VM_LOCKED vma of an mmap()ed file.
+4) before a page is COWed in a VM_LOCKED vma.
+
+
+Mlocked Pages: mlock()/mlockall() System Call Handling
+
+Both [do_]mlock() and [do_]mlockall() system call handlers call mlock_fixup()
+for each vma in the range specified by the call. In the case of mlockall(),
+this is the entire active address space of the task. Note that mlock_fixup()
+is used for both mlock()ing and munlock()ing a range of memory. A call to
+mlock() an already VM_LOCKED vma, or to munlock() a vma that is not VM_LOCKED
+is treated as a no-op--mlock_fixup() simply returns.
+
+If the vma passes some filtering described in "Mlocked Pages: Filtering Vmas"
+below, mlock_fixup() will attempt to merge the vma with its neighbors or split
+off a subset of the vma if the range does not cover the entire vma. Once the
+vma has been merged or split or neither, mlock_fixup() will call
+__mlock_vma_pages_range() to fault in the pages via get_user_pages() and
+to mark the pages as mlocked via mlock_vma_page().
+
+Note that the vma being mlocked might be mapped with PROT_NONE. In this case,
+get_user_pages() will be unable to fault in the pages. That's OK. If pages
+do end up getting faulted into this VM_LOCKED vma, we'll handle them in the
+fault path or in vmscan.
+
+Also note that a page returned by get_user_pages() could be truncated or
+migrated out from under us, while we're trying to mlock it. To detect
+this, __mlock_vma_pages_range() tests the page_mapping after acquiring
+the page lock. If the page is still associated with its mapping, we'll
+go ahead and call mlock_vma_page(). If the mapping is gone, we just
+unlock the page and move on. Worst case, this results in a page mapped
+in a VM_LOCKED vma remaining on a normal LRU list without being
+PageMlocked(). Again, vmscan will detect and cull such pages.
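+
+The per-page handling in __mlock_vma_pages_range() is, schematically [error
+paths trimmed; a sketch, not the literal code]:
+
+	lock_page(page);
+	if (page->mapping)		/* not truncated/migrated meanwhile */
+		mlock_vma_page(page);
+	unlock_page(page);
+	put_page(page);			/* drop the get_user_pages() reference */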
+
+mlock_vma_page(), called with the page locked [N.B., not "mlocked"], will
+TestSetPageMlocked() for each page returned by get_user_pages(). We use
+TestSetPageMlocked() because the page might already be mlocked by another
+task/vma and we don't want to do extra work. We especially do not want to
+count an mlocked page more than once in the statistics. If the page was
+already mlocked, mlock_vma_page() is done.
+
+If the page was NOT already mlocked, mlock_vma_page() attempts to isolate the
+page from the LRU, as it is likely on the appropriate active or inactive list
+at that time. If isolate_lru_page() succeeds, mlock_vma_page() will
+putback the page--putback_lru_page()--which will notice that the page is now
+mlocked and divert the page to the zone's unevictable LRU list. If
+mlock_vma_page() is unable to isolate the page from the LRU, vmscan will handle
+it later if/when it attempts to reclaim the page.
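+
+A sketch of mlock_vma_page() as described [statistics updates elided; the
+real function may differ in detail]:
+
+	void mlock_vma_page(struct page *page)
+	{
+		BUG_ON(!PageLocked(page));
+
+		if (!TestSetPageMlocked(page)) {
+			/* update the zone's mlocked page statistics here */
+			if (!isolate_lru_page(page))
+				putback_lru_page(page);	/* diverts to unevictable list */
+			/* else: leave it; vmscan will cull the page later */
+		}
+	}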
+
+
+Mlocked Pages: Filtering Vmas
+
+mlock_fixup() filters several classes of "special" vmas:
+
+1) vmas with VM_IO|VM_PFNMAP set are skipped entirely. The pages behind
+ these mappings are inherently pinned, so we don't need to mark them as
+ mlocked. In any case, most of the pages have no struct page in which to
+ so mark the page. Because of this, get_user_pages() will fail for these
+ vmas, so there is no sense in attempting to visit them.
+
+2) vmas mapping hugetlbfs pages are already effectively pinned into memory.
+ We don't need nor want to mlock() these pages. However, to preserve the
+ prior behavior of mlock()--before the unevictable/mlock changes--mlock_fixup()
+ will call make_pages_present() in the hugetlbfs vma range to allocate the
+ huge pages and populate the ptes.
+
+3) vmas with VM_DONTEXPAND|VM_RESERVED are generally user space mappings of
+ kernel pages, such as the vdso page, relay channel pages, etc. These pages
+ are inherently unevictable and are not managed on the LRU lists.
+ mlock_fixup() treats these vmas the same as hugetlbfs vmas. It calls
+ make_pages_present() to populate the ptes.
+
+Note that for all of these special vmas, mlock_fixup() does not set the
+VM_LOCKED flag. Therefore, we won't have to deal with them later during
+munlock() or munmap()--for example, at task exit. Neither does mlock_fixup()
+account these vmas against the task's "locked_vm".
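+
+The filtering described above boils down to a test along these lines [a
+sketch, not the literal mlock_fixup() code; 'start' and 'end' stand for the
+range being processed]:
+
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+		goto out;				/* skip entirely */
+
+	if (is_vm_hugetlb_page(vma) ||
+	    (vma->vm_flags & (VM_DONTEXPAND | VM_RESERVED))) {
+		make_pages_present(start, end);		/* populate the ptes */
+		goto out;				/* no VM_LOCKED, no accounting */
+	}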
+
+Mlocked Pages: Downgrading the Mmap Semaphore
+
+mlock_fixup() must be called with the mmap semaphore held for write, because
+it may have to merge or split vmas. However, mlocking a large region of
+memory can take a long time--especially if vmscan must reclaim pages to
+satisfy the region's requirements. Faulting in a large region with the mmap
+semaphore held for write can hold off other faults on the address space, in
+the case of a multi-threaded task. It can also hold off scans of the task's
+address space via /proc. While testing under heavy load, it was observed that
+the ps(1) command could be held off for many minutes while a large segment was
+mlock()ed down.
+
+To address this issue, and to make the system more responsive during mlock()ing
+of large segments, mlock_fixup() downgrades the mmap semaphore to read mode
+during the call to __mlock_vma_pages_range(). This works fine. However, the
+callers of mlock_fixup() expect the semaphore to be returned in write mode.
+So, mlock_fixup() "upgrades" the semaphore to write mode. Linux does not
+support an atomic upgrade_sem() call, so mlock_fixup() must drop the semaphore
+and reacquire it in write mode. In a multi-threaded task, it is possible for
+the task memory map to change while the semaphore is dropped. Therefore,
+mlock_fixup() looks up the vma at the range start address after reacquiring
+the semaphore in write mode and verifies that it still covers the original
+range. If not, mlock_fixup() returns an error [-EAGAIN]. All callers of
+mlock_fixup() have been changed to deal with this new error condition.
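+
+The downgrade/revalidate dance looks roughly like this [sketch only;
+argument lists and locals are abbreviated]:
+
+	downgrade_write(&mm->mmap_sem);
+	ret = __mlock_vma_pages_range(vma, start, end);	/* fault + mlock pages */
+
+	up_read(&mm->mmap_sem);
+	down_write(&mm->mmap_sem);	/* no atomic read->write upgrade exists */
+
+	vma = find_vma(mm, start);
+	if (!vma || vma->vm_start > start || vma->vm_end < end)
+		ret = -EAGAIN;		/* the map changed while unlocked */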
+
+Note: when munlocking a region, all of the pages should already be resident--
+unless we have racing threads mlocking() and munlocking() regions. So,
+unlocking should not have to wait for page allocations nor faults of any kind.
+Therefore mlock_fixup() does not downgrade the semaphore for munlock().
+
+
+Mlocked Pages: munlock()/munlockall() System Call Handling
+
+The munlock() and munlockall() system calls are handled by the same functions--
+do_mlock[all]()--as the mlock() and mlockall() system calls with the unlock
+vs lock operation indicated by an argument. So, these system calls are also
+handled by mlock_fixup(). Again, if called for an already munlock()ed vma,
+mlock_fixup() simply returns. Because of the vma filtering discussed above,
+VM_LOCKED will not be set in any "special" vmas. So, these vmas will be
+ignored for munlock.
+
+If the vma is VM_LOCKED, mlock_fixup() again attempts to merge or split off
+the specified range. The range is then munlocked via the function
+__munlock_vma_pages_range(). Because the vma access protections could have
+been changed to PROT_NONE after faulting in and mlocking some pages,
+get_user_pages() is unreliable for visiting these pages for munlocking. We
+don't want to leave pages mlocked(), so __munlock_vma_pages_range() uses a
+custom page table walker to find all pages mapped into the specified range.
+Note that this again assumes that all pages in the mlocked() range are resident
+and mapped by the task's page table.
+
+As with __mlock_vma_pages_range(), unlocking can race with truncation and
+migration. It is very important that munlock of a page succeeds, lest we
+leak pages by stranding them in the mlocked state on the unevictable list.
+The munlock page walk pte handler resolves the race with page migration
+by checking the pte for a special swap pte indicating that the page is
+being migrated. If this is the case, the pte handler will wait for the
+migration entry to be replaced and then refetch the pte for the new page.
+Once the pte handler has locked the page, it checks the page_mapping to
+ensure that it still exists. If not, the handler unlocks the page and
+retries the entire process after refetching the pte.
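+
+In pseudo-C, the migration race check in the pte handler has the following
+shape [a sketch; the real handler also deals with locking and refcounting]:
+
+	pte = *ptep;
+	if (!pte_present(pte) && !pte_none(pte)) {
+		swp_entry_t entry = pte_to_swp_entry(pte);
+
+		if (is_migration_entry(entry)) {
+			migration_entry_wait(mm, pmd, addr);
+			goto retry;	/* refetch the pte for the new page */
+		}
+	}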
+
+The munlock page walk pte handler unlocks individual pages by calling
+munlock_vma_page(). munlock_vma_page() unconditionally clears the PG_mlocked
+flag using TestClearPageMlocked(). As with mlock_vma_page(), munlock_vma_page()
+uses the Test*PageMlocked() function to handle the case where the page might
+have already been unlocked by another task. If the page was mlocked,
+munlock_vma_page() updates the zone statistics for the number of mlocked
+pages. Note, however, that at this point we haven't checked whether the page
+is mapped by other VM_LOCKED vmas.
+
+We can't call try_to_munlock(), the function that walks the reverse map to check
+for other VM_LOCKED vmas, without first isolating the page from the LRU.
+try_to_munlock() is a variant of try_to_unmap() and thus requires that the page
+not be on an lru list. [More on these below.] However, the call to
+isolate_lru_page() could fail, in which case we couldn't try_to_munlock().
+So, we go ahead and clear PG_mlocked up front, as this might be the only chance
+we have. If we can successfully isolate the page, we go ahead and
+try_to_munlock(), which will restore the PG_mlocked flag and update the zone
+page statistics if it finds another vma holding the page mlocked. If we fail
+to isolate the page, we'll have left a potentially mlocked page on the LRU.
+This is fine, because we'll catch it later when/if vmscan tries to reclaim the
+page. This should be relatively rare.
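+
+A sketch of munlock_vma_page() and its interaction with try_to_munlock(),
+per the description above [statistics and details elided]:
+
+	void munlock_vma_page(struct page *page)
+	{
+		BUG_ON(!PageLocked(page));
+
+		if (TestClearPageMlocked(page)) {
+			/* update the zone's mlocked page statistics here */
+			if (!isolate_lru_page(page)) {
+				try_to_munlock(page);	/* may re-set PG_mlocked */
+				putback_lru_page(page);
+			}
+			/* else: vmscan will recheck the page if it reclaims it */
+		}
+	}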
+
+Mlocked Pages: Migrating Them...
+
+A page that is being migrated has been isolated from the lru lists and is
+held locked across unmapping of the page, updating the page's mapping
+[address_space] entry and copying the contents and state, until the
+page table entry has been replaced with an entry that refers to the new
+page. Linux supports migration of mlocked pages and other unevictable
+pages. This involves simply moving the PageMlocked and PageUnevictable states
+from the old page to the new page.
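+
+The state transfer itself is simple [schematic; the flag accessors shown
+follow the PageMlocked()/PageUnevictable() naming used in this document]:
+
+	if (PageUnevictable(page))
+		SetPageUnevictable(newpage);
+	if (PageMlocked(page))
+		SetPageMlocked(newpage);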
+
+Note that page migration can race with mlocking or munlocking of the same
+page. This has been discussed from the mlock/munlock perspective in the
+respective sections above. Both processes [migration, m[un]locking] hold
+the page locked. This provides the first level of synchronization. Page
+migration zeros out the page_mapping of the old page before unlocking it,
+so m[un]lock can skip these pages. However, as discussed above, munlock
+must wait for a migrating page to be replaced with the new page to prevent
+the new page from remaining mlocked outside of any VM_LOCKED vma.
+
+To ensure that we don't strand pages on the unevictable list because of a
+race between munlock and migration, we must also prevent the munlock pte
+handler from acquiring the old or new page lock from the time that the
+migration subsystem acquires the old page lock, until either migration
+succeeds and the new page is added to the lru or migration fails and
+the old page is put back to the lru. To achieve this coordination,
+the migration subsystem places the new page on success, or the old
+page on failure, back on the lru lists before dropping the respective
+page's lock. It uses the putback_lru_page() function to accomplish this,
+which rechecks the page's overall evictability and adjusts the page
+flags accordingly. To free the old page on success or the new page on
+failure, the migration subsystem just drops what it knows to be the last
+page reference via put_page().
+
+
+Mlocked Pages: mmap(MAP_LOCKED) System Call Handling
+
+In addition to the mlock()/mlockall() system calls, an application can request
+that a region of memory be mlocked using the MAP_LOCKED flag with the mmap()
+call. Furthermore, any mmap() call or brk() call that expands the heap by a
+task that has previously called mlockall() with the MCL_FUTURE flag will result
+in the newly mapped memory being mlocked. Before the unevictable/mlock changes,
+the kernel simply called make_pages_present() to allocate pages and populate
+the page table.
+
+To mlock a range of memory under the unevictable/mlock infrastructure, the
+mmap() handler and task address space expansion functions call
+mlock_vma_pages_range() specifying the vma and the address range to mlock.
+mlock_vma_pages_range() filters vmas like mlock_fixup(), as described above in
+"Mlocked Pages: Filtering Vmas". It will clear the VM_LOCKED flag, which will
+have already been set by the caller, in filtered vmas. Thus, these vmas need
+not be visited for munlock when the region is unmapped.
+
+For "normal" vmas, mlock_vma_pages_range() calls __mlock_vma_pages_range() to
+fault/allocate the pages and mlock them. Again, like mlock_fixup(),
+mlock_vma_pages_range() downgrades the mmap semaphore to read mode before
+attempting to fault/allocate and mlock the pages; and "upgrades" the semaphore
+back to write mode before returning.
+
+The callers of mlock_vma_pages_range() will have already added the memory
+range to be mlocked to the task's "locked_vm". To account for filtered vmas,
+mlock_vma_pages_range() returns the number of pages NOT mlocked. All of the
+callers then subtract a non-negative return value from the task's locked_vm.
+A negative return value represents an error--for example, from get_user_pages()
+attempting to fault in a vma with PROT_NONE access. In this case, we leave
+the memory range accounted as locked_vm, as the protections could be changed
+later and pages allocated into that region.
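+
+Caller-side, the accounting described above amounts to [a sketch; variable
+names are illustrative]:
+
+	nr_pages = (end - start) >> PAGE_SHIFT;
+	mm->locked_vm += nr_pages;
+
+	ret = mlock_vma_pages_range(vma, start, end);
+	if (ret >= 0)
+		mm->locked_vm -= ret;	/* pages filtered out, not mlocked */
+	/* ret < 0: error; leave the range accounted in locked_vm */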
+
+
+Mlocked Pages: munmap()/exit()/exec() System Call Handling
+
+When unmapping an mlocked region of memory, whether by an explicit call to
+munmap() or via an internal unmap from exit() or exec() processing, we must
+munlock the pages if we're removing the last VM_LOCKED vma that maps the pages.
+Before the unevictable/mlock changes, mlocking did not mark the pages in any way,
+so unmapping them required no processing.
+
+To munlock a range of memory under the unevictable/mlock infrastructure, the
+munmap() handler and task address space tear down function call
+munlock_vma_pages_all(). The name reflects the observation that one always
+specifies the entire vma range when munlock()ing during unmap of a region.
+Because of the vma filtering when mlocking() regions, only "normal" vmas that
+actually contain mlocked pages will be passed to munlock_vma_pages_all().
+
+munlock_vma_pages_all() clears the VM_LOCKED vma flag and, like mlock_fixup()
+for the munlock case, calls __munlock_vma_pages_range() to walk the page table
+for the vma's memory range and munlock_vma_page() each resident page mapped by
+the vma. This effectively munlocks the page, only if this is the last
+VM_LOCKED vma that maps the page.
+
+
+Mlocked Pages: try_to_unmap()
+
+[Note: the code changes represented by this section are really quite small
+compared to the text to describe what is happening and why, and to discuss the
+implications.]
+
+Pages can, of course, be mapped into multiple vmas. Some of these vmas may
+have VM_LOCKED flag set. It is possible for a page mapped into one or more
+VM_LOCKED vmas not to have the PG_mlocked flag set and therefore reside on one
+of the active or inactive LRU lists. This could happen if, for example, a
+task in the process of munlock()ing the page could not isolate the page from
+the LRU. As a result, vmscan/shrink_page_list() might encounter such a page
+as described in "Unevictable Pages and Vmscan [shrink_*_list()]". To
+handle this situation, try_to_unmap() has been enhanced to check for VM_LOCKED
+vmas while it is walking a page's reverse map.
+
+try_to_unmap() is always called, by either vmscan for reclaim or for page
+migration, with the argument page locked and isolated from the LRU. BUG_ON()
+assertions enforce this requirement. Separate functions handle anonymous and
+mapped file pages, as these types of pages have different reverse map
+mechanisms.
+
+ try_to_unmap_anon()
+
+To unmap anonymous pages, each vma in the list anchored in the anon_vma must be
+visited--at least until a VM_LOCKED vma is encountered. If the page is being
+unmapped for migration, VM_LOCKED vmas do not stop the process because mlocked
+pages are migratable. However, for reclaim, if the page is mapped into a
+VM_LOCKED vma, the scan stops. try_to_unmap() attempts to acquire the mmap
+semaphore of the mm_struct to which the vma belongs in read mode. If this is
+successful, try_to_unmap() will mlock the page via mlock_vma_page()--we
+wouldn't have gotten to try_to_unmap() if the page were already mlocked--and
+will return SWAP_MLOCK, indicating that the page is unevictable. If the
+mmap semaphore cannot be acquired, we are not sure whether the page is really
+unevictable or not. In this case, try_to_unmap() will return SWAP_AGAIN.
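+
+The per-vma check described above has roughly this shape [a sketch; the
+"migration" flag and the exact control flow are simplified]:
+
+	if (!migration && (vma->vm_flags & VM_LOCKED)) {
+		if (down_read_trylock(&vma->vm_mm->mmap_sem)) {
+			if (vma->vm_flags & VM_LOCKED) {	/* recheck under the sem */
+				mlock_vma_page(page);
+				ret = SWAP_MLOCK;
+			}
+			up_read(&vma->vm_mm->mmap_sem);
+		} else
+			ret = SWAP_AGAIN;	/* can't tell; let vmscan retry */
+		break;				/* stop the reverse map scan */
+	}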
+
+ try_to_unmap_file() -- linear mappings
+
+Unmapping of a mapped file page works the same, except that the scan visits
+all vmas that map the page's index/page offset in the page's mapping's
+reverse map priority search tree. It must also visit each vma in the page's
+mapping's non-linear list, if the list is non-empty. As for anonymous pages,
+on encountering a VM_LOCKED vma for a mapped file page, try_to_unmap() will
+attempt to acquire the associated mm_struct's mmap semaphore to mlock the page,
+returning SWAP_MLOCK if this is successful, and SWAP_AGAIN, if not.
+
+ try_to_unmap_file() -- non-linear mappings
+
+If a page's mapping contains a non-empty non-linear mapping vma list, then
+try_to_un{map|lock}() must also visit each vma in that list to determine
+whether the page is mapped in a VM_LOCKED vma. Again, the scan must visit
+all vmas in the non-linear list to ensure that the page is not/should not be
+mlocked. If a VM_LOCKED vma is found in the list, the scan could terminate.
+However, there is no easy way to determine whether the page is actually mapped
+in a given vma--either for unmapping or testing whether the VM_LOCKED vma
+actually pins the page.
+
+So, try_to_unmap_file() handles non-linear mappings by scanning a certain
+number of pages--a "cluster"--in each non-linear vma associated with the page's
+mapping, for each file mapped page that vmscan tries to unmap. If this happens
+to unmap the page we're trying to unmap, try_to_unmap() will notice this on
+return--(page_mapcount(page) == 0)--and return SWAP_SUCCESS. Otherwise, it
+will return SWAP_AGAIN, causing vmscan to recirculate this page. We take
+advantage of the cluster scan in try_to_unmap_cluster() as follows:
+
+For each non-linear vma, try_to_unmap_cluster() attempts to acquire the mmap
+semaphore of the associated mm_struct for read without blocking. If this
+attempt is successful and the vma is VM_LOCKED, try_to_unmap_cluster() will
+retain the mmap semaphore for the scan; otherwise it drops it here. Then,
+for each page in the cluster, if we're holding the mmap semaphore for a locked
+vma, try_to_unmap_cluster() calls mlock_vma_page() to mlock the page. This
+call is a no-op if the page is already locked, but will mlock any pages in
+the non-linear mapping that happen to be unlocked. If one of the pages so
+mlocked is the page passed in to try_to_unmap(), try_to_unmap_cluster() will
+return SWAP_MLOCK, rather than the default SWAP_AGAIN. This will allow vmscan
+to cull the page, rather than recirculating it on the inactive list. Again,
+if try_to_unmap_cluster() cannot acquire the vma's mmap sem, it returns
+SWAP_AGAIN, indicating that the page is mapped by a VM_LOCKED vma, but
+couldn't be mlocked.
+
+
+Mlocked Pages: try_to_munlock() Reverse Map Scan
+
+TODO/FIXME: a better name might be page_mlocked()--analogous to the
+page_referenced() reverse map walker--especially if we continue to call this
+from shrink_page_list(). See related TODO/FIXME below.
+
+When munlock_vma_page()--see "Mlocked Pages: munlock()/munlockall() System
+Call Handling" above--tries to munlock a page, or when shrink_page_list()
+encounters an anonymous page that is not yet in the swap cache, they need to
+determine whether or not the page is mapped by any VM_LOCKED vma, without
+actually attempting to unmap all ptes from the page. For this purpose, the
+unevictable/mlock infrastructure introduced a variant of try_to_unmap() called
+try_to_munlock().
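+
+At the top level, try_to_munlock() is little more than a dispatcher [a
+sketch; the extra argument is the unlock-versus-unmap flag mentioned below]:
+
+	int try_to_munlock(struct page *page)
+	{
+		VM_BUG_ON(!PageLocked(page) || PageLRU(page));
+
+		if (PageAnon(page))
+			return try_to_unmap_anon(page, 1);
+		else
+			return try_to_unmap_file(page, 1);
+	}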
+
+try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
+mapped file pages with an additional argument specifying unlock versus unmap
+processing. Again, these functions walk the respective reverse maps looking
+for VM_LOCKED vmas. When such a vma is found for anonymous pages and file
+pages mapped in linear VMAs, as in the try_to_unmap() case, the functions
+attempt to acquire the associated mmap semaphore, mlock the page via
+mlock_vma_page() and return SWAP_MLOCK. This effectively undoes the
+pre-clearing of the page's PG_mlocked done by munlock_vma_page() and informs
+shrink_page_list() that the anonymous page should be culled rather than added
+to the swap cache in preparation for a try_to_unmap() that will almost
+certainly fail.
+
+If try_to_unmap() is unable to acquire a VM_LOCKED vma's associated mmap
+semaphore, it will return SWAP_AGAIN. This will allow shrink_page_list()
+to recycle the page on the inactive list and hope that it has better luck
+with the page next time.
+
+For file pages mapped into non-linear vmas, the try_to_munlock() logic works
+slightly differently. On encountering a VM_LOCKED non-linear vma that might
+map the page, try_to_munlock() returns SWAP_AGAIN without actually mlocking
+the page. munlock_vma_page() will just leave the page unlocked and let
+vmscan deal with it--the usual fallback position.
+
+Note that try_to_munlock()'s reverse map walk must visit every vma in a page's
+reverse map to determine that a page is NOT mapped into any VM_LOCKED vma.
+However, the scan can terminate when it encounters a VM_LOCKED vma and can
+successfully acquire the vma's mmap semaphore for read and mlock the page.
+Although try_to_munlock() can be called many [very many!] times when
+munlock()ing a large region or tearing down a large address space that has been
+mlocked via mlockall(), overall this is a fairly rare event. In addition,
+although shrink_page_list() calls try_to_munlock() for every anonymous page that
+it handles that is not yet in the swap cache, on average anonymous pages will
+have very short reverse map lists.
+
+Mlocked Pages: Page Reclaim in shrink_*_list()
+
+shrink_active_list() culls any obviously unevictable pages--i.e.,
+!page_evictable(page, NULL)--diverting these to the unevictable lru
+list. However, shrink_active_list() only sees unevictable pages that
+made it onto the active/inactive lru lists. Note that these pages do not
+have PageUnevictable set--otherwise, they would be on the unevictable list and
+shrink_active_list would never see them.
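+
+The cull in shrink_active_list()'s scan loop is, schematically:
+
+	if (unlikely(!page_evictable(page, NULL))) {
+		putback_lru_page(page);	/* goes to the unevictable list */
+		continue;
+	}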
+
+Some examples of these unevictable pages on the LRU lists are:
+
+1) ramfs and ram disk pages that have been placed on the lru lists when
+ first allocated.
+
+2) SHM_LOCKed shared memory pages. shmctl(SHM_LOCK) does not attempt to
+ allocate or fault in the pages in the shared memory region. The pages
+ are faulted in, and placed on the regular LRU lists, when an application
+ first accesses them after SHM_LOCKing the segment.
+
+3) Mlocked pages that could not be isolated from the lru and moved to the
+ unevictable list in mlock_vma_page().
+
+4) Pages mapped into multiple VM_LOCKED vmas, but try_to_munlock() couldn't
+ acquire the vma's mmap semaphore to test the flags and set PageMlocked.
+ munlock_vma_page() was forced to let the page back on to the normal
+ LRU list for vmscan to handle.
+
+shrink_inactive_list() also culls any unevictable pages that it finds
+on the inactive lists, again diverting them to the appropriate zone's unevictable
+lru list. shrink_inactive_list() should only see SHM_LOCKed pages that became
+SHM_LOCKed after shrink_active_list() had moved them to the inactive list, or
+pages mapped into VM_LOCKED vmas that munlock_vma_page() couldn't isolate from
+the lru to recheck via try_to_munlock(). shrink_inactive_list() won't notice
+the latter, but will pass them on to shrink_page_list().
+
+shrink_page_list() again culls obviously unevictable pages that it could
+encounter for similar reasons to shrink_inactive_list(). As already discussed,
+shrink_page_list() proactively looks for anonymous pages that should have
+PG_mlocked set but don't--these would not be detected by page_evictable()--to
+avoid adding them to the swap cache unnecessarily. File pages mapped into
+VM_LOCKED vmas but without PG_mlocked set will make it all the way to
+try_to_unmap(). shrink_page_list() will divert them to the unevictable list when
+try_to_unmap() returns SWAP_MLOCK, as discussed above.
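+
+Pulling the shrink_page_list() pieces together [a heavily abridged sketch;
+label names and gfp details are illustrative]:
+
+	if (unlikely(!page_evictable(page, NULL)))
+		goto cull_mlocked;		/* straight to the unevictable list */
+
+	if (PageAnon(page) && !PageSwapCache(page)) {
+		if (try_to_munlock(page) == SWAP_MLOCK)
+			goto cull_mlocked;	/* don't waste swap space */
+		if (!add_to_swap(page, GFP_ATOMIC))
+			goto activate_locked;
+	}
+
+	/* ... later, when unmapping ... */
+	if (page_mapped(page) && mapping) {
+		switch (try_to_unmap(page, 0)) {
+		case SWAP_MLOCK:
+			goto cull_mlocked;
+		/* SWAP_FAIL, SWAP_AGAIN, SWAP_SUCCESS handled as before */
+		}
+	}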
+
+TODO/FIXME: If we can enhance the swap cache to reliably remove entries
+with page_count(page) > 2, as long as all ptes are mapped to the page and
+not the swap entry, we can probably remove the call to try_to_munlock() in
+shrink_page_list() and just remove the page from the swap cache when
+try_to_unmap() returns SWAP_MLOCK. Currently, remove_exclusive_swap_page()
+doesn't seem to allow that.
+
+
--
All Rights Reversed
* Re: [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable
2008-06-11 18:42 ` [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable Rik van Riel
@ 2008-06-12 0:54 ` Nick Piggin
2008-06-12 17:29 ` Rik van Riel
0 siblings, 1 reply; 13+ messages in thread
From: Nick Piggin @ 2008-06-12 0:54 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel, Andrew Morton, Lee Schermerhorn, Kosaki Motohiro,
linux-mm, Eric Whitney
On Thursday 12 June 2008 04:42, Rik van Riel wrote:
> From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
>
> Christoph Lameter pointed out that ram disk pages also clutter the
> LRU lists. When vmscan finds them dirty and tries to clean them,
> the ram disk writeback function just redirties the page so that it
> goes back onto the active list. Round and round she goes...
> Index: linux-2.6.26-rc5-mm2/drivers/block/brd.c
> ===================================================================
> --- linux-2.6.26-rc5-mm2.orig/drivers/block/brd.c	2008-06-10 10:46:18.000000000 -0400
> +++ linux-2.6.26-rc5-mm2/drivers/block/brd.c	2008-06-10 16:47:23.000000000 -0400
> @@ -374,8 +374,21 @@ static int brd_ioctl(struct inode *inode
> return error;
> }
>
> +/*
> + * brd_open():
> + * Just mark the mapping as containing unevictable pages
> + */
> +static int brd_open(struct inode *inode, struct file *filp)
> +{
> + struct address_space *mapping = inode->i_mapping;
> +
> + mapping_set_unevictable(mapping);
> + return 0;
> +}
> +
> static struct block_device_operations brd_fops = {
> .owner = THIS_MODULE,
> + .open = brd_open,
> .ioctl = brd_ioctl,
> #ifdef CONFIG_BLK_DEV_XIP
> .direct_access = brd_direct_access,
This isn't the case for brd any longer. It doesn't use the buffer
cache as its backing store, so the buffer cache is reclaimable.
* Re: [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable
2008-06-12 0:54 ` Nick Piggin
@ 2008-06-12 17:29 ` Rik van Riel
2008-06-12 17:37 ` Nick Piggin
0 siblings, 1 reply; 13+ messages in thread
From: Rik van Riel @ 2008-06-12 17:29 UTC (permalink / raw)
To: Nick Piggin
Cc: linux-kernel, Andrew Morton, Lee Schermerhorn, Kosaki Motohiro,
linux-mm, Eric Whitney
On Thu, 12 Jun 2008 10:54:18 +1000
Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> On Thursday 12 June 2008 04:42, Rik van Riel wrote:
> > From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
> >
> > Christoph Lameter pointed out that ram disk pages also clutter the
> > LRU lists. When vmscan finds them dirty and tries to clean them,
> > the ram disk writeback function just redirties the page so that it
> > goes back onto the active list. Round and round she goes...
> This isn't the case for brd any longer. It doesn't use the buffer
> cache as its backing store, so the buffer cache is reclaimable.
What does that mean?
I know that pages of files that got paged into the page
cache from the ramdisk can be evicted (back to the ram
disk), but how do the brd pages themselves behave?
--
All Rights Reversed
* Re: [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable
2008-06-12 17:29 ` Rik van Riel
@ 2008-06-12 17:37 ` Nick Piggin
2008-06-12 17:50 ` Rik van Riel
0 siblings, 1 reply; 13+ messages in thread
From: Nick Piggin @ 2008-06-12 17:37 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel, Andrew Morton, Lee Schermerhorn, Kosaki Motohiro,
linux-mm, Eric Whitney
On Friday 13 June 2008 03:29, Rik van Riel wrote:
> On Thu, 12 Jun 2008 10:54:18 +1000
>
> Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > On Thursday 12 June 2008 04:42, Rik van Riel wrote:
> > > From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
> > >
> > > Christoph Lameter pointed out that ram disk pages also clutter the
> > > LRU lists. When vmscan finds them dirty and tries to clean them,
> > > the ram disk writeback function just redirties the page so that it
> > > goes back onto the active list. Round and round she goes...
> >
> > This isn't the case for brd any longer. It doesn't use the buffer
> > cache as its backing store, so the buffer cache is reclaimable.
>
> What does that mean?
That your patch is obsolete.
> I know that pages of files that got paged into the page
> cache from the ramdisk can be evicted (back to the ram
> disk), but how do the brd pages themselves behave?
They are not reclaimable. But they have nothing (directly) to do
with brd's i_mapping address space, nor are they put on any LRU
lists.
* Re: [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable
2008-06-12 17:37 ` Nick Piggin
@ 2008-06-12 17:50 ` Rik van Riel
2008-06-12 17:57 ` Nick Piggin
0 siblings, 1 reply; 13+ messages in thread
From: Rik van Riel @ 2008-06-12 17:50 UTC (permalink / raw)
To: Nick Piggin
Cc: linux-kernel, Andrew Morton, Lee Schermerhorn, Kosaki Motohiro,
linux-mm, Eric Whitney
On Fri, 13 Jun 2008 03:37:56 +1000
Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > This isn't the case for brd any longer. It doesn't use the buffer
> > > cache as its backing store, so the buffer cache is reclaimable.
> > I know that pages of files that got paged into the page
> > cache from the ramdisk can be evicted (back to the ram
> > disk), but how do the brd pages themselves behave?
>
> They are not reclaimable. But they have nothing (directly) to do
> with brd's i_mapping address space, nor are they put on any LRU
> lists.
Ahhhh, doh!
I'm mailing Andrew a patch right now that undoes the
brd.c part of patch 14/24. The ramdisk part is correct
and should stay (afaict).
--
All Rights Reversed
* Re: [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable
2008-06-12 17:50 ` Rik van Riel
@ 2008-06-12 17:57 ` Nick Piggin
0 siblings, 0 replies; 13+ messages in thread
From: Nick Piggin @ 2008-06-12 17:57 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel, Andrew Morton, Lee Schermerhorn, Kosaki Motohiro,
linux-mm, Eric Whitney
On Friday 13 June 2008 03:50, Rik van Riel wrote:
> On Fri, 13 Jun 2008 03:37:56 +1000
>
> Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > > > This isn't the case for brd any longer. It doesn't use the buffer
> > > > cache as its backing store, so the buffer cache is reclaimable.
> > >
> > > I know that pages of files that got paged into the page
> > > cache from the ramdisk can be evicted (back to the ram
> > > disk), but how do the brd pages themselves behave?
> >
> > They are not reclaimable. But they have nothing (directly) to do
> > with brd's i_mapping address space, nor are they put on any LRU
> > lists.
>
> Ahhhh, doh!
>
> I'm mailing Andrew a patch right now that undoes the
> brd.c part of patch 14/24. The ramdisk part is correct
> and should stay (afaict).
The ramfs part? Yes, that looks correct to me.
* [PATCH] collect lru meminfo statistics from correct offset
[not found] ` <20080613134827.5dbac5ac@cuia.bos.redhat.com>
@ 2008-06-13 20:21 ` Lee Schermerhorn
0 siblings, 0 replies; 13+ messages in thread
From: Lee Schermerhorn @ 2008-06-13 20:21 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, Rik van Riel, linux-mm
PATCH collect lru meminfo statistics from correct offset
Against: 2.6.26-rc5-mm3
Incremental fix to: vmscan-split-lru-lists-into-anon-file-sets.patch
Offset 'lru' by 'NR_LRU_BASE' to obtain global page state for
lru list 'lru'.
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
fs/proc/proc_misc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6.26-rc5-mm3/fs/proc/proc_misc.c
===================================================================
--- linux-2.6.26-rc5-mm3.orig/fs/proc/proc_misc.c 2008-06-13 15:16:17.000000000 -0400
+++ linux-2.6.26-rc5-mm3/fs/proc/proc_misc.c 2008-06-13 15:44:24.000000000 -0400
@@ -158,7 +158,7 @@ static int meminfo_read_proc(char *page,
get_vmalloc_info(&vmi);
for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
- pages[lru] = global_page_state(lru);
+ pages[lru] = global_page_state(NR_LRU_BASE + lru);
/*
* Tagged format, for easy grepping and expansion.
end of thread, other threads:[~2008-06-13 20:21 UTC | newest]
Thread overview: 13+ messages
[not found] <20080611184214.605110868@redhat.com>
2008-06-11 18:42 ` [PATCH -mm 12/24] Unevictable LRU Infrastructure Rik van Riel
2008-06-11 18:42 ` [PATCH -mm 14/24] Ramfs and Ram Disk pages are unevictable Rik van Riel
2008-06-12 0:54 ` Nick Piggin
2008-06-12 17:29 ` Rik van Riel
2008-06-12 17:37 ` Nick Piggin
2008-06-12 17:50 ` Rik van Riel
2008-06-12 17:57 ` Nick Piggin
2008-06-11 18:42 ` [PATCH -mm 16/24] mlock: mlocked " Rik van Riel, Nick Piggin
2008-06-11 18:42 ` [PATCH -mm 18/24] mmap: handle mlocked pages during map, remap, unmap Rik van Riel
2008-06-11 18:42 ` [PATCH -mm 20/24] swap: cull unevictable pages in fault path Rik van Riel, Lee Schermerhorn
2008-06-11 18:42 ` [PATCH -mm 22/24] vmscan: unevictable LRU scan sysctl Rik van Riel, Lee Schermerhorn
2008-06-11 18:42 ` [PATCH -mm 24/24] doc: unevictable LRU and mlocked pages documentation Rik van Riel
[not found] ` <20080611184339.159161465@redhat.com>
[not found] ` <4851C1CC.7070607@ct.jp.nec.com>
[not found] ` <20080613134827.5dbac5ac@cuia.bos.redhat.com>
2008-06-13 20:21 ` [PATCH] collect lru meminfo statistics from correct offset Lee Schermerhorn