* [PATCH] memory unplug v7 - introduction
@ 2007-07-06 9:19 KAMEZAWA Hiroyuki
2007-07-06 9:23 ` [PATCH] memory unplug v7 - migration by kernel KAMEZAWA Hiroyuki
` (6 more replies)
0 siblings, 7 replies; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 9:19 UTC (permalink / raw)
To: LKML; +Cc: linux-mm, Andrew Morton, Christoph Lameter, mel
Hi,
This is the memory unplug base patch set against 2.6.22-rc6-mm1,
called v7. (I skipped posting v6 because of my internal patch handling.)
Andrew, could you give me your advice toward the next step (merge)?
Changelog V5->V7
- reflected all comments on V5 and the following threads.
The patch series is as follows.
(1)migrate_nocontext.patch
(2)lru_page_race_fix.patch
(3)walk_mem_resource.patch
(4)page_isolation_base_v7.patch
(5)page_removal_base_v7.patch
(6)ia64_page_hotremove.patch
I think patches (1), (2), and (3) have enough quality to be merged without
regression.
Patches (1) and (2) are for "page migration by the kernel".
Patch (3) is a cleanup of memory hotplug.
Patches (4) and (5) depend on Mel's page grouping.
Patch (5) will need more work on enhancements for NUMA and stable removal.
(In the current code, a user may have to retry offlining if pages are *very* busy.)
But it works well in my tests.
How to use
- Use the kernelcore=XXX boot option to create ZONE_MOVABLE.
  Memory unplug itself can work without ZONE_MOVABLE (if you allow retrying..),
  but it is better to use kernelcore= if your section size is big.
- After boot, execute the following:
  # echo "offline" > /sys/devices/system/memory/memoryX/state
- You can push offlined memory back online with:
  # echo "online" > /sys/devices/system/memory/memoryX/state
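Because offlining can fail transiently while pages are busy, a small wrapper
around the sysfs interface above can retry. This is an illustrative sketch,
not part of the patch set; the sysfs layout is taken from the commands above,
and SYSFS_MEM is parameterized only so the logic is testable.

```shell
#!/bin/sh
# Illustrative helper around the sysfs memory-section interface (not part
# of the patch set). SYSFS_MEM is overridable only for testing.
SYSFS_MEM="${SYSFS_MEM:-/sys/devices/system/memory}"

# set_mem_state <section-index> <online|offline>
# Writes the requested state and prints the state read back.
set_mem_state() {
    dir="$SYSFS_MEM/memory$1"
    [ -d "$dir" ] || { echo "no such section: $1" >&2; return 1; }
    echo "$2" > "$dir/state"
    cat "$dir/state"
}

# offline_retry <section-index> <attempts>
# Offlining may need retries when pages in the section are very busy.
offline_retry() {
    n=0
    while [ "$n" -lt "$2" ]; do
        state=$(set_mem_state "$1" offline) || return 1
        [ "$state" = "offline" ] && return 0
        n=$((n + 1))
        sleep 1
    done
    return 1
}
```

For example, `offline_retry 3 5` tries to offline memory3 up to five times.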
TODO
- more tests.
- Now, there is no check around ZONE_MOVABLE and bootmem.
  I hope bootmem can handle kernelcore=....
  We have some ideas about this.
- add better logic to allocate memory for migration (for NUMA).
  The problem here is that we have no way to remember *how* a page was
  allocated. cpuset and policy info live in "task_struct", which cannot be
  reached from a page struct... Maybe what we can do is (1) add more
  information to the page, (2) use just a simple way, or (3) some magical
  technique...
- interface code for other archs. Please request it if you want one.
- remove the memmap after memory unplug (after sparsemem-vmemmap inclusion).
- node hotplug support.
Thanks,
-Kame
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org
* [PATCH] memory unplug v7 - migration by kernel
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
@ 2007-07-06 9:23 ` KAMEZAWA Hiroyuki
2007-07-06 18:11 ` Christoph Lameter
2007-07-06 9:24 ` [PATCH] memory unplug v7 [2/6] - isolate_lru_page fix KAMEZAWA Hiroyuki
` (5 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 9:23 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, Christoph Lameter, mel
Page migration by kernel v7.
Changelog V6->V7
- moved rcu_read_lock()/rcu_read_unlock() to the correct place.
- fixed text.
Changelog V5->V6
- removed dummy_vma and used rcu_read_lock() instead.
- removed the page_mapped() check and used a !page->mapping check instead.
Usually, migrate_pages(page,,) is called while holding mm->sem via a system
call. (mm here is the mm_struct which maps the migration target page.)
This semaphore helps avoid some race conditions.
But if we want to migrate a page from kernel code, we have to avoid some
races ourselves. This patch adds checks for the following race conditions.
1. A page whose page->mapping==NULL can be a target of migration. Then, we
   have to check page->mapping before calling try_to_unmap().
2. anon_vma can be freed while the page is unmapped, but page->mapping
   remains as it was. We drop page->mapcount to 0, and then we cannot trust
   page->mapping. So, use rcu_read_lock() to prevent the anon_vma pointed to
   by page->mapping from being freed during migration.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/migrate.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
Index: linux-2.6.22-rc6-mm1/mm/migrate.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/migrate.c
+++ linux-2.6.22-rc6-mm1/mm/migrate.c
@@ -632,18 +632,35 @@ static int unmap_and_move(new_page_t get
goto unlock;
wait_on_page_writeback(page);
}
-
/*
- * Establish migration ptes or remove ptes
+ * By try_to_unmap(), page->mapcount goes down to 0 here. In this case,
+ * we cannot notice that the anon_vma is freed while we migrate a page.
+ * This rcu_read_lock() delays freeing the anon_vma pointer until the end
+ * of migration. File cache pages are no problem because of page_lock().
*/
+ rcu_read_lock();
+ /*
+ * Corner case handling:
+ * When a new swap-cache page is read in, it is linked to the LRU
+ * and treated as swapcache, but it has no rmap yet.
+ * Calling try_to_unmap() against a page with page->mapping==NULL is
+ * a BUG. So handle it here.
+ */
+ if (!page->mapping)
+ goto rcu_unlock;
+ /* Establish migration ptes or remove ptes */
try_to_unmap(page, 1);
+
if (!page_mapped(page))
rc = move_to_new_page(newpage, page);
if (rc)
remove_migration_ptes(page, page);
+rcu_unlock:
+ rcu_read_unlock();
unlock:
+
unlock_page(page);
if (rc != -EAGAIN) {
--
* [PATCH] memory unplug v7 [2/6] - isolate_lru_page fix
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
2007-07-06 9:23 ` [PATCH] memory unplug v7 - migration by kernel KAMEZAWA Hiroyuki
@ 2007-07-06 9:24 ` KAMEZAWA Hiroyuki
2007-07-06 18:11 ` Christoph Lameter
2007-07-06 9:25 ` [PATCH] memory unplug v7 [3/6] memory hotplug cleanup KAMEZAWA Hiroyuki
` (4 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 9:24 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, Christoph Lameter, mel
release_pages() in mm/swap.c drops page_count() to 0 before removing the
PageLRU flag...
This means isolate_lru_page() can see a page with PageLRU() set and
page_count(page)==0. This is a BUG: get_page() would be called against a
page whose count is 0.
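The fix relies on get_page_unless_zero(), which takes a reference only when
the count has not already dropped to zero. A minimal userspace sketch of that
semantic, using C11 atomics (an illustration of the idea, not the kernel's
implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace sketch of the get_page_unless_zero() idea: take a reference
 * only if the count is not already zero, so a page whose count was
 * dropped to 0 by release_pages() is never resurrected by get_page(). */
static bool get_ref_unless_zero(atomic_int *count)
{
    int old = atomic_load(count);
    while (old != 0) {
        /* On failure the CAS reloads 'old', so we simply retry. */
        if (atomic_compare_exchange_weak(count, &old, old + 1))
            return true;   /* reference taken */
    }
    return false;          /* count already zero: page is being freed */
}
```

isolate_lru_page() can then treat a false return the same way as !PageLRU():
skip the page.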
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
mm/migrate.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
Index: linux-2.6.22-rc6-mm1/mm/migrate.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/migrate.c
+++ linux-2.6.22-rc6-mm1/mm/migrate.c
@@ -49,9 +49,8 @@ int isolate_lru_page(struct page *page,
struct zone *zone = page_zone(page);
spin_lock_irq(&zone->lru_lock);
- if (PageLRU(page)) {
+ if (PageLRU(page) && get_page_unless_zero(page)) {
ret = 0;
- get_page(page);
ClearPageLRU(page);
if (PageActive(page))
del_page_from_active_list(zone, page);
--
* [PATCH] memory unplug v7 [3/6] memory hotplug cleanup
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
2007-07-06 9:23 ` [PATCH] memory unplug v7 - migration by kernel KAMEZAWA Hiroyuki
2007-07-06 9:24 ` [PATCH] memory unplug v7 [2/6] - isolate_lru_page fix KAMEZAWA Hiroyuki
@ 2007-07-06 9:25 ` KAMEZAWA Hiroyuki
2007-07-06 9:26 ` [PATCH] memory unplug v7 [4/6] - page isolation KAMEZAWA Hiroyuki
` (3 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 9:25 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, Christoph Lameter, mel
A cleanup patch for the "scan a memory resource range [start, end)" operation.
Currently, find_next_system_ram() is used in memory hotplug, but this
interface is not easy to use and the calling code is complicated.
This patch adds a walk_memory_resource(start, len, arg, func) function.
The function 'func' is called once per valid memory resource range in
[start, start+len).
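The callback shape can be illustrated in plain userspace C; walk_ranges()
below is a stand-in for walk_memory_resource(), with the kernel's resource
tree replaced by a toy range table (all names here are illustrative, not from
the patch):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a System RAM resource entry. */
struct ram_range { unsigned long start_pfn, nr_pages; };

/* Visit each valid [pfn, pfn+len) range overlapping the request and
 * invoke func once per range, stopping early on a nonzero return --
 * the same contract as walk_memory_resource(). */
static int walk_ranges(const struct ram_range *tbl, size_t n,
                       unsigned long start_pfn, unsigned long nr_pages,
                       void *arg,
                       int (*func)(unsigned long, unsigned long, void *))
{
    unsigned long end = start_pfn + nr_pages;
    int ret = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned long lo = tbl[i].start_pfn;
        unsigned long hi = lo + tbl[i].nr_pages;
        if (hi <= start_pfn || lo >= end)
            continue;               /* no overlap with the request */
        if (lo < start_pfn) lo = start_pfn;
        if (hi > end) hi = end;
        ret = func(lo, hi - lo, arg);
        if (ret)
            break;                  /* callback asked to stop */
    }
    return ret;
}

/* Mirrors the accumulation done by online_pages_range(). */
static int count_pages(unsigned long pfn, unsigned long len, void *arg)
{
    (void)pfn;
    *(unsigned long *)arg += len;
    return 0;
}
```

With this shape, online_pages() only supplies the per-range callback and no
longer open-codes the resource iteration.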
ChangeLog V5->V7:
- fixed cast and braces.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
include/linux/ioport.h | 3 --
include/linux/memory_hotplug.h | 8 +++++++
kernel/resource.c | 26 ++++++++++++++++++++++-
mm/memory_hotplug.c | 45 +++++++++++++++++------------------------
4 files changed, 52 insertions(+), 30 deletions(-)
Index: linux-2.6.22-rc6-mm1/kernel/resource.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/kernel/resource.c
+++ linux-2.6.22-rc6-mm1/kernel/resource.c
@@ -244,7 +244,7 @@ EXPORT_SYMBOL(release_resource);
* the caller must specify res->start, res->end, res->flags.
* If found, returns 0, res is overwritten, if not found, returns -1.
*/
-int find_next_system_ram(struct resource *res)
+static int find_next_system_ram(struct resource *res)
{
resource_size_t start, end;
struct resource *p;
@@ -277,6 +277,30 @@ int find_next_system_ram(struct resource
res->end = p->end;
return 0;
}
+int
+walk_memory_resource(unsigned long start_pfn, unsigned long nr_pages, void *arg,
+ int (*func)(unsigned long, unsigned long, void *))
+{
+ struct resource res;
+ unsigned long pfn, len;
+ u64 orig_end;
+ int ret;
+ res.start = (u64) start_pfn << PAGE_SHIFT;
+ res.end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+ res.flags = IORESOURCE_MEM;
+ orig_end = res.end;
+ while ((res.start < res.end) && (find_next_system_ram(&res) >= 0)) {
+ pfn = (unsigned long)(res.start >> PAGE_SHIFT);
+ len = (unsigned long)((res.end + 1 - res.start) >> PAGE_SHIFT);
+ ret = (*func)(pfn, len, arg);
+ if (ret)
+ break;
+ res.start = res.end + 1;
+ res.end = orig_end;
+ }
+ return ret;
+}
+
#endif
/*
Index: linux-2.6.22-rc6-mm1/include/linux/ioport.h
===================================================================
--- linux-2.6.22-rc6-mm1.orig/include/linux/ioport.h
+++ linux-2.6.22-rc6-mm1/include/linux/ioport.h
@@ -110,9 +110,6 @@ extern int allocate_resource(struct reso
int adjust_resource(struct resource *res, resource_size_t start,
resource_size_t size);
-/* get registered SYSTEM_RAM resources in specified area */
-extern int find_next_system_ram(struct resource *res);
-
/* Convenience shorthand with allocation */
#define request_region(start,n,name) __request_region(&ioport_resource, (start), (n), (name))
#define request_mem_region(start,n,name) __request_region(&iomem_resource, (start), (n), (name))
Index: linux-2.6.22-rc6-mm1/include/linux/memory_hotplug.h
===================================================================
--- linux-2.6.22-rc6-mm1.orig/include/linux/memory_hotplug.h
+++ linux-2.6.22-rc6-mm1/include/linux/memory_hotplug.h
@@ -64,6 +64,14 @@ extern int online_pages(unsigned long, u
extern int __add_pages(struct zone *zone, unsigned long start_pfn,
unsigned long nr_pages);
+/*
+ * Walk through all memory which is registered as a resource.
+ * func is called with (start_pfn, nr_pages, private_arg_pointer).
+ */
+extern int walk_memory_resource(unsigned long start_pfn,
+ unsigned long nr_pages, void *arg,
+ int (*func)(unsigned long, unsigned long, void *));
+
#ifdef CONFIG_NUMA
extern int memory_add_physaddr_to_nid(u64 start);
#else
Index: linux-2.6.22-rc6-mm1/mm/memory_hotplug.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/memory_hotplug.c
+++ linux-2.6.22-rc6-mm1/mm/memory_hotplug.c
@@ -161,14 +161,27 @@ static void grow_pgdat_span(struct pglis
pgdat->node_start_pfn;
}
-int online_pages(unsigned long pfn, unsigned long nr_pages)
+static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
+ void *arg)
{
unsigned long i;
+ unsigned long onlined_pages = *(unsigned long *)arg;
+ struct page *page;
+ if (PageReserved(pfn_to_page(start_pfn)))
+ for (i = 0; i < nr_pages; i++) {
+ page = pfn_to_page(start_pfn + i);
+ online_page(page);
+ onlined_pages++;
+ }
+ *(unsigned long *)arg = onlined_pages;
+ return 0;
+}
+
+
+int online_pages(unsigned long pfn, unsigned long nr_pages)
+{
unsigned long flags;
unsigned long onlined_pages = 0;
- struct resource res;
- u64 section_end;
- unsigned long start_pfn;
struct zone *zone;
int need_zonelists_rebuild = 0;
@@ -191,28 +204,8 @@ int online_pages(unsigned long pfn, unsi
if (!populated_zone(zone))
need_zonelists_rebuild = 1;
- res.start = (u64)pfn << PAGE_SHIFT;
- res.end = res.start + ((u64)nr_pages << PAGE_SHIFT) - 1;
- res.flags = IORESOURCE_MEM; /* we just need system ram */
- section_end = res.end;
-
- while ((res.start < res.end) && (find_next_system_ram(&res) >= 0)) {
- start_pfn = (unsigned long)(res.start >> PAGE_SHIFT);
- nr_pages = (unsigned long)
- ((res.end + 1 - res.start) >> PAGE_SHIFT);
-
- if (PageReserved(pfn_to_page(start_pfn))) {
- /* this region's page is not onlined now */
- for (i = 0; i < nr_pages; i++) {
- struct page *page = pfn_to_page(start_pfn + i);
- online_page(page);
- onlined_pages++;
- }
- }
-
- res.start = res.end + 1;
- res.end = section_end;
- }
+ walk_memory_resource(pfn, nr_pages, &onlined_pages,
+ online_pages_range);
zone->present_pages += onlined_pages;
zone->zone_pgdat->node_present_pages += onlined_pages;
--
* [PATCH] memory unplug v7 [4/6] - page isolation
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
` (2 preceding siblings ...)
2007-07-06 9:25 ` [PATCH] memory unplug v7 [3/6] memory hotplug cleanup KAMEZAWA Hiroyuki
@ 2007-07-06 9:26 ` KAMEZAWA Hiroyuki
2007-07-06 22:28 ` Andrew Morton
2007-07-06 9:27 ` [PATCH] memory unplug v7 [5/6] - page offline KAMEZAWA Hiroyuki
` (2 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 9:26 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, Christoph Lameter, mel
Implement a generic chunk-of-pages isolation method by using the page
grouping ops.
This patch adds MIGRATE_ISOLATE to MIGRATE_TYPES. As a result,
- MIGRATE_TYPES increases.
- the bitmap for migratetype is enlarged.
Pages of migratetype MIGRATE_ISOLATE will not be allocated even if they are
free.
By this, you can isolate *freed* pages from users. How to free the pages is
not a purpose of this patch; you may use the reclaim and migrate code to
free them.
If start_isolate_page_range(start, end) is called,
- the migratetype of the range turns into MIGRATE_ISOLATE if its type is
  MIGRATE_MOVABLE. (*) this check can be updated when other memory
  reclaiming work makes progress.
- MIGRATE_ISOLATE is not on the migratetype fallback list.
- all free pages and will-be-freed pages are isolated.
To check whether all pages in the range are isolated, use
test_pages_isolated(). To cancel isolation, use undo_isolate_page_range().
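A quick check of the bitmap sizing above: five migrate types need three bits
per pageblock. A sketch, assuming the types are numbered densely from 0 as in
the patched mmzone.h (MIGRATE_UNMOVABLE == 0 is an assumption from context):

```c
#include <assert.h>

/* Migrate types as in the patched mmzone.h; MIGRATE_UNMOVABLE == 0 is
 * an assumption based on the surrounding context. */
enum migrate_type {
    MIGRATE_UNMOVABLE, MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,
    MIGRATE_RESERVE, MIGRATE_ISOLATE, MIGRATE_TYPES
};

/* Smallest number of bits that can encode nr_values distinct values. */
static int bits_for(unsigned int nr_values)
{
    int bits = 0;
    while ((1u << bits) < nr_values)
        bits++;
    return bits;
}
```

This is why the pageblock-flags.h hunk grows PB_migrate from 2 bits to 3.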
Changes V6 -> V7
- removed unnecessary #ifdef
There are HOLES_IN_ZONE handling codes... I'd be glad if we could remove them.
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
include/linux/mmzone.h | 3
include/linux/page-isolation.h | 37 ++++++++++
include/linux/pageblock-flags.h | 2
mm/Makefile | 2
mm/page_alloc.c | 44 ++++++++++++
mm/page_isolation.c | 138 ++++++++++++++++++++++++++++++++++++++++
6 files changed, 223 insertions(+), 3 deletions(-)
Index: linux-2.6.22-rc6-mm1/include/linux/mmzone.h
===================================================================
--- linux-2.6.22-rc6-mm1.orig/include/linux/mmzone.h
+++ linux-2.6.22-rc6-mm1/include/linux/mmzone.h
@@ -39,7 +39,8 @@ extern int page_group_by_mobility_disabl
#define MIGRATE_RECLAIMABLE 1
#define MIGRATE_MOVABLE 2
#define MIGRATE_RESERVE 3
-#define MIGRATE_TYPES 4
+#define MIGRATE_ISOLATE 4 /* can't allocate from here */
+#define MIGRATE_TYPES 5
#define for_each_migratetype_order(order, type) \
for (order = 0; order < MAX_ORDER; order++) \
Index: linux-2.6.22-rc6-mm1/include/linux/pageblock-flags.h
===================================================================
--- linux-2.6.22-rc6-mm1.orig/include/linux/pageblock-flags.h
+++ linux-2.6.22-rc6-mm1/include/linux/pageblock-flags.h
@@ -31,7 +31,7 @@
/* Bit indices that affect a whole block of pages */
enum pageblock_bits {
- PB_range(PB_migrate, 2), /* 2 bits required for migrate types */
+ PB_range(PB_migrate, 3), /* 3 bits required for migrate types */
NR_PAGEBLOCK_BITS
};
Index: linux-2.6.22-rc6-mm1/mm/page_alloc.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/page_alloc.c
+++ linux-2.6.22-rc6-mm1/mm/page_alloc.c
@@ -41,6 +41,7 @@
#include <linux/pfn.h>
#include <linux/backing-dev.h>
#include <linux/fault-inject.h>
+#include <linux/page-isolation.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -4412,3 +4413,46 @@ void set_pageblock_flags_group(struct pa
else
__clear_bit(bitidx + start_bitidx, bitmap);
}
+
+/*
+ * These are designed as sub functions... please see page_isolation.c too.
+ * They set/clear a pageblock's type to/from ISOLATE.
+ * The page allocator never allocates memory from an ISOLATE block.
+ */
+
+int set_migratetype_isolate(struct page *page)
+{
+ struct zone *zone;
+ unsigned long flags;
+ int ret = -EBUSY;
+
+ zone = page_zone(page);
+ spin_lock_irqsave(&zone->lock, flags);
+ /*
+ * In future, more migrate types will be able to be isolation target.
+ */
+ if (get_pageblock_migratetype(page) != MIGRATE_MOVABLE)
+ goto out;
+ set_pageblock_migratetype(page, MIGRATE_ISOLATE);
+ move_freepages_block(zone, page, MIGRATE_ISOLATE);
+ ret = 0;
+out:
+ spin_unlock_irqrestore(&zone->lock, flags);
+ if (!ret)
+ drain_all_local_pages();
+ return ret;
+}
+
+void unset_migratetype_isolate(struct page *page)
+{
+ struct zone *zone;
+ unsigned long flags;
+ zone = page_zone(page);
+ spin_lock_irqsave(&zone->lock, flags);
+ if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
+ goto out;
+ set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+ move_freepages_block(zone, page, MIGRATE_MOVABLE);
+out:
+ spin_unlock_irqrestore(&zone->lock, flags);
+}
Index: linux-2.6.22-rc6-mm1/mm/page_isolation.c
===================================================================
--- /dev/null
+++ linux-2.6.22-rc6-mm1/mm/page_isolation.c
@@ -0,0 +1,138 @@
+/*
+ * linux/mm/page_isolation.c
+ */
+
+#include <linux/stddef.h>
+#include <linux/mm.h>
+#include <linux/page-isolation.h>
+#include <linux/pageblock-flags.h>
+#include "internal.h"
+
+static inline struct page *
+__first_valid_page(unsigned long pfn, unsigned long nr_pages)
+{
+ int i;
+ for (i = 0; i < nr_pages; i++)
+ if (pfn_valid_within(pfn + i))
+ break;
+ if (unlikely(i == nr_pages))
+ return NULL;
+ return pfn_to_page(pfn + i);
+}
+
+/*
+ * start_isolate_page_range() -- make page-allocation-type of range of pages
+ * to be MIGRATE_ISOLATE.
+ * @start_pfn: The lower PFN of the range to be isolated.
+ * @end_pfn: The upper PFN of the range to be isolated.
+ *
+ * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
+ * the range will never be allocated. Any free pages and pages freed in the
+ * future will not be allocated again.
+ *
+ * start_pfn/end_pfn must be aligned to pageblock_order.
+ * Returns 0 on success and -EBUSY if any part of range cannot be isolated.
+ */
+int
+start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
+{
+ unsigned long pfn;
+ unsigned long undo_pfn;
+ struct page *page;
+
+ BUG_ON((start_pfn) & (pageblock_nr_pages - 1));
+ BUG_ON((end_pfn) & (pageblock_nr_pages - 1));
+
+ for (pfn = start_pfn;
+ pfn < end_pfn;
+ pfn += pageblock_nr_pages) {
+ page = __first_valid_page(pfn, pageblock_nr_pages);
+ if (page && set_migratetype_isolate(page)) {
+ undo_pfn = pfn;
+ goto undo;
+ }
+ }
+ return 0;
+undo:
+ for (pfn = start_pfn;
+ pfn <= undo_pfn;
+ pfn += pageblock_nr_pages)
+ unset_migratetype_isolate(pfn_to_page(pfn));
+
+ return -EBUSY;
+}
+
+/*
+ * Make isolated pages available again.
+ */
+int
+undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
+{
+ unsigned long pfn;
+ struct page *page;
+ BUG_ON((start_pfn) & (pageblock_nr_pages - 1));
+ BUG_ON((end_pfn) & (pageblock_nr_pages - 1));
+ for (pfn = start_pfn;
+ pfn < end_pfn;
+ pfn += pageblock_nr_pages) {
+ page = __first_valid_page(pfn, pageblock_nr_pages);
+ if (!page || get_pageblock_flags(page) != MIGRATE_ISOLATE)
+ continue;
+ unset_migratetype_isolate(page);
+ }
+ return 0;
+}
+/*
+ * Test whether all pages in the range are free (i.e. isolated) or not.
+ * All pages in [start_pfn...end_pfn) must be in the same zone.
+ * zone->lock must be held before calling this.
+ *
+ * Returns 1 if all pages in the range are isolated.
+ */
+static int
+__test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn)
+{
+ struct page *page;
+
+ while (pfn < end_pfn) {
+ if (!pfn_valid_within(pfn)) {
+ pfn++;
+ continue;
+ }
+ page = pfn_to_page(pfn);
+ if (PageBuddy(page))
+ pfn += 1 << page_order(page);
+ else if (page_count(page) == 0 &&
+ page_private(page) == MIGRATE_ISOLATE)
+ pfn += 1;
+ else
+ break;
+ }
+ if (pfn < end_pfn)
+ return 0;
+ return 1;
+}
+
+int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn)
+{
+ unsigned long pfn;
+ struct page *page;
+
+ pfn = start_pfn;
+ /*
+ * Note: pageblock_nr_pages != MAX_ORDER_NR_PAGES, so chunks of free
+ * pages are not necessarily aligned to pageblock_nr_pages.
+ * So we just check the pageblock type first.
+ */
+ for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
+ page = __first_valid_page(pfn, pageblock_nr_pages);
+ if (page && get_pageblock_flags(page) != MIGRATE_ISOLATE)
+ break;
+ }
+ if (pfn < end_pfn)
+ return -EBUSY;
+ /* Check all pages are free or Marked as ISOLATED */
+ if (__test_page_isolated_in_pageblock(start_pfn, end_pfn))
+ return 0;
+ return -EBUSY;
+}
Index: linux-2.6.22-rc6-mm1/include/linux/page-isolation.h
===================================================================
--- /dev/null
+++ linux-2.6.22-rc6-mm1/include/linux/page-isolation.h
@@ -0,0 +1,37 @@
+#ifndef __LINUX_PAGEISOLATION_H
+#define __LINUX_PAGEISOLATION_H
+
+/*
+ * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
+ * If the specified range includes migrate types other than MOVABLE,
+ * this will fail with -EBUSY.
+ *
+ * To finally isolate all pages in the range, the caller has to free
+ * all pages in the range. test_pages_isolated() can be used to test
+ * it.
+ */
+extern int
+start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
+
+/*
+ * Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE.
+ * target range is [start_pfn, end_pfn)
+ */
+extern int
+undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
+
+/*
+ * Test whether all pages in [start_pfn, end_pfn) are isolated or not.
+ */
+extern int
+test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn);
+
+/*
+ * Internal functions. These change a pageblock's migrate type.
+ * Please use start_isolate_page_range()/undo_isolate_page_range() instead.
+ */
+extern int set_migratetype_isolate(struct page *page);
+extern void unset_migratetype_isolate(struct page *page);
+
+
+#endif
Index: linux-2.6.22-rc6-mm1/mm/Makefile
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/Makefile
+++ linux-2.6.22-rc6-mm1/mm/Makefile
@@ -11,7 +11,7 @@ obj-y := bootmem.o filemap.o mempool.o
page_alloc.o page-writeback.o pdflush.o \
readahead.o swap.o truncate.o vmscan.o \
prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
- $(mmu-y)
+ page_isolation.o $(mmu-y)
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o thrash.o
--
* [PATCH] memory unplug v7 [5/6] - page offline
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
` (3 preceding siblings ...)
2007-07-06 9:26 ` [PATCH] memory unplug v7 [4/6] - page isolation KAMEZAWA Hiroyuki
@ 2007-07-06 9:27 ` KAMEZAWA Hiroyuki
2007-07-06 9:28 ` [PATCH] memory unplug v7 [6/6] - ia64 interface KAMEZAWA Hiroyuki
2007-07-06 22:34 ` [PATCH] memory unplug v7 - introduction Andrew Morton
6 siblings, 0 replies; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 9:27 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, Christoph Lameter, mel
Changes V5->V6
- style fixes.
Logic:
- Set all pages in [start, end) to the isolated migrate-type.
  By this, all free pages in the range become not-for-use.
- Migrate all LRU pages in the range.
- Test whether the refcnt of every page in the range is zero.
Todo:
- allocate the migration destination page from a better area.
- confirm that a page with page_count(page)==0 && PageReserved(page) is
  safe to be freed... (I don't like this kind of page, but...)
- find out pages which cannot be migrated.
- more running tests.
- use reclaim for unplugging areas of other memory types.
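The isolate/migrate/retry loop in the Logic section can be sketched as a
small userspace simulation (all names and the migrate_step() model are
illustrative; the real offline_pages() also honors a jiffies timeout and
pending signals):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a range being offlined: lru_pages counts pages still on
 * the LRU that must be migrated away before the range can be freed. */
struct sim { int lru_pages; };

/* Stand-in for one do_migrate_range() pass, which moves at most
 * NR_OFFLINE_AT_ONCE_PAGES pages; returns pages still on the LRU. */
static int migrate_step(struct sim *s)
{
    if (s->lru_pages > 0)
        s->lru_pages -= 64;         /* one bounded migration pass */
    if (s->lru_pages < 0)
        s->lru_pages = 0;
    return s->lru_pages;
}

/* Mirrors the repeat loop of offline_pages(): keep migrating until the
 * range is clean or the retry budget runs out. */
static bool try_offline(struct sim *s, int retry_max)
{
    while (retry_max-- > 0) {
        if (migrate_step(s) == 0)
            return true;            /* range is clean; mark pages Reserved */
    }
    return false;                   /* caller undoes the isolation */
}
```

On failure the real code takes the failed_removal path and calls
undo_isolate_page_range() to push the range back to the free area.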
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
---
include/linux/kernel.h | 1
include/linux/memory_hotplug.h | 5
mm/Kconfig | 5
mm/memory_hotplug.c | 254 +++++++++++++++++++++++++++++++++++++++++
mm/page_alloc.c | 47 +++++++
5 files changed, 311 insertions(+), 1 deletion(-)
Index: linux-2.6.22-rc6-mm1/mm/Kconfig
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/Kconfig
+++ linux-2.6.22-rc6-mm1/mm/Kconfig
@@ -126,6 +126,11 @@ config MEMORY_HOTPLUG_SPARSE
def_bool y
depends on SPARSEMEM && MEMORY_HOTPLUG
+config MEMORY_HOTREMOVE
+ bool "Allow for memory hot remove"
+ depends on MEMORY_HOTPLUG
+ depends on MIGRATION
+
# Heavily threaded applications may benefit from splitting the mm-wide
# page_table_lock, so that faults on different parts of the user address
# space can be handled with less contention: split it at this NR_CPUS.
Index: linux-2.6.22-rc6-mm1/mm/memory_hotplug.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/memory_hotplug.c
+++ linux-2.6.22-rc6-mm1/mm/memory_hotplug.c
@@ -23,6 +23,9 @@
#include <linux/vmalloc.h>
#include <linux/ioport.h>
#include <linux/cpuset.h>
+#include <linux/delay.h>
+#include <linux/migrate.h>
+#include <linux/page-isolation.h>
#include <asm/tlbflush.h>
@@ -301,3 +304,254 @@ error:
return ret;
}
EXPORT_SYMBOL_GPL(add_memory);
+
+#ifdef CONFIG_MEMORY_HOTREMOVE
+/*
+ * Confirm that all pages in the range [start, end) belong to the same zone.
+ */
+static int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
+{
+ unsigned long pfn;
+ struct zone *zone = NULL;
+ struct page *page;
+ int i;
+ for (pfn = start_pfn;
+ pfn < end_pfn;
+ pfn += MAX_ORDER_NR_PAGES) {
+ i = 0;
+ /* This is just a CONFIG_HOLES_IN_ZONE check.*/
+ while ((i < MAX_ORDER_NR_PAGES) && !pfn_valid_within(pfn + i))
+ i++;
+ if (i == MAX_ORDER_NR_PAGES)
+ continue;
+ page = pfn_to_page(pfn + i);
+ if (zone && page_zone(page) != zone)
+ return 0;
+ zone = page_zone(page);
+ }
+ return 1;
+}
+
+/*
+ * Scanning pfns is much easier than scanning the LRU list.
+ * Scan pfns from start to end and return the first pfn with an LRU page.
+ */
+int scan_lru_pages(unsigned long start, unsigned long end)
+{
+ unsigned long pfn;
+ struct page *page;
+ for (pfn = start; pfn < end; pfn++) {
+ if (pfn_valid(pfn)) {
+ page = pfn_to_page(pfn);
+ if (PageLRU(page))
+ return pfn;
+ }
+ }
+ return 0;
+}
+
+static struct page *
+hotremove_migrate_alloc(struct page *page,
+ unsigned long private,
+ int **x)
+{
+ /* This should be improoooooved!! */
+ return alloc_page(GFP_HIGHUSER_PAGECACHE);
+}
+
+
+#define NR_OFFLINE_AT_ONCE_PAGES (256)
+static int
+do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+{
+ unsigned long pfn;
+ struct page *page;
+ int move_pages = NR_OFFLINE_AT_ONCE_PAGES;
+ int not_managed = 0;
+ int ret = 0;
+ LIST_HEAD(source);
+
+ for (pfn = start_pfn; pfn < end_pfn && move_pages > 0; pfn++) {
+ if (!pfn_valid(pfn))
+ continue;
+ page = pfn_to_page(pfn);
+ if (!page_count(page))
+ continue;
+ /*
+ * We can skip free pages. And we can only deal with pages on
+ * LRU.
+ */
+ ret = isolate_lru_page(page, &source);
+ if (!ret) { /* Success */
+ move_pages--;
+ } else {
+ /* Because we don't hold a big zone->lock, we should
+ check this again here. */
+ if (page_count(page))
+ not_managed++;
+#ifdef CONFIG_DEBUG_VM
+ printk(KERN_INFO "removing from LRU failed"
+ " %lx/%d/%lx\n",
+ pfn, page_count(page), page->flags);
+#endif
+ }
+ }
+ ret = -EBUSY;
+ if (not_managed) {
+ if (!list_empty(&source))
+ putback_lru_pages(&source);
+ goto out;
+ }
+ ret = 0;
+ if (list_empty(&source))
+ goto out;
+ /* this function returns # of failed pages */
+ ret = migrate_pages(&source, hotremove_migrate_alloc, 0);
+
+out:
+ return ret;
+}
+
+/*
+ * remove from free_area[] and mark all as Reserved.
+ */
+static int
+offline_isolated_pages_cb(unsigned long start, unsigned long nr_pages,
+ void *data)
+{
+ __offline_isolated_pages(start, start + nr_pages);
+ return 0;
+}
+
+static void
+offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
+{
+ walk_memory_resource(start_pfn, end_pfn - start_pfn, NULL,
+ offline_isolated_pages_cb);
+}
+
+/*
+ * Check that all pages in the range, recorded as a memory resource, are isolated.
+ */
+static int
+check_pages_isolated_cb(unsigned long start_pfn, unsigned long nr_pages,
+ void *data)
+{
+ int ret;
+ long offlined = *(long *)data;
+ ret = test_pages_isolated(start_pfn, start_pfn + nr_pages);
+ offlined = nr_pages;
+ if (!ret)
+ *(long *)data += offlined;
+ return ret;
+}
+
+static long
+check_pages_isolated(unsigned long start_pfn, unsigned long end_pfn)
+{
+ long offlined = 0;
+ int ret;
+
+ ret = walk_memory_resource(start_pfn, end_pfn - start_pfn, &offlined,
+ check_pages_isolated_cb);
+ if (ret < 0)
+ offlined = (long)ret;
+ return offlined;
+}
+
+extern void drain_all_local_pages(void);
+
+int offline_pages(unsigned long start_pfn,
+ unsigned long end_pfn, unsigned long timeout)
+{
+ unsigned long pfn, nr_pages, expire;
+ long offlined_pages;
+ int ret, drain, retry_max;
+ struct zone *zone;
+
+ BUG_ON(start_pfn >= end_pfn);
+ /* at least, alignment against pageblock is necessary */
+ if (!IS_ALIGNED(start_pfn, pageblock_nr_pages))
+ return -EINVAL;
+ if (!IS_ALIGNED(end_pfn, pageblock_nr_pages))
+ return -EINVAL;
+ /* This makes hotplug much easier... and readable.
+ We assume this for now. */
+ if (!test_pages_in_a_zone(start_pfn, end_pfn))
+ return -EINVAL;
+ /* set above range as isolated */
+ ret = start_isolate_page_range(start_pfn, end_pfn);
+ if (ret)
+ return ret;
+ nr_pages = end_pfn - start_pfn;
+ pfn = start_pfn;
+ expire = jiffies + timeout;
+ drain = 0;
+ retry_max = 5;
+repeat:
+ /* start memory hot removal */
+ ret = -EAGAIN;
+ if (time_after(jiffies, expire))
+ goto failed_removal;
+ ret = -EINTR;
+ if (signal_pending(current))
+ goto failed_removal;
+ ret = 0;
+ if (drain) {
+ lru_add_drain_all();
+ flush_scheduled_work();
+ cond_resched();
+ drain_all_local_pages();
+ }
+
+ pfn = scan_lru_pages(start_pfn, end_pfn);
+ if (pfn) { /* We have page on LRU */
+ ret = do_migrate_range(pfn, end_pfn);
+ if (!ret) {
+ drain = 1;
+ goto repeat;
+ } else {
+ if (ret < 0)
+ if (--retry_max == 0)
+ goto failed_removal;
+ yield();
+ drain = 1;
+ goto repeat;
+ }
+ }
+ /* drain all zones' LRU pagevecs; this is asynchronous... */
+ lru_add_drain_all();
+ flush_scheduled_work();
+ yield();
+ /* drain pcp pages , this is synchrouns. */
+ drain_all_local_pages();
+ /* check again */
+ offlined_pages = check_pages_isolated(start_pfn, end_pfn);
+ if (offlined_pages < 0) {
+ ret = -EBUSY;
+ goto failed_removal;
+ }
+ printk(KERN_INFO "Offlined Pages %ld\n", offlined_pages);
+ /* Ok, all of our target is islaoted.
+ We cannot do rollback at this point. */
+ offline_isolated_pages(start_pfn, end_pfn);
+ /* reset pagetype flags */
+ start_isolate_page_range(start_pfn, end_pfn);
+ /* removal success */
+ zone = page_zone(pfn_to_page(start_pfn));
+ zone->present_pages -= offlined_pages;
+ zone->zone_pgdat->node_present_pages -= offlined_pages;
+ totalram_pages -= offlined_pages;
+ num_physpages -= offlined_pages;
+ vm_total_pages = nr_free_pagecache_pages();
+ writeback_set_ratelimit();
+ return 0;
+
+failed_removal:
+ printk(KERN_INFO "memory offlining %lx to %lx failed\n",
+ start_pfn, end_pfn);
+ /* pushback to free area */
+ undo_isolate_page_range(start_pfn, end_pfn);
+ return ret;
+}
+#endif /* CONFIG_MEMORY_HOTREMOVE */
Index: linux-2.6.22-rc6-mm1/include/linux/memory_hotplug.h
===================================================================
--- linux-2.6.22-rc6-mm1.orig/include/linux/memory_hotplug.h
+++ linux-2.6.22-rc6-mm1/include/linux/memory_hotplug.h
@@ -59,7 +59,10 @@ extern int add_one_highpage(struct page
 extern void online_page(struct page *page);
 /* VM interface that may be used by firmware interface */
 extern int online_pages(unsigned long, unsigned long);
-
+#ifdef CONFIG_MEMORY_HOTREMOVE
+extern int offline_pages(unsigned long, unsigned long, unsigned long);
+extern void __offline_isolated_pages(unsigned long, unsigned long);
+#endif
 /* reasonably generic interface to expand the physical pages in a zone */
 extern int __add_pages(struct zone *zone, unsigned long start_pfn,
 	unsigned long nr_pages);
Index: linux-2.6.22-rc6-mm1/mm/page_alloc.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/mm/page_alloc.c
+++ linux-2.6.22-rc6-mm1/mm/page_alloc.c
@@ -4456,3 +4456,50 @@ void unset_migratetype_isolate(struct pa
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
 }
+
+#ifdef CONFIG_MEMORY_HOTREMOVE
+/*
+ * All pages in the range must be isolated before calling this.
+ */
+void
+__offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
+{
+	struct page *page;
+	struct zone *zone;
+	int order, i;
+	unsigned long pfn;
+	unsigned long flags;
+	/* find the first valid pfn */
+	for (pfn = start_pfn; pfn < end_pfn; pfn++)
+		if (pfn_valid(pfn))
+			break;
+	if (pfn == end_pfn)
+		return;
+	zone = page_zone(pfn_to_page(pfn));
+	spin_lock_irqsave(&zone->lock, flags);
+	pfn = start_pfn;
+	while (pfn < end_pfn) {
+		if (!pfn_valid(pfn)) {
+			pfn++;
+			continue;
+		}
+		page = pfn_to_page(pfn);
+		BUG_ON(page_count(page));
+		BUG_ON(!PageBuddy(page));
+		order = page_order(page);
+#ifdef CONFIG_DEBUG_VM
+		printk(KERN_INFO "remove from free list %lx %d %lx\n",
+		       pfn, 1 << order, end_pfn);
+#endif
+		list_del(&page->lru);
+		rmv_page_order(page);
+		zone->free_area[order].nr_free--;
+		__mod_zone_page_state(zone, NR_FREE_PAGES,
+				      -(1UL << order));
+		for (i = 0; i < (1 << order); i++)
+			SetPageReserved(page + i);
+		pfn += 1 << order;
+	}
+	spin_unlock_irqrestore(&zone->lock, flags);
+}
+#endif
Index: linux-2.6.22-rc6-mm1/include/linux/kernel.h
===================================================================
--- linux-2.6.22-rc6-mm1.orig/include/linux/kernel.h
+++ linux-2.6.22-rc6-mm1/include/linux/kernel.h
@@ -34,6 +34,7 @@ extern const char linux_proc_banner[];
 #define ALIGN(x,a)		__ALIGN_MASK(x,(typeof(x))(a)-1)
 #define __ALIGN_MASK(x,mask)	(((x)+(mask))&~(mask))
+#define IS_ALIGNED(x,a)		(((x) % ((typeof(x))(a))) == 0)
 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>
* [PATCH] memory unplug v7 [6/6] - ia64 interface
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
` (4 preceding siblings ...)
2007-07-06 9:27 ` [PATCH] memory unplug v7 [5/6] - page offline KAMEZAWA Hiroyuki
@ 2007-07-06 9:28 ` KAMEZAWA Hiroyuki
2007-07-06 22:34 ` [PATCH] memory unplug v7 - introduction Andrew Morton
6 siblings, 0 replies; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 9:28 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, Christoph Lameter, mel
IA64 memory unplug interface.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
arch/ia64/mm/init.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
Index: linux-2.6.22-rc6-mm1/arch/ia64/mm/init.c
===================================================================
--- linux-2.6.22-rc6-mm1.orig/arch/ia64/mm/init.c
+++ linux-2.6.22-rc6-mm1/arch/ia64/mm/init.c
@@ -724,7 +724,17 @@ int arch_add_memory(int nid, u64 start,
 int remove_memory(u64 start, u64 size)
 {
-	return -EINVAL;
+	unsigned long start_pfn, end_pfn;
+	unsigned long timeout = 120 * HZ;
+	int ret;
+	start_pfn = start >> PAGE_SHIFT;
+	end_pfn = start_pfn + (size >> PAGE_SHIFT);
+	ret = offline_pages(start_pfn, end_pfn, timeout);
+	if (ret)
+		goto out;
+	/* we can free the mem_map at this point */
+out:
+	return ret;
 }
 EXPORT_SYMBOL_GPL(remove_memory);
 #endif
* Re: [PATCH] memory unplug v7 - migration by kernel
2007-07-06 9:23 ` [PATCH] memory unplug v7 - migration by kernel KAMEZAWA Hiroyuki
@ 2007-07-06 18:11 ` Christoph Lameter
0 siblings, 0 replies; 14+ messages in thread
From: Christoph Lameter @ 2007-07-06 18:11 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, mel
* Re: [PATCH] memory unplug v7 [2/6] - isolate_lru_page fix
2007-07-06 9:24 ` [PATCH] memory unplug v7 [2/6] - isolate_lru_page fix KAMEZAWA Hiroyuki
@ 2007-07-06 18:11 ` Christoph Lameter
0 siblings, 0 replies; 14+ messages in thread
From: Christoph Lameter @ 2007-07-06 18:11 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Andrew Morton, mel
* Re: [PATCH] memory unplug v7 [4/6] - page isolation
2007-07-06 9:26 ` [PATCH] memory unplug v7 [4/6] - page isolation KAMEZAWA Hiroyuki
@ 2007-07-06 22:28 ` Andrew Morton
2007-07-06 22:31 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2007-07-06 22:28 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Christoph Lameter, mel
On Fri, 6 Jul 2007 18:26:11 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> +/*
> + * start_isolate_page_range() -- make page-allocation-type of range of pages
> + * to be MIGRATE_ISOLATE.
I think kerneldoc requires that the above all be on a single line.
* Re: [PATCH] memory unplug v7 [4/6] - page isolation
2007-07-06 22:28 ` Andrew Morton
@ 2007-07-06 22:31 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 22:31 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, linux-mm, clameter, mel
On Fri, 6 Jul 2007 15:28:28 -0700
Andrew Morton <akpm@linux-foundation.org> wrote:
> On Fri, 6 Jul 2007 18:26:11 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
>
> > +/*
> > + * start_isolate_page_range() -- make page-allocation-type of range of pages
> > + * to be MIGRATE_ISOLATE.
>
> I think kerneldoc requires that the above all be on a single line.
>
Hmm...I'll read it again and fix this.
Thanks,
-Kame
* Re: [PATCH] memory unplug v7 - introduction
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
` (5 preceding siblings ...)
2007-07-06 9:28 ` [PATCH] memory unplug v7 [6/6] - ia64 interface KAMEZAWA Hiroyuki
@ 2007-07-06 22:34 ` Andrew Morton
2007-07-06 22:40 ` Christoph Lameter
2007-07-06 22:44 ` KAMEZAWA Hiroyuki
6 siblings, 2 replies; 14+ messages in thread
From: Andrew Morton @ 2007-07-06 22:34 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki; +Cc: LKML, linux-mm, Christoph Lameter, mel
On Fri, 6 Jul 2007 18:19:03 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> This is a memory unplug base patch set against 2.6.22-rc6-mm1.
Well I stuck these in -mm, but I don't know what they do. An overall
description of the design would make any review much more effective.
ie: what does it all do, and how does it do it?
Also a description of the test setup and the testing results would be
useful.
* Re: [PATCH] memory unplug v7 - introduction
2007-07-06 22:34 ` [PATCH] memory unplug v7 - introduction Andrew Morton
@ 2007-07-06 22:40 ` Christoph Lameter
2007-07-06 22:44 ` KAMEZAWA Hiroyuki
1 sibling, 0 replies; 14+ messages in thread
From: Christoph Lameter @ 2007-07-06 22:40 UTC (permalink / raw)
To: Andrew Morton; +Cc: KAMEZAWA Hiroyuki, LKML, linux-mm, mel
On Fri, 6 Jul 2007, Andrew Morton wrote:
> On Fri, 6 Jul 2007 18:19:03 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
>
> > This is a memory unplug base patch set against 2.6.22-rc6-mm1.
>
> Well I stuck these in -mm, but I don't know what they do. An overall
> description of the design would make any review much more effective.
>
> ie: what does it all do, and how does it do it?
The two patches that you just merged and that I acked are also
generally useful for page migration. They are also necessary for
Mel's memory compaction patchset.
* Re: [PATCH] memory unplug v7 - introduction
2007-07-06 22:34 ` [PATCH] memory unplug v7 - introduction Andrew Morton
2007-07-06 22:40 ` Christoph Lameter
@ 2007-07-06 22:44 ` KAMEZAWA Hiroyuki
1 sibling, 0 replies; 14+ messages in thread
From: KAMEZAWA Hiroyuki @ 2007-07-06 22:44 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-kernel, linux-mm, clameter, mel
On Fri, 6 Jul 2007 15:34:01 -0700
Andrew Morton <akpm@linux-foundation.org> wrote:
> On Fri, 6 Jul 2007 18:19:03 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
>
> > This is a memory unplug base patch set against 2.6.22-rc6-mm1.
>
> Well I stuck these in -mm, but I don't know what they do. An overall
> description of the design would make any review much more effective.
>
Ah yes, I also want more people to review this.
> ie: what does it all do, and how does it do it?
>
> Also a description of the test setup and the testing results would be
> useful.
>
Okay.
I'll try the following next week:
- Add "how to use" and the whole design to Documentation/vm/memory-hotplug.txt
- Add more detailed comments on the source code.
Thanks,
-Kame
Thread overview: 14+ messages
2007-07-06 9:19 [PATCH] memory unplug v7 - introduction KAMEZAWA Hiroyuki
2007-07-06 9:23 ` [PATCH] memory unplug v7 - migration by kernel KAMEZAWA Hiroyuki
2007-07-06 18:11 ` Christoph Lameter
2007-07-06 9:24 ` [PATCH] memory unplug v7 [2/6] - isolate_lru_page fix KAMEZAWA Hiroyuki
2007-07-06 18:11 ` Christoph Lameter
2007-07-06 9:25 ` [PATCH] memory unplug v7 [3/6] memory hotplug cleanup KAMEZAWA Hiroyuki
2007-07-06 9:26 ` [PATCH] memory unplug v7 [4/6] - page isolation KAMEZAWA Hiroyuki
2007-07-06 22:28 ` Andrew Morton
2007-07-06 22:31 ` KAMEZAWA Hiroyuki
2007-07-06 9:27 ` [PATCH] memory unplug v7 [5/6] - page offline KAMEZAWA Hiroyuki
2007-07-06 9:28 ` [PATCH] memory unplug v7 [6/6] - ia64 interface KAMEZAWA Hiroyuki
2007-07-06 22:34 ` [PATCH] memory unplug v7 - introduction Andrew Morton
2007-07-06 22:40 ` Christoph Lameter
2007-07-06 22:44 ` KAMEZAWA Hiroyuki