From: Tang Chen
Subject: [PATCH v2 10/13] x86, acpi, numa, mem-hotplug: Introduce MEMBLK_HOTPLUGGABLE to mark and reserve hotpluggable memory.
Date: Tue, 30 Apr 2013 17:21:20 +0800
Message-Id: <1367313683-10267-11-git-send-email-tangchen@cn.fujitsu.com>
In-Reply-To: <1367313683-10267-1-git-send-email-tangchen@cn.fujitsu.com>
References: <1367313683-10267-1-git-send-email-tangchen@cn.fujitsu.com>
To: mingo@redhat.com, hpa@zytor.com, akpm@linux-foundation.org, yinghai@kernel.org, jiang.liu@huawei.com, wency@cn.fujitsu.com, isimatu.yasuaki@jp.fujitsu.com, tj@kernel.org, laijs@cn.fujitsu.com, davem@davemloft.net, mgorman@suse.de, minchan@kernel.org, mina86@mina86.com
Cc: x86@kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org

We mark out movable memory ranges and reserve them in memblock.reserved
with the MEMBLK_HOTPLUGGABLE flag. This must be done after the memory
mapping is initialized, because the kernel now supports allocating
pagetable pages on the local node, and those are kernel pages. The
reserved hotpluggable memory will be freed to the buddy allocator when
memory initialization is done.

This idea is from Wen Congyang and Jiang Liu.
Suggested-by: Jiang Liu
Suggested-by: Wen Congyang
Signed-off-by: Tang Chen
---
 arch/x86/mm/numa.c       |   28 ++++++++++++++++++++++++++++
 include/linux/memblock.h |    3 +++
 mm/memblock.c            |   19 +++++++++++++++++++
 3 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 1367fe4..a1f1f90 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -731,6 +731,32 @@ static void __init early_x86_numa_init_mapping(void)
 }
 #endif
 
+#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
+static void __init early_mem_hotplug_init()
+{
+	int i, nid;
+	phys_addr_t start, end;
+
+	if (!movablecore_enable_srat)
+		return;
+
+	for (i = 0; i < numa_meminfo.nr_blks; i++) {
+		if (!numa_meminfo.blk[i].hotpluggable)
+			continue;
+
+		nid = numa_meminfo.blk[i].nid;
+		start = numa_meminfo.blk[i].start;
+		end = numa_meminfo.blk[i].end;
+
+		memblock_reserve_hotpluggable(start, end - start, nid);
+	}
+}
+#else	/* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
+static inline void early_mem_hotplug_init()
+{
+}
+#endif	/* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
+
 void __init early_initmem_init(void)
 {
 	early_x86_numa_init();
@@ -740,6 +766,8 @@ void __init early_initmem_init(void)
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 
+	early_mem_hotplug_init();
+
 	early_memtest(0, max_pfn_mapped<<PAGE_SHIFT);