On Tue, 21 Oct 2008 17:57:35 +0900, KAMEZAWA Hiroyuki wrote:
> On Tue, 21 Oct 2008 16:35:09 +0800
> Li Zefan wrote:
>
> > KAMEZAWA Hiroyuki wrote:
> > > On Tue, 21 Oct 2008 15:21:07 +0800
> > > Li Zefan wrote:
> > >> dmesg is attached.
> > >>
> > > Thanks....I think I caught some. (added Mel Gorman to CC:)
> > >
> > > NODE_DATA(nid)->spanned_pages just means the sum of zone->spanned_pages in the node.
> > >
> > > So, if there is a hole between zones, node->spanned_pages doesn't mean the
> > > length of the node's memmap....(then, some holes can be skipped.)
> > >
> > > OMG....Could you try this?
> > >
> >
> > No luck, the same bug still exists. :(
> >
> This is a little fixed one..
>
I can reproduce a similar problem (a hang on boot) on 2.6.27-git9, but this patch doesn't help either in my environment... I attach a console log (I've not seen a NULL pointer dereference yet).

Daisuke Nishimura.

> please..
> -Kame
> ==
> NODE_DATA(nid)->node_spanned_pages doesn't mean the width of the node's memory,
> but alloc_node_page_cgroup() misunderstands it. This patch tries to use
> the same algorithm as alloc_node_mem_map() for allocating the page_cgroup
> table for a node.
>
> Changelog:
> - fixed range of initialization loop.
>
> Signed-off-by: KAMEZAWA Hiroyuki
>
>  mm/page_cgroup.c | 19 +++++++++++++++----
>  1 file changed, 15 insertions(+), 4 deletions(-)
>
> Index: linux-2.6.27/mm/page_cgroup.c
> ===================================================================
> --- linux-2.6.27.orig/mm/page_cgroup.c
> +++ linux-2.6.27/mm/page_cgroup.c
> @@ -9,6 +9,8 @@
>  static void __meminit
>  __init_page_cgroup(struct page_cgroup *pc, unsigned long pfn)
>  {
> +	if (!pfn_valid(pfn))
> +		return;
>  	pc->flags = 0;
>  	pc->mem_cgroup = NULL;
>  	pc->page = pfn_to_page(pfn);
> @@ -41,10 +43,18 @@ static int __init alloc_node_page_cgroup
>  {
>  	struct page_cgroup *base, *pc;
>  	unsigned long table_size;
> -	unsigned long start_pfn, nr_pages, index;
> +	unsigned long start, end, start_pfn, nr_pages, index;
>
> +	/*
> +	 * Instead of allocating page_cgroup for [start, end),
> +	 * we allocate page_cgroup to the same size as mem_map.
> +	 * See page_alloc.c::alloc_node_mem_map()
> +	 */
>  	start_pfn = NODE_DATA(nid)->node_start_pfn;
> -	nr_pages = NODE_DATA(nid)->node_spanned_pages;
> +	start = start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
> +	end = start_pfn + NODE_DATA(nid)->node_spanned_pages;
> +	end = ALIGN(end, MAX_ORDER_NR_PAGES);
> +	nr_pages = end - start;
>
>  	table_size = sizeof(struct page_cgroup) * nr_pages;
>
> @@ -52,11 +62,12 @@ static int __init alloc_node_page_cgroup
>  		table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
>  	if (!base)
>  		return -ENOMEM;
> +
>  	for (index = 0; index < nr_pages; index++) {
>  		pc = base + index;
> -		__init_page_cgroup(pc, start_pfn + index);
> +		__init_page_cgroup(pc, start + index);
>  	}
> -	NODE_DATA(nid)->node_page_cgroup = base;
> +	NODE_DATA(nid)->node_page_cgroup = base + start_pfn - start;
>  	total_usage += table_size;
>  	return 0;
>  }