From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 22 Apr 2010 19:51:53 +0900
Subject: Re: [PATCH 04/14] mm,migration: Allow the migration of PageSwapCache pages
Message-Id: <20100422195153.d91c1c9e.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20100422193106.9ffad4ec.kamezawa.hiroyu@jp.fujitsu.com>
References: <1271797276-31358-1-git-send-email-mel@csn.ul.ie>
 <20100421150037.GJ30306@csn.ul.ie>
 <20100421151417.GK30306@csn.ul.ie>
 <20100421153421.GM30306@csn.ul.ie>
 <20100422092819.GR30306@csn.ul.ie>
 <20100422184621.0aaaeb5f.kamezawa.hiroyu@jp.fujitsu.com>
 <20100422193106.9ffad4ec.kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki
Cc: Minchan Kim, Mel Gorman, Christoph Lameter, Andrew Morton,
 Andrea Arcangeli, Adam Litke, Avi Kivity, David Rientjes,
 KOSAKI Motohiro, Rik van Riel, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org

On Thu, 22 Apr 2010 19:31:06 +0900 KAMEZAWA Hiroyuki wrote:
> On Thu, 22 Apr 2010 19:13:12 +0900 Minchan Kim wrote:
> > On Thu, Apr 22, 2010 at 6:46 PM, KAMEZAWA Hiroyuki wrote:
> > > > Hmm..in my test, the case was.
> > >
> > > Before try_to_unmap:
> > >         mapcount=1, SwapCache, remap_swapcache=1
> > > After remap
> > >         mapcount=0, SwapCache, rc=0.
> > >
> > > So, I think there may be some race in rmap_walk() and vma handling or
> > > anon_vma handling. migration_entry isn't found by rmap_walk.
> > >
> > > Hmm..it seems this kind of patch will be required for debug.
>
> Ok, here is my patch for _fix_. But still testing...

It has been running well for at least 30 minutes now, in a setup where I can
usually see the bug within 10 minutes. But this patch is too naive; please
think about a better fix.
==
From: KAMEZAWA Hiroyuki

At vma_adjust(), the vma's start address and pgoff are updated under the
write lock of mmap_sem. This means the update of the vma's rmap information
is atomic only with respect to holders of the read lock of mmap_sem. Even
when the update is not atomic, in the usual case try_to_unmap() etc. just
fails to decrease mapcount to 0, which is no problem. But page migration's
rmap_walk() must find all migration_entry entries in the page tables and
recover the mapcount, so this race on the vma's address is critical: when
rmap_walk() meets the race, it mistakenly gets -EFAULT and doesn't call
rmap_one().

This patch adds a lock for the vma's rmap information. But this is _very
slow_. We need some sophisticated, light-weight way to do this update.
Signed-off-by: KAMEZAWA Hiroyuki
---
 include/linux/mm_types.h |    1 +
 kernel/fork.c            |    1 +
 mm/mmap.c                |   11 ++++++++++-
 mm/rmap.c                |    3 +++
 4 files changed, 15 insertions(+), 1 deletion(-)

Index: linux-2.6.34-rc4-mm1/include/linux/mm_types.h
===================================================================
--- linux-2.6.34-rc4-mm1.orig/include/linux/mm_types.h
+++ linux-2.6.34-rc4-mm1/include/linux/mm_types.h
@@ -183,6 +183,7 @@ struct vm_area_struct {
 #ifdef CONFIG_NUMA
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
+	spinlock_t adjust_lock;
 };
 
 struct core_thread {
Index: linux-2.6.34-rc4-mm1/mm/mmap.c
===================================================================
--- linux-2.6.34-rc4-mm1.orig/mm/mmap.c
+++ linux-2.6.34-rc4-mm1/mm/mmap.c
@@ -584,13 +584,20 @@ again:			remove_next = 1 + (end > next->
 		if (adjust_next)
 			vma_prio_tree_remove(next, root);
 	}
-
+	/*
+	 * changing all params in atomic. If not, vma_address in rmap.c
+	 * can see wrong result.
+	 */
+	spin_lock(&vma->adjust_lock);
 	vma->vm_start = start;
 	vma->vm_end = end;
 	vma->vm_pgoff = pgoff;
+	spin_unlock(&vma->adjust_lock);
 	if (adjust_next) {
+		spin_lock(&next->adjust_lock);
 		next->vm_start += adjust_next << PAGE_SHIFT;
 		next->vm_pgoff += adjust_next;
+		spin_unlock(&next->adjust_lock);
 	}
 
 	if (root) {
@@ -1939,6 +1946,7 @@ static int __split_vma(struct mm_struct
 	*new = *vma;
 
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+	spin_lock_init(&new->adjust_lock);
 
 	if (new_below)
 		new->vm_end = addr;
@@ -2338,6 +2346,7 @@ struct vm_area_struct *copy_vma(struct v
 		if (IS_ERR(pol))
 			goto out_free_vma;
 		INIT_LIST_HEAD(&new_vma->anon_vma_chain);
+		spin_lock_init(&new_vma->adjust_lock);
 		if (anon_vma_clone(new_vma, vma))
 			goto out_free_mempol;
 		vma_set_policy(new_vma, pol);
Index: linux-2.6.34-rc4-mm1/kernel/fork.c
===================================================================
--- linux-2.6.34-rc4-mm1.orig/kernel/fork.c
+++ linux-2.6.34-rc4-mm1/kernel/fork.c
@@ -350,6 +350,7 @@ static int dup_mmap(struct mm_struct *mm
 			goto fail_nomem;
 		*tmp = *mpnt;
 		INIT_LIST_HEAD(&tmp->anon_vma_chain);
+		spin_lock_init(&tmp->adjust_lock);
 		pol = mpol_dup(vma_policy(mpnt));
 		retval = PTR_ERR(pol);
 		if (IS_ERR(pol))
Index: linux-2.6.34-rc4-mm1/mm/rmap.c
===================================================================
--- linux-2.6.34-rc4-mm1.orig/mm/rmap.c
+++ linux-2.6.34-rc4-mm1/mm/rmap.c
@@ -332,11 +332,14 @@ vma_address(struct page *page, struct vm
 	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
 	unsigned long address;
 
+	spin_lock(&vma->adjust_lock);
 	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 	if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
+		spin_unlock(&vma->adjust_lock);
 		/* page should be within @vma mapping range */
 		return -EFAULT;
 	}
+	spin_unlock(&vma->adjust_lock);
 	return address;
 }