From: Mel Gorman <mel@csn.ul.ie>
To: Minchan Kim <minchan.kim@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Christoph Lameter <cl@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Adam Litke <agl@us.ibm.com>, Avi Kivity <avi@redhat.com>,
David Rientjes <rientjes@google.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Rik van Riel <riel@redhat.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 04/14] mm,migration: Allow the migration of PageSwapCache pages
Date: Thu, 22 Apr 2010 16:44:43 +0100
Message-ID: <20100422154443.GD30306@csn.ul.ie>
In-Reply-To: <1271947206.2100.216.camel@barrios-desktop>
On Thu, Apr 22, 2010 at 11:40:06PM +0900, Minchan Kim wrote:
> On Thu, 2010-04-22 at 23:23 +0900, Minchan Kim wrote:
> > On Thu, 2010-04-22 at 19:51 +0900, KAMEZAWA Hiroyuki wrote:
> > > On Thu, 22 Apr 2010 19:31:06 +0900
> > > KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > >
> > > > On Thu, 22 Apr 2010 19:13:12 +0900
> > > > Minchan Kim <minchan.kim@gmail.com> wrote:
> > > >
> > > > > On Thu, Apr 22, 2010 at 6:46 PM, KAMEZAWA Hiroyuki
> > > > > <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > > >
> > > > > > Hmm..in my test, the case was.
> > > > > >
> > > > > > Before try_to_unmap:
> > > > > > mapcount=1, SwapCache, remap_swapcache=1
> > > > > > After remap
> > > > > > mapcount=0, SwapCache, rc=0.
> > > > > >
> > > > > > So, I think there may be some race in rmap_walk() and vma handling
> > > > > > or anon_vma handling; the migration_entry isn't found by rmap_walk().
> > > > > >
> > > > > > Hmm.. it seems this kind of patch will be required for debugging.
> > > > >
> > >
> > > Ok, here is my patch for a _fix_. But still testing...
> > > It has run well for at least 30 minutes, in a setup where I could
> > > previously see the bug within 10 minutes.
> > > But this patch is too naive; please think about a better fix.
> > >
> > > ==
> > > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > >
> > > At vma_adjust(), the vma's start address and pgoff are updated under
> > > the write lock of mmap_sem. This means the update of the vma's rmap
> > > information looks atomic only to readers holding mmap_sem; a lock-free
> > > reader can observe the fields mid-update.
> > >
> > > Even when a torn update is seen, in the usual case try_to_unmap()
> > > etc. merely fail to decrease the mapcount to 0, which is no problem.
> > >
> > > But page migration's rmap_walk() must find every migration_entry in
> > > the page tables in order to recover the mapcount.
> > >
> > > So this race on the vma's address is critical. When rmap_walk() hits
> > > the race, it mistakenly gets -EFAULT and does not call rmap_one().
> > > This patch adds a lock for the vma's rmap information. But this is
> > > _very slow_.
> > > We need a more sophisticated, light-weight update for this..
> > >
> > >
> > > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > > ---
> > > include/linux/mm_types.h | 1 +
> > > kernel/fork.c | 1 +
> > > mm/mmap.c | 11 ++++++++++-
> > > mm/rmap.c | 3 +++
> > > 4 files changed, 15 insertions(+), 1 deletion(-)
> > >
> > > Index: linux-2.6.34-rc4-mm1/include/linux/mm_types.h
> > > ===================================================================
> > > --- linux-2.6.34-rc4-mm1.orig/include/linux/mm_types.h
> > > +++ linux-2.6.34-rc4-mm1/include/linux/mm_types.h
> > > @@ -183,6 +183,7 @@ struct vm_area_struct {
> > > #ifdef CONFIG_NUMA
> > > struct mempolicy *vm_policy; /* NUMA policy for the VMA */
> > > #endif
> > > + spinlock_t adjust_lock;
> > > };
> > >
> > > struct core_thread {
> > > Index: linux-2.6.34-rc4-mm1/mm/mmap.c
> > > ===================================================================
> > > --- linux-2.6.34-rc4-mm1.orig/mm/mmap.c
> > > +++ linux-2.6.34-rc4-mm1/mm/mmap.c
> > > @@ -584,13 +584,20 @@ again: remove_next = 1 + (end > next->
> > > if (adjust_next)
> > > vma_prio_tree_remove(next, root);
> > > }
> > > -
> > > + /*
> > > + * Change all parameters atomically. Otherwise vma_address()
> > > + * in rmap.c can see an inconsistent result.
> > > + */
> > > + spin_lock(&vma->adjust_lock);
> > > vma->vm_start = start;
> > > vma->vm_end = end;
> > > vma->vm_pgoff = pgoff;
> > > + spin_unlock(&vma->adjust_lock);
> > > if (adjust_next) {
> > > + spin_lock(&next->adjust_lock);
> > > next->vm_start += adjust_next << PAGE_SHIFT;
> > > next->vm_pgoff += adjust_next;
> > > + spin_unlock(&next->adjust_lock);
> > > }
> > >
> > > if (root) {
> > > @@ -1939,6 +1946,7 @@ static int __split_vma(struct mm_struct
> > > *new = *vma;
> > >
> > > INIT_LIST_HEAD(&new->anon_vma_chain);
> > > + spin_lock_init(&new->adjust_lock);
> > >
> > > if (new_below)
> > > new->vm_end = addr;
> > > @@ -2338,6 +2346,7 @@ struct vm_area_struct *copy_vma(struct v
> > > if (IS_ERR(pol))
> > > goto out_free_vma;
> > > INIT_LIST_HEAD(&new_vma->anon_vma_chain);
> > > + spin_lock_init(&new_vma->adjust_lock);
> > > if (anon_vma_clone(new_vma, vma))
> > > goto out_free_mempol;
> > > vma_set_policy(new_vma, pol);
> > > Index: linux-2.6.34-rc4-mm1/kernel/fork.c
> > > ===================================================================
> > > --- linux-2.6.34-rc4-mm1.orig/kernel/fork.c
> > > +++ linux-2.6.34-rc4-mm1/kernel/fork.c
> > > @@ -350,6 +350,7 @@ static int dup_mmap(struct mm_struct *mm
> > > goto fail_nomem;
> > > *tmp = *mpnt;
> > > INIT_LIST_HEAD(&tmp->anon_vma_chain);
> > > + spin_lock_init(&tmp->adjust_lock);
> > > pol = mpol_dup(vma_policy(mpnt));
> > > retval = PTR_ERR(pol);
> > > if (IS_ERR(pol))
> > > Index: linux-2.6.34-rc4-mm1/mm/rmap.c
> > > ===================================================================
> > > --- linux-2.6.34-rc4-mm1.orig/mm/rmap.c
> > > +++ linux-2.6.34-rc4-mm1/mm/rmap.c
> > > @@ -332,11 +332,14 @@ vma_address(struct page *page, struct vm
> > > pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
> > > unsigned long address;
> > >
> > > + spin_lock(&vma->adjust_lock);
> > > address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> > > if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
> > > + spin_unlock(&vma->adjust_lock);
> > > /* page should be within @vma mapping range */
> > > return -EFAULT;
> > > }
> > > + spin_unlock(&vma->adjust_lock);
> > > return address;
> > > }
> > >
> >
> > Nice catch, Kame. :)
> >
> > As a further optimization, we could take vma->adjust_lock only when the
> > unlocked vma_address() returns -EFAULT. But I hope we can redesign this
> > without new locking. I don't have a good idea yet, though. :(
>
> How about this?
> I just merged the ideas of Mel and Kame. :)
>
> It just shows the concept, not formal patch.
>
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index f90ea92..61ea742 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -578,6 +578,8 @@ again: remove_next = 1 + (end > next->vm_end);
> }
> }
>
> + if (vma->anon_vma)
> + spin_lock(&vma->anon_vma->lock);
> if (root) {
> flush_dcache_mmap_lock(mapping);
> vma_prio_tree_remove(vma, root);
> @@ -619,7 +621,8 @@ again: remove_next = 1 + (end > next->vm_end);
>
> if (mapping)
> spin_unlock(&mapping->i_mmap_lock);
> -
> + if (vma->anon_vma)
> + spin_unlock(&vma->anon_vma->lock);
> if (remove_next) {
> if (file) {
> fput(file);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 3a53d9f..8075057 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1359,9 +1359,22 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
> spin_lock(&anon_vma->lock);
> list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
> struct vm_area_struct *vma = avc->vma;
> - unsigned long address = vma_address(page, vma);
> - if (address == -EFAULT)
> + struct anon_vma *tmp_anon_vma = vma->anon_vma;
> + unsigned long address;
> + int tmp_vma_lock = 0;
> +
> + if (tmp_anon_vma != anon_vma) {
> + spin_lock(&tmp_anon_vma->lock);
> + tmp_vma_lock = 1;
> + }
Heh, I thought of a similar approach at the same time as you but missed
this mail until later. However, with this approach I suspect two walkers
of the same anon_vma list could deadlock if each holds one lock on the
list while spinning for a lock the other holds. I am still thinking about
how that could be resolved without introducing new locking.
> + address = vma_address(page, vma);
> + if (address == -EFAULT) {
> + if (tmp_vma_lock)
> + spin_unlock(&tmp_anon_vma->lock);
> continue;
> + }
> + if (tmp_vma_lock)
> + spin_unlock(&tmp_anon_vma->lock);
> ret = rmap_one(page, vma, address, arg);
> if (ret != SWAP_AGAIN)
> break;
> --
> 1.7.0.5
>
>
>
> --
> Kind regards,
> Minchan Kim
>
>
--
Mel Gorman
Part-time PhD Student, University of Limerick
Linux Technology Center, IBM Dublin Software Lab
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org