From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
To: Andrea Arcangeli <aarcange@redhat.com>
Cc: kosaki.motohiro@jp.fujitsu.com,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Hugh Dickins <hugh.dickins@tiscali.co.uk>,
	Andrew Morton <akpm@linux-foundation.org>,
	Izik Eidus <ieidus@redhat.com>, Chris Wright <chrisw@redhat.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 2/9] ksm: let shared pages be swappable
Date: Tue,  1 Dec 2009 18:28:16 +0900 (JST)
Message-ID: <20091201181633.5C31.A69D9226@jp.fujitsu.com>
In-Reply-To: <20091201091111.GK30235@random.random>

> On Tue, Dec 01, 2009 at 09:39:45AM +0900, KAMEZAWA Hiroyuki wrote:
> > Maybe some modification to lru scanning is necessary independent from ksm.
> > I think.
> 
> It looks independent from ksm, yes. Larry's case especially has cpus
> hanging in fork, and for those cpus to make progress it'd be enough
> to release the anon_vma lock for a little while. I think counting
> the number of young bits we found might be enough to fix this (at
> least for anon_vma, where we can easily randomize the ptes we scan).
> Let's just break the rmap loop of page_referenced() after we've
> cleared N young bits. If we found that many young bits it's
> pointless to continue. It still looks preferable to doing nothing or
> to a full scan depending on a magic mapcount value. It's preferable
> because we'll do real work incrementally and we give heavily mapped
> but totally unused pages a chance to go away in perfect lru order.
>
> Sure, we can still end up with an anon_vma chain of length 10000 (or
> rmap_item chain, or prio_tree scan) with all N young bits set in the
> very last N vmas we check. But statistically, with so many mappings
> such a scenario has a very low probability of materializing. It's
> not very useful to be so aggressive on a page where the young bits
> are refreshed quickly all the time because of plenty of mappings and
> many of them using the page. If we do this, we also have to rotate
> the anon_vma list to start from a new vma, which globally means
> randomizing it. For anon_vma (and conceptually for the ksm
> rmap_item, not sure in implementation terms) it's trivial to rotate
> to randomize the young bit scan. For the prio_tree (which includes
> tmpfs) it's much harder.
> 
> In addition to returning 1 every N young bits cleared, we should
> ideally also have a spin_needbreak() for the rmap lock so things
> like fork can continue against page_referenced_one and try_to_unmap
> too. Even for the prio_tree we could record the prio_tree position
> on the stack and add a bit that signals when the prio_tree got
> modified under us. But if the rmap structure is modified from under
> us we're in deep trouble: after that we have to either restart from
> scratch (risking a livelock in page_referenced(), so not really
> feasible) or alternatively return 1, breaking the loop, which would
> make the VM less reliable (meaning we would increase the probability
> of a spurious OOM). Somebody could just mmap the hugely mapped file
> from another task in a loop, and prevent page_referenced_one and
> try_to_unmap from ever completing on all pages of that file! So I
> don't really know how to implement the spin_needbreak without making
> the VM exploitable. But I'm quite confident there is no way the
> below can make the VM less reliable, and the spin_needbreak is much
> less relevant for anon_vma than it is for the prio_tree, because
> it's trivial to randomize the ptes we scan for the young bit with
> anon_vma. Maybe this is also enough to fix tmpfs.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -60,6 +60,8 @@
>  
>  #include "internal.h"
>  
> +#define MAX_YOUNG_BIT_CLEARED 64
> +
>  static struct kmem_cache *anon_vma_cachep;
>  
>  static inline struct anon_vma *anon_vma_alloc(void)
> @@ -420,6 +422,24 @@ static int page_referenced_anon(struct p
>  						  &mapcount, vm_flags);
>  		if (!mapcount)
>  			break;
> +
> +		/*
> +		 * Break the loop early if we found many active
> +		 * mappings and go deep into the long chain only if
> +		 * this looks a fully unused page. Otherwise we only
> +		 * waste this cpu and we hang other CPUs too that
> +		 * might be waiting on our lock to be released.
> +		 */
> +		if (referenced >= MAX_YOUNG_BIT_CLEARED) {
> +			/*
> +			 * randomize the MAX_YOUNG_BIT_CLEARED ptes
> +			 * that we scan at every page_referenced_one()
> +			 * call on this page.
> +			 */
> +			list_del(&anon_vma->head);
> +			list_add(&anon_vma->head, &vma->anon_vma_node);
> +			break;
> +		}
>  	}

This patch doesn't work correctly. shrink_active_list() uses page_referenced()
only to clear the young bits and doesn't use its return value.
With this patch applied, shrink_active_list() moves the page to the inactive
list even though the page still has many young bits set; then the next
shrink_inactive_list() moves the page back to the active list again.
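
For reference, this is roughly the loop being described (a sketch from
memory of the 2.6.32-era shrink_active_list() in mm/vmscan.c, not a
verbatim quote; helper names and arguments such as page_mapping_inuse(),
page_is_file_cache(), sc->mem_cgroup and &vm_flags are recalled rather
than copied and may not match exactly):

	while (!list_empty(&l_hold)) {
		page = lru_to_page(&l_hold);
		list_del(&page->lru);

		/*
		 * page_referenced() is called here to clear the young
		 * bits; its return value only saves referenced
		 * file-backed executable pages from deactivation.
		 */
		if (page_mapping_inuse(page) &&
		    page_referenced(page, 0, sc->mem_cgroup, &vm_flags)) {
			nr_rotated++;
			if ((vm_flags & VM_EXEC) && page_is_file_cache(page)) {
				list_add(&page->lru, &l_active);
				continue;
			}
		}

		/* anon (and ksm) pages are deactivated regardless */
		ClearPageActive(page);
		list_add(&page->lru, &l_inactive);
	}

So an anon page whose scan was cut short by MAX_YOUNG_BIT_CLEARED still
ends up on the inactive list with most of its young bits intact.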



>  	page_unlock_anon_vma(anon_vma);
> @@ -485,6 +505,16 @@ static int page_referenced_file(struct p
>  						  &mapcount, vm_flags);
>  		if (!mapcount)
>  			break;
> +
> +		/*
> +		 * Break the loop early if we found many active
> +		 * mappings and go deep into the long chain only if
> +		 * this looks a fully unused page. Otherwise we only
> +		 * waste this cpu and we hang other CPUs too that
> +		 * might be waiting on our lock to be released.
> +		 */
> +		if (referenced >= MAX_YOUNG_BIT_CLEARED)
> +			break;
>  	}
>  
>  	spin_unlock(&mapping->i_mmap_lock);


