From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
To: Larry Woodman <lwoodman@redhat.com>
Cc: kosaki.motohiro@jp.fujitsu.com, Rik van Riel <riel@redhat.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
akpm@linux-foundation.org,
Hugh Dickins <hugh.dickins@tiscali.co.uk>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Andrea Arcangeli <aarcange@redhat.com>
Subject: Re: [RFC] high system time & lock contention running large mixed workload
Date: Fri, 4 Dec 2009 09:36:21 +0900 (JST)
Message-ID: <20091204092445.587D.A69D9226@jp.fujitsu.com>
In-Reply-To: <1259878496.2345.57.camel@dhcp-100-19-198.bos.redhat.com>
> On Tue, 2009-12-01 at 21:20 -0500, Rik van Riel wrote:
>
> > This is reasonable, except for the fact that pages that are moved
> > to the inactive list without having the referenced bit cleared are
> > guaranteed to be moved back to the active list.
> >
> > You'll be better off without that excess list movement, by simply
> > moving pages directly back onto the active list if the trylock
> > fails.
> >
>
>
> The attached patch addresses this issue by changing page_check_address()
> to return -1 if the spin_trylock() fails and page_referenced_one() to
> return 1 in that path so the page gets moved back to the active list.
>
> Also, BTW, check this out: an 8-CPU/16GB system running AIM 7 Compute
> has 196491 isolated_anon pages. This means that ~6140 processes are
> somewhere down in try_to_free_pages(), since we only isolate 32 pages at
> a time; this is out of 9000 processes...
>
>
> ---------------------------------------------------------------------
> active_anon:2140361 inactive_anon:453356 isolated_anon:196491
> active_file:3438 inactive_file:1100 isolated_file:0
> unevictable:2802 dirty:153 writeback:0 unstable:0
> free:578920 slab_reclaimable:49214 slab_unreclaimable:93268
> mapped:1105 shmem:0 pagetables:139100 bounce:0
>
> Node 0 Normal free:1647892kB min:12500kB low:15624kB high:18748kB
> active_anon:7835452kB inactive_anon:785764kB active_file:13672kB
> inactive_file:4352kB unevictable:11208kB isolated(anon):785964kB
> isolated(file):0kB present:12410880kB mlocked:11208kB dirty:604kB
> writeback:0kB mapped:4344kB shmem:0kB slab_reclaimable:177792kB
> slab_unreclaimable:368676kB kernel_stack:73256kB pagetables:489972kB
> unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0
>
> 202895 total pagecache pages
> 197629 pages in swap cache
> Swap cache stats: add 6954838, delete 6757209, find 1251447/2095005
> Free swap = 65881196kB
> Total swap = 67354616kB
> 3997696 pages RAM
> 207046 pages reserved
> 1688629 pages shared
> 3016248 pages non-shared
This suggests we have to improve the reclaim bail-out logic. The system
already has 1.5GB of free pages; IOW, the system doesn't need any more
swap-out.
> @@ -352,9 +359,11 @@ static int page_referenced_one(struct page *page,
> if (address == -EFAULT)
> goto out;
>
> - pte = page_check_address(page, mm, address, &ptl, 0);
> + pte = page_check_address(page, mm, address, &ptl, 0, trylock);
> if (!pte)
> goto out;
> + else if (pte == (pte_t *)-1)
> + return 1;
>
> /*
> * Don't want to elevate referenced for mlocked page that gets this far,
Sorry, NAK.
I have to repeat the point Rik made earlier: shrink_active_list()
ignores the return value of page_referenced(), so the 'return 1' above
is meaningless.
Umm, ok, I'll make the patch myself.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org