From: Satoru Moriya <satoru.moriya@hds.com>
To: Rik van Riel <riel@redhat.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"mel@csn.ul.ie" <mel@csn.ul.ie>,
	"kosaki.motohiro@jp.fujitsu.com" <kosaki.motohiro@jp.fujitsu.com>,
	"rdunlap@xenotime.net" <rdunlap@xenotime.net>,
	"dle-develop@lists.sourceforge.net"
	<dle-develop@lists.sourceforge.net>,
	Seiji Aguchi <seiji.aguchi@hds.com>
Subject: RE: [RFC][PATCH 0/2] Tunable watermark
Date: Thu, 10 Feb 2011 13:30:13 -0500	[thread overview]
Message-ID: <65795E11DBF1E645A09CEC7EAEE94B9C3BCD59E6@USINDEVS02.corp.hds.com> (raw)
In-Reply-To: <4D38D070.2050802@redhat.com>

On 01/20/2011 07:16 PM, Rik van Riel wrote:
> On 01/07/2011 05:03 PM, Satoru Moriya wrote:
> 
> > The result is following.
> >
> >                   | default |  case 1   |  case 2 |
> > ----------------------------------------------------------
> > wmark_min_kbytes  |  5752   |    5752   |   5752  |
> > wmark_low_kbytes  |  7190   |   16384   |  32768  | (KB)
> > wmark_high_kbytes |  8628   |   20480   |  40960  |
> > ----------------------------------------------------------
> > real              |   503   |    364    |    337  |
> > user              |     3   |      5    |      4  | (msec)
> > sys               |   153   |    149    |    146  |
> > ----------------------------------------------------------
> > page fault        |  32768  |  32768    |  32768  |
> > kswapd_wakeup     |   1809  |    335    |    228  | (times)
> > direct reclaim    |      5  |      0    |      0  |
> >
> > As you can see, direct reclaim was performed 5 times and
> > its exec time was 503 msec in the default case. On the other
> > hand, in case 1 (large delta case ) no direct reclaim was
> > performed and its exec time was 364 msec.
> 
> Saving 1.5 seconds on a one-off workload is probably not
> worth the complexity of giving a system administrator
> yet another set of tunables to mess with.

The table above shows averages, but averages alone may not be enough.
In a low-latency enterprise system, the worst-case latency is what
matters most. I recorded the worst-case latency per page allocation;
here it is.

                    | default |  case 1   |  case 2 |
----------------------------------------------------------
worst latency       |   223   |    75     |    50   | (usec)  
 per one page alloc |         |           |         |

In the default case the worst latency is 223 usec, and direct reclaim
occurred at that moment. Our target latency, however, is under 100 usec,
so I'd like a way to guarantee that direct reclaim is never executed
in certain situations.

> However, I suspect it may be a good idea if the kernel
> could adjust these watermarks automatically, since direct
> reclaim could lead to quite a big performance penalty.
> 
> I do not know which events should be used to increase and
> decrease the watermarks, but I have some ideas:
> - direct reclaim (increase)
> - kswapd has trouble freeing pages (increase)
> - kswapd frees enough memory at DEF_PRIORITY (decrease)
> - next to no direct reclaim events in the last N (1000?)
>    reclaim events (decrease)

I think that might be a good idea, but it is not enough on its own,
because it can't avoid direct reclaim completely. So what do you think
of adding a learning mode to your idea? In learning mode, the kernel
would calculate appropriate watermarks, and users would apply them on
the next boot.

This is useful for an enterprise system because we normally run
performance/stress tests and tune the system before release. If we run
the stress tests in learning mode, we get appropriate watermarks for
that system; by using them we can avoid direct reclaim and keep latency
low enough on a production system.

> I guess we will also need to be sure that the watermarks
> are never raised above some sane upper threshold.  Maybe
> 4x or 5x the default?
> 
> 
> --
> All rights reversed

