From: Minchan Kim <minchan.kim@gmail.com>
To: Shaohua Li <shaohua.li@intel.com>
Cc: Jens Axboe <jaxboe@fusionio.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"mgorman@suse.de" <mgorman@suse.de>,
	linux-mm <linux-mm@kvack.org>,
	lkml <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH]vmscan: add block plug for page reclaim
Date: Wed, 20 Jul 2011 15:30:05 +0900
Message-ID: <CAEwNFnD3iCMBpZK95Ks+Z7DYbrzbZbSTLf3t6WXDQdeHrE6bLQ@mail.gmail.com>
In-Reply-To: <1311142253.15392.361.camel@sli10-conroe>

On Wed, Jul 20, 2011 at 3:10 PM, Shaohua Li <shaohua.li@intel.com> wrote:
> On Wed, 2011-07-20 at 13:53 +0800, Minchan Kim wrote:
>> On Wed, Jul 20, 2011 at 11:53 AM, Shaohua Li <shaohua.li@intel.com> wrote:
>> > A per-task block plug can reduce block queue lock contention and increase
>> > request merging. Currently page reclaim doesn't use it. I originally thought
>> > page reclaim doesn't need it, because the kswapd thread count is limited and
>> > file cache writeback is mostly done by the flusher threads.
>> > When I tested a workload with heavy swap on a 4-node machine, every CPU was
>> > doing direct page reclaim and swap. This causes block queue lock contention.
>> > In my test, without the patch below, the CPU utilization is about 2% ~ 7%.
>> > With the patch, the CPU utilization is about 1% ~ 3%. Disk throughput isn't
>> > changed.
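
(The "per-task block plug" here is the blk_start_plug()/blk_finish_plug()
pair from <linux/blkdev.h>. The following is only a minimal sketch of that
pattern, not the actual patch; submit_pages_for_io() is a hypothetical
stand-in for the pageout/swap writeout a reclaiming task performs.)

#include <linux/blkdev.h>

/*
 * Sketch only: wrap a task's I/O submission loop in a per-task plug so
 * the requests it issues are held on a per-task list and merged before
 * they are handed to the request queue.
 */
static void reclaim_writeout_sketch(struct list_head *page_list)
{
	struct blk_plug plug;

	blk_start_plug(&plug);		/* start batching this task's I/O */

	/* hypothetical stand-in for the pageout/swap writeout loop */
	submit_pages_for_io(page_list);

	blk_finish_plug(&plug);		/* submit the batched requests */
}

(While the plug is held, requests queued by this task are collected per
task and flushed together when the plug is finished, which is what cuts
queue lock traffic and helps merging when many CPUs reclaim concurrently,
as described above.)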
>>
>> Why doesn't it enhance through?
> throughput? The disk isn't that fast. We can already make it run at full

Yes. Sorry for the typo.

> speed; the CPU isn't the bottleneck here.

But you are trying to optimize CPU usage, so your experiment doesn't
demonstrate the benefit well.

>
>> Does that mean merging is rare?
> Merging still happens even without my patch, but it may not be able to
> build the largest possible requests under concurrent I/O.
>
>> > This should improve normal kswapd writeback and file cache writeback too
>> > (by increasing request merging, for example), but the effect might not be
>> > as obvious, as I explained above.
>>
>> A CPU utilization improvement on a 4-node machine with heavy swap?
>> I don't think that is a common situation.
>>
>> And I don't want to add new stack usage if it doesn't have a benefit.
>> As you know, the direct reclaim path is prone to stack overflow.
>> These days, Mel, Dave and Christoph are trying to remove the write path from
>> reclaim to reduce stack usage and improve write performance.
> It will use a little stack, yes. When I said the benefit isn't so
> obvious, that doesn't mean it has no benefit. For example, if kswapd and
> other threads write to the same disk, this can still reduce lock contention
> and increase request merging. Part of the reason I didn't see an obvious
> effect for file cache is that my disk is slow.

If it begins swapping, I think the performance would be less important.
But your patch is so simple that it would be mergeable (maybe Andrew
would merge it regardless of my comment), even though the impact is small
in your experiment.

I suggest you test it with a fast disk like an SSD and show the benefit to
us clearly. (I think you Intel guys have good SSDs, apparently :D)
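
(As a rough way to quantify the merging in such a test, one could compare
the merge counters in /proc/diskstats before and after the run. The small
helper below is hypothetical, not part of the patch or thread; it prints
the "reads merged" and "writes merged" fields for one device.)

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char line[512], name[64];
	unsigned long long rd_ios, rd_merges, rd_sec, rd_ms, wr_ios, wr_merges;
	unsigned int major, minor;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <device>\n", argv[0]);
		return 1;
	}

	f = fopen("/proc/diskstats", "r");
	if (!f) {
		perror("/proc/diskstats");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		/* major minor name reads reads_merged sectors ms writes writes_merged ... */
		if (sscanf(line, " %u %u %63s %llu %llu %llu %llu %llu %llu",
			   &major, &minor, name, &rd_ios, &rd_merges,
			   &rd_sec, &rd_ms, &wr_ios, &wr_merges) == 9 &&
		    strcmp(name, argv[1]) == 0) {
			printf("%s: reads merged %llu, writes merged %llu\n",
			       name, rd_merges, wr_merges);
			fclose(f);
			return 0;
		}
	}
	fclose(f);
	fprintf(stderr, "%s not found in /proc/diskstats\n", argv[1]);
	return 1;
}

(Sampling these counters before and after the swap-heavy workload, with and
without the patch, would show whether plugging actually produces larger
merged requests; iostat -x exposes the same counters as rrqm/s and wrqm/s.)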

-- 
Kind regards,
Minchan Kim

