linux-mm.kvack.org archive mirror
From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Rik van Riel <riel@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm <linux-mm@kvack.org>
Subject: RE: Followup: [PATCH -mm] make swapin readahead skip over holes
Date: Tue, 17 Apr 2012 08:20:58 -0700 (PDT)	[thread overview]
Message-ID: <f81dcf86-fb34-4e39-923b-3fd1862e60c6@default> (raw)
In-Reply-To: <4F8C7D59.1000402@redhat.com>

> From: Rik van Riel [mailto:riel@redhat.com]
> Subject: Re: Followup: [PATCH -mm] make swapin readahead skip over holes
> 
> On 04/16/2012 02:34 PM, Dan Magenheimer wrote:
> > Hi Rik --
> >
> > For values of N=24 and N=28, your patch made the workload
> > run 4-9% faster.  For N=16 and N=20, it was 5-10%
> > slower.  And for N=36 and N=40, it was 30-40% slower!
> >
> > Is this expected?  Since the swap "disk" is a partition
> > on the one active drive, maybe the advantage is lost due
> > to contention?
> 
> There are several things going on here:
> 
> 1) you are running a workload that thrashes
> 
> 2) the speed at which data is swapped in is increased
>     with this patch
> 
> 3) with only 1GB memory, the inactive anon list is
>     the same size as the active anon list
> 
> 4) the above points combined mean that less of the
>     working set could be in memory at once
> 
> One solution may be to decrease the swap cluster for
> small systems, when they are thrashing.
> 
> On the other hand, for most systems swap is very much
> a special circumstance, and you want to focus on quickly
> moving excess stuff into swap, and moving it back into
> memory when needed.

Hmmm... as I look at this patch more, I think I get a
picture of what's going on and I'm still concerned.
Please correct me if I am misunderstanding:

What the patch does is increase the average size of
a "cluster" of sequential pages brought in per "read"
from the swap device.  As a result, more pages are brought
back into memory "speculatively", because it is presumably
cheaper to bring in extra pages per disk seek even if that
lowers the "swapcache hit rate".
In effect, you've done the equivalent of increasing the
default swap cluster size (on average).
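To make that tradeoff concrete, here is a toy simulation of my own
(not code from the patch): a FIFO "swapcache" of fixed size, where
every miss reads in an aligned cluster of slots.  The FIFO eviction,
aligned clusters, and all sizes are simplifying assumptions.

```python
import random

def disk_reads(accesses, cluster, cache_capacity=256):
    """Count disk reads for a stream of swap-slot accesses, assuming
    each miss reads the whole cluster-aligned group of slots into a
    FIFO swapcache holding cache_capacity pages."""
    cache = []                     # FIFO of cached slot numbers
    reads = 0
    for slot in accesses:
        if slot in cache:
            continue               # swapcache hit: no I/O
        reads += 1                 # one seek + one cluster transfer
        base = slot - slot % cluster
        for s in range(base, base + cluster):
            if s not in cache:
                cache.append(s)    # speculative readahead
        while len(cache) > cache_capacity:
            cache.pop(0)           # evict oldest page
    return reads

N = 4096
sequential = list(range(N))
rng = random.Random(0)
rand = [rng.randrange(N) for _ in range(N)]

for c in (1, 8, 32):
    print(f"cluster={c:2d}  seq reads={disk_reads(sequential, c):4d}  "
          f"rand reads={disk_reads(rand, c):4d}  "
          f"rand pages moved={disk_reads(rand, c) * c}")
```

For the sequential stream, seeks drop in proportion to the cluster
size; for the random stream, the miss count barely changes while the
pages transferred (and the cache pollution) grow with the cluster.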

If the above is wrong, please cut here and ignore
the following. :-)  But in case it is right (or
close enough), let me continue...

In other words, you are presuming both a "swap workload"
that is more sequential than random, for which this patch
improves performance, and a "swap device" for which the
cost of a seek is high enough to outweigh the cost of
filling the swap cache with pages that will never be used.
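A back-of-envelope cost model shows where that break-even point sits.
All of the numbers below (seek and transfer times, hit rates) are
illustrative guesses of mine, not measurements:

```python
# Fetching a cluster of C pages costs one seek plus C page transfers,
# and only hit_rate * C of the fetched pages are ever used.
def ms_per_useful_page(cluster, hit_rate, seek_ms, transfer_ms=0.1):
    return (seek_ms + cluster * transfer_ms) / (cluster * hit_rate)

# Rotating disk (~8 ms seek): clustering wins big when the workload
# is sequential (hit_rate ~ 1.0)...
print(ms_per_useful_page(8, 1.0, seek_ms=8.0))    # clustered, sequential
print(ms_per_useful_page(1, 1.0, seek_ms=8.0))    # unclustered
# ...but loses slightly when it is random (say 1 useful page in 8):
print(ms_per_useful_page(8, 0.125, seek_ms=8.0))
# On an SSD-like device (~0.05 ms "seek"), the random-workload penalty
# of clustering is far larger in relative terms:
print(ms_per_useful_page(8, 0.125, seek_ms=0.05))
print(ms_per_useful_page(1, 1.0, seek_ms=0.05))
```

Under these assumed numbers the clustered random case is only a modest
loss on a rotating disk, but several times worse on the SSD, because
there is no expensive seek left for the readahead to amortize.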

While it is easy to write a simple test/benchmark that
swaps a lot (we probably all have similar test code that
writes data into a huge bigger-than-RAM array and then
reads it back), such a test is usually sequential, so one
would assume most swap testing is done with a
sequential-favoring workload.  The kernbench workload
apparently exercises swap much more randomly, and your
patch makes it run slower at low and high levels of
swapping, while faster at moderate levels.

I also suspect (without proof) that the patch will
result in lower performance on non-rotating devices, such
as SSDs.

(Sure, one can change the swap cluster size to 1, but how
many users or even sysadmins know such a tunable
exists... so the default is important.)
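For reference, the tunable in question is vm.page-cluster, which holds
the base-2 log of the number of pages brought in per swapin readahead.
A config sketch (paths as on typical distros; changing it needs root):

```shell
# /proc/sys/vm/page-cluster stores log2(pages per swap readahead):
#   3 (the default) -> 2^3 = 8 pages per cluster
#   0               -> 2^0 = 1 page, i.e. readahead effectively off
cat /proc/sys/vm/page-cluster                     # show current setting
sysctl -w vm.page-cluster=0                       # single-page swapin
echo 'vm.page-cluster = 0' >> /etc/sysctl.conf    # persist across reboots
```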

I'm no I/O expert, but I suspect if one of the Linux
I/O developers proposed a patch that unilaterally made
all sequential I/O faster and all random I/O slower,
it would get torn to pieces.

I'm certainly not trying to tear your patch to pieces,
just trying to evaluate it.  Hope that's OK.

Thanks,
Dan


Thread overview: 4+ messages
2012-04-16 18:34 Dan Magenheimer
2012-04-16 20:13 ` Rik van Riel
2012-04-17 15:20   ` Dan Magenheimer [this message]
2012-04-17 19:26     ` Rik van Riel
