From: Mark Petersen <mpete_06@hotmail.com>
To: vgoyal@redhat.com
Cc: bugme-daemon@bugzilla.kernel.org, axboe@kernel.dk,
linux-mm@kvack.org, linux-scsi@vger.kernel.org,
akpm@linux-foundation.org
Subject: RE: [Bugme-new] [Bug 41552] New: Performance of writing and reading from multiple drives decreases by 40% when going from Linux Kernel 2.6.36.4 to 2.6.37 (and beyond)
Date: Mon, 22 Aug 2011 15:32:38 -0500 [thread overview]
Message-ID: <BLU165-W344B2B1CD9DC44A9DABA5DFF2F0@phx.gbl> (raw)
In-Reply-To: <20110822194854.GA15087@redhat.com>
We were using the default CFQ scheduler; I will change it to deadline and see what happens. Also, we are not using a file system to perform the writes: we send SCSI commands directly to the devices. Nor are we doing anything special to the disks with a device mapper. We simply write to each drive on its own thread, one sector at a time.
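To make the "direct SCSI" part concrete, here is a minimal sketch of the kind of write each worker thread issues. This is illustrative rather than our actual code; the device path, the starting LBA, and the 512-byte sector size are hypothetical. Each thread opens its drive and sends a WRITE(10) for a single block through the SG_IO ioctl.

#include <fcntl.h>
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device; in practice each thread opens its own drive. */
	int fd = open("/dev/sdb", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	unsigned char data[512];		/* assumes 512-byte logical blocks */
	memset(data, 0xAA, sizeof(data));

	unsigned int lba = 0;			/* hypothetical starting LBA */
	unsigned char cdb[10] = {
		0x2A, 0,			/* WRITE(10) opcode */
		(lba >> 24) & 0xFF, (lba >> 16) & 0xFF,
		(lba >> 8) & 0xFF, lba & 0xFF,	/* LBA, big-endian */
		0, 0, 1,			/* transfer length: 1 block */
		0
	};
	unsigned char sense[32];

	struct sg_io_hdr io;
	memset(&io, 0, sizeof(io));
	io.interface_id = 'S';
	io.cmd_len = sizeof(cdb);
	io.cmdp = cdb;
	io.dxfer_direction = SG_DXFER_TO_DEV;
	io.dxfer_len = sizeof(data);
	io.dxferp = data;
	io.mx_sb_len = sizeof(sense);
	io.sbp = sense;
	io.timeout = 20000;			/* milliseconds */

	if (ioctl(fd, SG_IO, &io) < 0 || io.status != 0)
		fprintf(stderr, "WRITE(10) failed\n");

	close(fd);
	return 0;
}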
I will attempt to get the trace and will add it if I can.
Thanks,
Mark
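P.S. For the scheduler change: as I understand it, the per-device scheduler is selected by writing to /sys/block/<dev>/queue/scheduler (e.g. writing "deadline"). Below is a minimal sketch of how our tool could do that programmatically; the device name "sdb" is hypothetical, and the write would be repeated for each drive under test.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device name; needs root privileges. */
	const char *path = "/sys/block/sdb/queue/scheduler";

	int fd = open(path, O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Selects the deadline elevator for this device only. */
	if (write(fd, "deadline", strlen("deadline")) < 0) {
		perror("write");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}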
> Date: Mon, 22 Aug 2011 15:48:54 -0400
> From: vgoyal@redhat.com
> To: mpete_06@hotmail.com
> CC: bugme-daemon@bugzilla.kernel.org; axboe@kernel.dk; linux-mm@kvack.org; linux-scsi@vger.kernel.org; akpm@linux-foundation.org
> Subject: Re: [Bugme-new] [Bug 41552] New: Performance of writing and reading from multiple drives decreases by 40% when going from Linux Kernel 2.6.36.4 to 2.6.37 (and beyond)
>
> On Mon, Aug 22, 2011 at 12:24:43PM -0700, Andrew Morton wrote:
> >
> > (switched to email. Please respond via emailed reply-to-all, not via the
> > bugzilla web interface).
> >
> > On Mon, 22 Aug 2011 15:20:41 GMT
> > bugzilla-daemon@bugzilla.kernel.org wrote:
> >
> > > https://bugzilla.kernel.org/show_bug.cgi?id=41552
> > >
> > > Summary: Performance of writing and reading from multiple
> > > drives decreases by 40% when going from Linux Kernel
> > > 2.6.36.4 to 2.6.37 (and beyond)
> > > Product: IO/Storage
> > > Version: 2.5
> > > Kernel Version: 2.6.37
> > > Platform: All
> > > OS/Version: Linux
> > > Tree: Mainline
> > > Status: NEW
> > > Severity: normal
> > > Priority: P1
> > > Component: SCSI
> > > AssignedTo: linux-scsi@vger.kernel.org
> > > ReportedBy: mpete_06@hotmail.com
> > > Regression: No
> > >
> > >
> > > We have an application that writes to and reads from every sector on a
> > > drive. The application can perform these tasks on multiple drives at the
> > > same time. It is designed to run on top of the Linux kernel, which we
> > > periodically update so that we can get the latest device drivers. When
> > > performing the last update from 2.6.33.2 to 2.6.37, we found that the
> > > performance of a set of drives decreased by some 40% (it took 3 hours and
> > > 11 minutes to write and read from 5 drives on 2.6.37 versus 2 hours and 12
> > > minutes on 2.6.33.3). I was able to determine that the issue was
> > > introduced in the 2.6.37 kernel, since running the same test with the
> > > 2.6.36.4 kernel gave the better performance. After seeing that I/O
> > > throttling was introduced in the 2.6.37 kernel, I naturally suspected it.
> > > However, by default, all the throttling was turned off (I attached the
> > > actual .config that was used to build the kernel). I then tried turning
> > > the throttling on and setting it to a high number to see what would
> > > happen. When I did that, I was able to reduce the time from 3 hours and 11
> > > minutes to 2 hours and 50 minutes. Something in that area changed that is
> > > impacting performance on multiple drives. When we do this same test with
> > > only one drive, the performance is identical between the two kernels. This
> > > issue still occurs on kernel 3.0.2.
> > >
> >
> > Are you able to determine whether this regression is due to slower
> > reading, to slower writing or to both?
>
> Mark,
>
> As your initial comment says that you see the 40% regression even when the
> block throttling infrastructure is not enabled, I think it is not related to
> throttling: blk_throtl_bio() is compiled out to a no-op stub when
> BLK_DEV_THROTTLING=n.
>
> What IO scheduler are you using? Can you try switching the IO scheduler to
> deadline and see if the regression is still there? I am trying to figure out
> whether it has anything to do with the IO scheduler.
>
> What file system are you using, and with what options? Are you using device
> mapper to create some special configuration on top of the multiple disks?
>
> Also, can you take a trace (blktrace) of one of the disks for 30 seconds,
> both on a kernel without the regression and on one with it, and upload it
> somewhere? Staring at it might give some clues.
>
> Thanks
> Vivek