From: Mel Gorman <mel@csn.ul.ie>
To: Andi Kleen <ak@muc.de>
Cc: "Mukker, Atul" <Atulm@lsil.com>, 'Steve Lord' <lord@xfs.org>,
	'Marcelo Tosatti' <marcelo.tosatti@cyclades.com>,
	'William Lee Irwin III' <wli@holomorphy.com>,
	'Linux Memory Management List' <linux-mm@kvack.org>,
	'Linux Kernel' <linux-kernel@vger.kernel.org>,
	'Grant Grundler' <grundler@parisc-linux.org>
Subject: Re: [PATCH] Avoiding fragmentation through different allocator
Date: Tue, 25 Jan 2005 16:12:31 +0000 (GMT)
Message-ID: <Pine.LNX.4.58.0501251603360.1203@skynet>
In-Reply-To: <20050125145602.GB75109@muc.de>

On Tue, 25 Jan 2005, Andi Kleen wrote:

> On Tue, Jan 25, 2005 at 09:02:34AM -0500, Mukker, Atul wrote:
> >
> > > e.g. performance on megaraid controllers (very popular
> > > because a big PC vendor ships them) was always quite bad on
> > > Linux. Up to the point that specific IO workloads run half as
> > > fast on a megaraid compared to other controllers. I heard
> > > they do work better on Windows.
> > >
> > <snip>
> > > Ideally the Linux IO patterns would look similar to the
> > > Windows IO patterns, then we could reuse all the
> > > optimizations the controller vendors did for Windows :)
> >
> > LSI would leave no stone unturned to make performance better for
> > megaraid controllers under Linux. If you have hard data comparing
> > performance with adapters from other vendors, please share it with us.
> > We would definitely strive to better it.
>
> Sorry for being vague on this. I don't have much hard data on this,
> just telling an anecdote. The issue we saw was over a year ago
> and on a machine running an IO-intensive multi-process stress test
> (I believe it was an AIM7 variant with some tweaked workfile). When the test
> was moved to a machine with a megaraid controller it ran significantly
> slower compared to the old setup with a non-RAID SCSI controller from
> a different vendor. Unfortunately, I no longer know the exact
> type/firmware revision etc. of the megaraid that showed the problem.
>

Ok, for me here, the bottom line is that decent hardware will not benefit
from help from the allocator. Worse, if the work required to provide
adjacent pages is high, it will even adversely affect throughput. I also
know that providing physically contiguous pages to userspace would involve
a fair amount of overhead, so even if we devise a system for providing
them, it would need to be a configurable option.

I will keep an eye out for a lightweight means of granting physically
contiguous pages to userspace, but I'm going to focus on general
availability of large pages for TLBs, on extending the system with a pool
of zeroed pages, and on how it can be adapted to help out the memory
hotplug folks.
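
To make the zeroed-pool idea slightly more concrete, here is a rough
sketch of the sort of structure I have in mind. This is illustrative only;
all of the identifiers below (zero_pool, zero_pool_lock,
alloc_zeroed_cached) are invented, and the refill side (a background
thread zeroing pages while the machine is idle) is left out:

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

/* A small cache of pre-zeroed pages, refilled in the background. */
static LIST_HEAD(zero_pool);
static DEFINE_SPINLOCK(zero_pool_lock);
static unsigned int zero_pool_count;

/* Hand out a pre-zeroed page, zeroing on demand if the pool is dry. */
static struct page *alloc_zeroed_cached(gfp_t gfp)
{
	struct page *page = NULL;

	spin_lock(&zero_pool_lock);
	if (!list_empty(&zero_pool)) {
		page = list_first_entry(&zero_pool, struct page, lru);
		list_del(&page->lru);
		zero_pool_count--;
	}
	spin_unlock(&zero_pool_lock);

	/* Fall back to zeroing at allocation time when the pool is empty. */
	return page ? page : alloc_page(gfp | __GFP_ZERO);
}

The interesting question is the refill policy, i.e. when the background
zeroing is allowed to run, and that is exactly the sort of thing that
needs measuring.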

The system I have in mind for contiguous pages for userspace right now is
to extend the allocator API so that prefaulting and readahead request
blocks of pages for userspace rather than a series of order-0 pages. So,
if we prefault 32 pages ahead, the allocator would have a new API call
that returns 32 physically contiguous pages. That, in combination with a
forced IOMMU, may show whether Contiguous Pages For IO is worth it or not.
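
To illustrate what that new call might look like, here is a minimal
sketch. alloc_pages_contig_bulk() is an invented name, not an existing
function, and it assumes a split_page()-style helper that turns one
high-order allocation into independently freeable order-0 pages:

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Return 'count' physically contiguous order-0 pages by carving up a
 * single higher-order allocation. Hypothetical API for illustration.
 */
static int alloc_pages_contig_bulk(gfp_t gfp, unsigned int count,
				   struct page **pages)
{
	unsigned int order = get_order(count << PAGE_SHIFT);
	struct page *base = alloc_pages(gfp, order);
	unsigned int i;

	if (!base)
		return -ENOMEM;

	/* Make each constituent page independently refcounted/freeable. */
	split_page(base, order);

	for (i = 0; i < count; i++)
		pages[i] = base + i;

	/* Give back the tail if count was not a power of two. */
	for (i = count; i < (1U << order); i++)
		__free_page(base + i);

	return 0;
}

The prefault path would then map pages[0..31] at consecutive virtual
addresses so that the physical contiguity actually lines up with the file
offsets; otherwise the controller sees the same scattered IO as before.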

This will take a while: I'll have to develop some mechanism for measuring
the effect as I go, and I only work on this two days a week.

-- 
Mel Gorman

Thread overview: 20+ messages
2005-01-25 14:02 Mukker, Atul
2005-01-25 14:17 ` Steve Lord
2005-01-25 14:27   ` Christoph Hellwig
2005-01-25 14:49     ` Andi Kleen
2005-01-25 14:56 ` Andi Kleen
2005-01-25 16:12   ` Mel Gorman [this message]
2005-01-25 18:50 ` Grant Grundler
  -- strict thread matches above, loose matches on Subject: below --
2005-01-20 10:13 Mel Gorman
2005-01-21 14:28 ` Marcelo Tosatti
2005-01-22 21:48   ` Mel Gorman
2005-01-22 21:59     ` Marcelo Tosatti
2005-01-23 13:28       ` Marcelo Tosatti
2005-01-24 13:28       ` Mel Gorman
2005-01-24 12:29         ` Marcelo Tosatti
2005-01-24 16:44           ` James Bottomley
2005-01-24 15:49             ` Marcelo Tosatti
2005-01-24 20:36               ` James Bottomley
2005-01-24 20:47             ` Steve Lord
2005-01-25  7:39               ` Andi Kleen
2005-01-24 19:55           ` Grant Grundler
