From: Hugh Dickins <hugh@veritas.com>
To: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
	Nick Piggin <npiggin@suse.de>,
	Linux Memory Management List <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Lin Ming <ming.m.lin@intel.com>,
	Christoph Lameter <cl@linux-foundation.org>
Subject: Re: [patch] SLQB slab allocator
Date: Tue, 3 Feb 2009 12:18:28 +0000 (GMT)
Message-ID: <Pine.LNX.4.64.0902031150110.5290@blonde.anvils>
In-Reply-To: <1233646145.2604.137.camel@ymzhang>

On Tue, 3 Feb 2009, Zhang, Yanmin wrote:
> On Mon, 2009-02-02 at 11:00 +0200, Pekka Enberg wrote:
> > On Mon, 2009-02-02 at 11:38 +0800, Zhang, Yanmin wrote:
> > > Can we add a check of the free memory page count/percentage in
> > > allocate_slab(), so that we can bypass the first try of alloc_pages()
> > > when memory is hungry?
> > 
> > If the check isn't too expensive, I don't see any reason not to. How
> > would you go about checking how many free pages there are, though? Is
> > there something in the page allocator that we can use for this?
> 
> We can use nr_free_pages(), totalram_pages and hugetlb_total_pages().
> The patch below is a first attempt. I tested it with hackbench and tbench
> on my stoakley (2 quad-core processors) and tigerton (4 quad-core
> processors) machines. There is almost no regression.
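
For concreteness, the shape of the check being proposed is roughly
this -- a sketch only: the 1/16 threshold and its exact placement
inside SLUB's allocate_slab() are illustrative, not taken from the
actual patch:

	/* sketch against mm/slub.c, circa 2.6.29 */
	static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags,
					  int node)
	{
		struct page *page = NULL;
		unsigned long managed = totalram_pages - hugetlb_total_pages();

		flags |= s->allocflags;

		/*
		 * Skip the opportunistic high-order attempt when free
		 * memory is below ~1/16 of non-hugetlb RAM (threshold
		 * is illustrative).
		 */
		if (nr_free_pages() >= managed / 16)
			page = alloc_slab_page(flags | __GFP_NOWARN |
					       __GFP_NORETRY, node, s->oo);
		if (!page)
			/* fall back to this cache's minimum order */
			page = alloc_slab_page(flags, node, s->min);

		return page;
	}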

May I repeat what I said yesterday?  Certainly I'm oversimplifying,
but if I'm plain wrong, please correct me.

Having lots of free memory is a temporary accident following process
exit (when lots of anonymous memory has suddenly been freed), before
it has been put to use for page cache.  The kernel tries to run with
a certain amount of free memory in reserve, and the rest of memory
put to (potentially) good use.  I don't think we have the number
you're looking for there, though perhaps some approximation could
be devised (or I'm looking at the problem the wrong way round).
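
If one were devised, the page allocator's own per-zone watermarks are
probably the nearest starting point; a sketch only -- the choice of
pages_low, and the zeroed classzone_idx/alloc_flags arguments, are
assumptions:

	#include <linux/mmzone.h>

	/*
	 * Sketch: call memory "hungry" if any populated zone is already
	 * below its low watermark for the order we want.
	 */
	static int memory_looks_tight(int order)
	{
		struct zone *zone;

		for_each_zone(zone) {
			if (!populated_zone(zone))
				continue;
			if (!zone_watermark_ok(zone, order,
					       zone->pages_low, 0, 0))
				return 1;
		}
		return 0;
	}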

Perhaps feedback from vmscan.c, on how much it's having to write back,
would provide a good clue.  There are plenty of stats maintained there.
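
For example (a sketch only; both the use of NR_WRITEBACK as the signal
and the 1% threshold are invented here):

	#include <linux/mm.h>
	#include <linux/vmstat.h>

	/*
	 * Sketch: treat a large number of pages currently under
	 * writeback as a sign that reclaim is working hard.
	 */
	static int reclaim_under_pressure(void)
	{
		return global_page_state(NR_WRITEBACK) > totalram_pages / 100;
	}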

> 
> Besides this patch, I have another patch that tries to reduce the
> calculation of "totalram_pages - hugetlb_total_pages()", but it touches
> many files, so I'm just posting the first, simple patch here for review.
> 
> 
> Hugh,
> 
> Would you like to test it on your machines?

Indeed I shall, starting in a few hours when I've finished with trying
the script I promised yesterday to send you.  And I won't be at all
surprised if your patch eliminates my worst cases, because I don't
expect to have any significant amount of free memory during my testing,
and my swap testing suffers from slub's thirst for higher orders.

But I don't believe the kind of check you're making is appropriate,
and I do believe that when you try more extensive testing, you'll find
regressions in other tests which were relying on the higher orders.
If all of your testing happens to have lots of free memory around,
I'm surprised; but perhaps I'm naive about how things actually work,
especially on the larger machines.

Or maybe your tests are relying crucially on the slabs allocated at
system startup, when of course there should be plenty of free memory
around.

By the way, when I went to remind myself of what nr_free_pages()
actually does, my grep immediately hit this remark in mm/mmap.c:
		 * nr_free_pages() is very expensive on large systems,
I hope that's just a stale comment from before it was converted
to global_page_state(NR_FREE_PAGES)!
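
If memory serves, it is indeed stale: since the ZVC conversion,
nr_free_pages() is just a cheap read of a single global counter,
roughly this (paraphrased from include/linux/swap.h and
include/linux/vmstat.h of that era):

	/* include/linux/swap.h */
	#define nr_free_pages() global_page_state(NR_FREE_PAGES)

	/* include/linux/vmstat.h */
	static inline unsigned long global_page_state(enum zone_stat_item item)
	{
		long x = atomic_long_read(&vm_stat[item]);
	#ifdef CONFIG_SMP
		if (x < 0)
			x = 0;	/* per-cpu folding can leave it transiently negative */
	#endif
		return x;
	}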

Hugh
