From: Mel Gorman <mel@csn.ul.ie>
To: Nick Piggin <npiggin@suse.de>
Cc: Linux Memory Management List <linux-mm@kvack.org>,
Christoph Lameter <cl@linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 3/3] page-allocator: Move pcp static fields for high and batch off-pcp and onto the zone
Date: Tue, 18 Aug 2009 13:57:35 +0100
Message-ID: <20090818125735.GC31469@csn.ul.ie>
In-Reply-To: <20090818114752.GP9962@wotan.suse.de>
On Tue, Aug 18, 2009 at 01:47:52PM +0200, Nick Piggin wrote:
> On Tue, Aug 18, 2009 at 12:16:02PM +0100, Mel Gorman wrote:
> > Having multiple lists per PCPU increased the size of the per-cpu
> > structure. Two of the fields, high and batch, do not vary within a
> > zone, so storing a copy on every CPU is redundant. This patch moves
> > those fields off the PCP and onto the zone to reduce the size of the
> > per-cpu structure.
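
As an aside for anyone skimming the thread, this is the shape of the
change as a rough standalone sketch (simplified types and invented
field names, not the actual patch):

	/* Sketch only: simplified types, invented names */
	#define MIGRATE_PCPTYPES 3	/* unmovable, reclaimable, movable */

	struct list_head { struct list_head *next, *prev; };

	/* Before: every CPU carried private copies of high and batch */
	struct per_cpu_pages_before {
		int count;		/* pages currently on the lists */
		int high;		/* drain when count exceeds this */
		int batch;		/* chunk size for buddy transfers */
		struct list_head lists[MIGRATE_PCPTYPES];
	};

	/* After: the thresholds live once per zone, shared by all CPUs */
	struct per_cpu_pages_after {
		int count;
		struct list_head lists[MIGRATE_PCPTYPES];
	};

	/* Fields gained by struct zone (names invented for the sketch) */
	struct zone_pageset_fields {
		int pageset_high;
		int pageset_batch;
	};
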
>
> Hmm... I did have some patches a long, long time ago that, among
> other things, made the lists larger for the local node only...
>
To shrink the remote-node lists, one could look at applying a fixed
divisor to the high value, or at sizing the remote lists as some
percentage of the local high.
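
A rough, untested sketch of what I mean (REMOTE_PCP_FACTOR and the
helper are invented for illustration, not taken from any existing
patch):

	/* Invented constant: how much smaller remote-node lists would be */
	#define REMOTE_PCP_FACTOR	4

	/* Derive a remote node's high watermark from the local value */
	static int remote_pcp_high(int local_high, int batch)
	{
		int high = local_high / REMOTE_PCP_FACTOR;

		/* never shrink below one batch worth of pages */
		return high < batch ? batch : high;
	}

The percentage variant has the same shape; only the divisor changes.
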
> But I guess if something like that is ever shown to be a good idea
> then we can go back to the old scheme. So yeah this seems OK.
>
Thanks.
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> > ---
> >  include/linux/mmzone.h |    9 +++++----
> >  mm/page_alloc.c        |   47 +++++++++++++++++++++++++----------------------
> >  mm/vmstat.c            |    4 ++--
> >  3 files changed, 32 insertions(+), 28 deletions(-)
> >
> > <SNIP>
--
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
Thread overview: 20+ messages
2009-08-18 11:15 [RFC PATCH 0/3] Reduce searching in the page allocator fast-path Mel Gorman
2009-08-18 11:16 ` [PATCH 1/3] page-allocator: Split per-cpu list into one-list-per-migrate-type Mel Gorman
2009-08-18 11:43 ` Nick Piggin
2009-08-18 13:10 ` Mel Gorman
2009-08-18 13:12 ` Nick Piggin
2009-08-18 22:57 ` Vincent Li
2009-08-19 8:57 ` Mel Gorman
2009-08-18 11:16 ` [PATCH 2/3] page-allocator: Maintain rolling count of pages to free from the PCP Mel Gorman
2009-08-18 11:16 ` [PATCH 3/3] page-allocator: Move pcp static fields for high and batch off-pcp and onto the zone Mel Gorman
2009-08-18 11:47 ` Nick Piggin
2009-08-18 12:57 ` Mel Gorman [this message]
2009-08-18 14:18 ` Christoph Lameter
2009-08-18 16:42 ` Mel Gorman
2009-08-18 17:56 ` Christoph Lameter
2009-08-18 20:50 ` Mel Gorman
2009-08-18 14:22 ` [RFC PATCH 0/3] Reduce searching in the page allocator fast-path Christoph Lameter
2009-08-18 16:53 ` Mel Gorman
2009-08-18 19:05 ` Christoph Lameter
2009-08-19 9:08 ` Mel Gorman
2009-08-19 11:48 ` Christoph Lameter
Reply instructions:

You may reply publicly to this message via plain-text email.
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
  git send-email \
    --in-reply-to=20090818125735.GC31469@csn.ul.ie \
    --to=mel@csn.ul.ie \
    --cc=cl@linux-foundation.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=npiggin@suse.de \
    /path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
Be sure your reply has a Subject: header at the top and a blank line
before the message body.