From: Rohit Seth <rohit.seth@intel.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Andrew Morton <akpm@osdl.org>,
	torvalds@osdl.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Christoph Lameter <christoph@lameter.com>
Subject: Re: [PATCH]: Free pages from local pcp lists under tight memory conditions
Date: Wed, 23 Nov 2005 11:41:40 -0800
Message-ID: <1132774900.25086.49.camel@akash.sc.intel.com>
In-Reply-To: <Pine.LNX.4.58.0511231754020.7045@skynet>

On Wed, 2005-11-23 at 18:06 +0000, Mel Gorman wrote:
> On Wed, 23 Nov 2005, Rohit Seth wrote:
> 
> >
> I doubt you gain a whole lot by releasing them in batches. There is no way
> to determine if freeing a few will result in contiguous blocks or not, and
> the overhead of being cautious will likely exceed the cost of simply
> refilling them on the next order-0 allocation.

It depends.  If most of the higher-order allocations are only order 1
(and maybe order 2), then it is possible that we gain by freeing in
batches.
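
For what it's worth, a rough sketch of the batched variant I have in
mind (assuming the 2.6.14 per_cpu_pageset layout and free_pages_bulk();
the step size "todo" is a made-up knob, not from any posted patch):

/*
 * Sketch only: free at most "todo" pages from each of this CPU's
 * hot/cold lists instead of purging them completely.
 */
static void free_local_pcp_batch(unsigned int cpu, int todo)
{
	struct zone *zone;
	unsigned long flags;
	int i;

	for_each_zone(zone) {
		struct per_cpu_pageset *pset = zone_pcp(zone, cpu);

		for (i = 0; i < ARRAY_SIZE(pset->pcp); i++) {
			struct per_cpu_pages *pcp = &pset->pcp[i];

			local_irq_save(flags);
			pcp->count -= free_pages_bulk(zone,
						min(pcp->count, todo),
						&pcp->list, 0);
			local_irq_restore(flags);
		}
	}
}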

> Your worst case is where
> the buddies you need are in different per-cpu caches.
> 

That is why we need another patch that tries to allocate physically
contiguous pages in each per_cpu_pagelist.  Actually, this patch was in
Andrew's tree for some time (2.6.14) before a couple of corner cases
turned up where order-1 allocations were failing.
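
To illustrate the idea only (this is not the patch that was in -mm, and
it glosses over exactly the per-page details and corner cases that got
the real one dropped): refill a per-cpu list from one higher-order block
so that the cached pages are physically contiguous, and fall back to the
normal order-0 refill when that fails.

/* Illustration only, modelled loosely on the 2.6.14 rmqueue_bulk(). */
static int rmqueue_bulk_contig(struct zone *zone, struct per_cpu_pages *pcp)
{
	struct page *page;
	int i;

	spin_lock(&zone->lock);
	page = __rmqueue(zone, 2);	/* one 4-page block */
	spin_unlock(&zone->lock);
	if (!page)
		return 0;		/* caller falls back to order-0 refill */

	for (i = 0; i < (1 << 2); i++) {
		list_add_tail(&page[i].lru, &pcp->list);
		pcp->count++;
	}
	return 1 << 2;
}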

> As it's easy to refill a per-cpu cache, it would be easier, clearer and
> probably faster to just purge the per-cpu cache and have it refilled on
> the next order-0 allocation. The release-in-batch approach would only be
> worthwhile if you expect an order-1 allocation to be very rare.
> 

Well, my only fear is if this shunting happens too often...

> In 005_drainpercpu.patch from the last version of the anti-defrag patches,
> I used smp_call_function() and it did not seem to slow down the system.
> Certainly, by the time it was called, the system was already low on
> memory and thrashing a bit, so it just wasn't noticeable.
> 

I agree that at this point in allocation, speed probably does not matter
too much.  I definitely want to first see, for simple workloads, how much
this extra logic helps (and how deep we have to go into deallocations).
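
For reference, the drain-everything alternative would look roughly like
this (in the spirit of 005_drainpercpu.patch, not the patch itself;
__drain_pages() is the existing 2.6.14 helper):

/* Each CPU empties its own lists; smp_call_function() asks the rest. */
static void drain_pcp_local(void *unused)
{
	unsigned long flags;

	local_irq_save(flags);
	__drain_pages(smp_processor_id());
	local_irq_restore(flags);
}

static void drain_all_pcp_lists(void)
{
	drain_pcp_local(NULL);
	smp_call_function(drain_pcp_local, NULL, 0, 1);
}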

> > 2- Do we drain the whole pcp on remote processors or again follow the
> > stepped approach (but may be with a steeper slope).
> >
> 
> I would say do the same on the remote case as you do locally to keep
> things consistent.
> 

Well, I think in the bigger picture these allocations/deallocations will
balance out automatically.
 
> >
> > > We need to verify that this patch actually does something useful.
> > >
> > >
> > I'm working on this.  Will let you know later today if I can come up with
> > some workload that easily hits this additional logic.
> >
> 
> I found it hard to generate reliable workloads which hit these sorts of
> situations, although a fork-heavy workload with 8k stacks will put pressure
> on order-1 allocations. You can artificially force high-order allocations
> using vmregress by doing something like this;

Need something more benign/simple that kicks this logic in.
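
Something as dumb as a fork-and-exit loop might do it on a kernel built
with 8k stacks (illustration only, nothing I have actually run against
this patch):

/*
 * Each short-lived child forces an order-1 allocation for its 8k
 * kernel stack, which keeps steady pressure on order-1 allocations.
 */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	long i;

	for (i = 0; i < 100000; i++) {
		pid_t pid = fork();

		if (pid == 0)
			_exit(0);		/* child exits immediately */
		if (pid > 0)
			waitpid(pid, NULL, 0);	/* parent reaps */
	}
	return 0;
}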

-rohit

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

