linux-mm.kvack.org archive mirror
From: Mel Gorman <mel@csn.ul.ie>
To: Dave Hansen <haveblue@us.ibm.com>
Cc: linux-mm <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	clameter@sgi.com
Subject: Re: [PATCH] 2/2 Prezeroing large blocks of pages during allocation
Date: Mon, 28 Feb 2005 19:01:53 +0000 (GMT)
Message-ID: <Pine.LNX.4.58.0502281858520.29288@skynet> (raw)
In-Reply-To: <1109609180.6921.22.camel@localhost>

On Mon, 28 Feb 2005, Dave Hansen wrote:

> On Sun, 2005-02-27 at 13:43 +0000, Mel Gorman wrote:
> > +		/*
> > +		 * If this is a request for a zero page and the page was
> > +		 * not taken from the USERZERO pool, zero it all
> > +		 */
> > +		if ((flags & __GFP_ZERO) && alloctype != ALLOC_USERZERO) {
> > +			int zero_order=order;
> > +
> > +			/*
> > +			 * This is important. We are about to zero a block
> > +			 * which may be larger than we need, so we have to
> > +			 * decide whether to zero just what we need or to
> > +			 * zero the whole block and put the spare pages in
> > +			 * the USERZERO pool.
> > +			 *
> > +			 * We zero the whole block when taking from the
> > +			 * KERNNORCLM pools and otherwise zero just what
> > +			 * we need. We do not always zero everything
> > +			 * because we do not want unreclaimable pages to
> > +			 * leak into the USERRCLM and KERNRCLM pools.
> > +			 */
> > +			if (alloctype != ALLOC_USERRCLM &&
> > +			    alloctype != ALLOC_KERNRCLM) {
> > +				area = zone->free_area_lists[ALLOC_USERZERO] +
> > +					current_order;
> > +				zero_order = current_order;
> > +			}
> > +
> > +
> > +			spin_unlock_irqrestore(&zone->lock, *irq_flags);
> > +			prep_zero_page(page, zero_order, flags);
> > +			inc_zeroblock_count(zone, zero_order, flags);
> > +			spin_lock_irqsave(&zone->lock, *irq_flags);
> > +
> > +		}
> > +
> >  		return expand(zone, page, order, current_order, area);
> >  	}
> >
>
> I think it would make sense to put that in its own helper function.
> When comments get that big, they often reduce readability.  The only
> outside variable that gets modified is "area", I think.
>
> So, a static inline:
>
> 	area = my_new_function_with_the_huge_comment(zone, ..., area);
>

Will make that change in the next version. It makes perfect sense.

> BTW, what kernel does this apply against?  Is linux-2.6.11-rc4-v18 the
> same as bk18?
>

It applies on top of 2.6.11-rc4 with the latest version of the placement
policy. Admittedly, the naming of the tree is not very obvious.

-- 
Mel Gorman
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org

Thread overview: 5+ messages
2005-02-27 13:43 Mel Gorman
2005-02-28 16:46 ` Dave Hansen
2005-02-28 19:01   ` Mel Gorman [this message]
2005-03-01  3:51 ` Christoph Lameter
2005-03-07  0:35   ` Mel Gorman
