From: Christoph Lameter <cl@linux.com>
To: Alex Shi <alex.shi@intel.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Pekka Enberg <penberg@cs.helsinki.fi>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [rfc PATCH]slub: per cpu partial statistics change
Date: Mon, 6 Feb 2012 09:02:23 -0600 (CST)	[thread overview]
Message-ID: <alpine.DEB.2.00.1202060858510.393@router.home> (raw)
In-Reply-To: <4F2C824E.8080501@intel.com>

On Sat, 4 Feb 2012, Alex Shi wrote:

> On 02/03/2012 11:27 PM, Christoph Lameter wrote:
>
> > On Fri, 3 Feb 2012, Alex,Shi wrote:
> >
> >> This patch splits cpu_partial_free into two parts: cpu_partial_node, the number of
> >> PCP refills from the node partial list; and cpu_partial_free (keeping the old name),
> >> the number of PCP refills in the slab_free slow path. A new statistic,
> >> 'release_cpu_partial', is added to count PCP releases. This information is useful
> >> when tuning the PCP.
> >
> > Releasing? The code where you inserted the new statistics counts the pages
> > put on the cpu partial list when refilling from the node partial list.
>
>
> Oops, are we talking about the same base kernel, Linus' tree?  :)
> Here the releasing code is only called in the slow free path, when the PCP is
> full, not when the PCP is refilled from the node partial list.

Well, the term "releasing" is unfortunate. Per cpu partial pages can migrate
to and from the per node partial list, and they can become per cpu slabs
under allocation.
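
[For context, a minimal sketch of the counters under discussion, assuming the
usual CONFIG_SLUB_STATS machinery of that era (enum stat_item in slub_def.h
plus the per cpu stat() helper in mm/slub.c). The CPU_PARTIAL_NODE and
RELEASE_CPU_PARTIAL names below are only renderings of the events proposed in
the patch description, not code taken from the RFC itself.]

	enum stat_item {
		/* ... existing SLUB events ... */
		CPU_PARTIAL_ALLOC,	/* cpu slab refilled from the cpu partial list */
		CPU_PARTIAL_FREE,	/* page frozen onto the cpu partial list in __slab_free() */
		CPU_PARTIAL_NODE,	/* cpu partial list refilled from the node partial list (proposed) */
		RELEASE_CPU_PARTIAL,	/* cpu partial pages moved back to the node list (proposed) */
		NR_SLUB_STAT_ITEMS
	};

	static inline void stat(const struct kmem_cache *s, enum stat_item si)
	{
	#ifdef CONFIG_SLUB_STATS
		/* per cpu counter, exposed via /sys/kernel/slab/<cache>/ */
		__this_cpu_inc(s->cpu_slab->stat[si]);
	#endif
	}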

> >> @@ -2465,9 +2466,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> >>  		 * If we just froze the page then put it onto the
> >>  		 * per cpu partial list.
> >>  		 */
> >> -		if (new.frozen && !was_frozen)
> >> +		if (new.frozen && !was_frozen) {
> >>  			put_cpu_partial(s, page, 1);
> >> -
> >> +			stat(s, CPU_PARTIAL_FREE);
> >
> > Here the cpu partial list is filled with a partial page created from a fully
> > allocated slab (which therefore was not on any list before).
>
>
> Yes, but the counting is not new here. It was just moved out of
> put_cpu_partial().

OK, but then you also added different accounting in put_cpu_partial().
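
[To make the disagreement concrete, here is a rough sketch, based on the
put_cpu_partial() of that kernel era rather than on the exact RFC diff, of
where the proposed accounting would land. The RELEASE_CPU_PARTIAL counter is
hypothetical, standing in for the proposed 'release_cpu_partial' event; the
refill-from-node event would instead be counted at the get_partial_node()
call site.]

	static int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
	{
		struct page *oldpage;
		int pages, pobjects;

		do {
			pages = 0;
			pobjects = 0;
			oldpage = this_cpu_read(s->cpu_slab->partial);

			if (oldpage) {
				pobjects = oldpage->pobjects;
				pages = oldpage->pages;
				if (drain && pobjects > s->cpu_partial) {
					unsigned long flags;
					/*
					 * The cpu partial list is full: move the whole
					 * set back to the per node partial list. This is
					 * the "release" case the proposed
					 * release_cpu_partial counter would track, and it
					 * is only reached from the slab_free slow path.
					 */
					local_irq_save(flags);
					unfreeze_partials(s);
					local_irq_restore(flags);
					stat(s, RELEASE_CPU_PARTIAL);	/* hypothetical counter */
					pobjects = 0;
					pages = 0;
				}
			}

			/* Link the newly frozen page onto the cpu partial list. */
			pages++;
			pobjects += page->objects - page->inuse;

			page->pages = pages;
			page->pobjects = pobjects;
			page->next = oldpage;
		} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);

		/*
		 * With CPU_PARTIAL_FREE counted in __slab_free() (as in the hunk quoted
		 * above), this function no longer bumps it unconditionally; the
		 * refill-from-node path would count CPU_PARTIAL_NODE at its caller.
		 */
		return pobjects;
	}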

