From: Christoph Lameter <cl@gentwo.org>
To: "Alex,Shi" <alex.shi@intel.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
"Chen, Tim C" <tim.c.chen@intel.com>,
"Huang, Ying" <ying.huang@intel.com>"Huang,
Ying" <ying.huang@intel.com>, Andi Kleen <ak@linux.intel.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH] slub Discard slab page only when node partials > minimum setting
Date: Thu, 29 Sep 2011 09:32:29 -0500 (CDT)
Message-ID: <alpine.DEB.2.00.1109290927590.9848@router.home>
In-Reply-To: <1317290032.4188.1223.camel@debian>
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1243 bytes --]
On Thu, 29 Sep 2011, Alex,Shi wrote:
> > Is it really worth it? The higher the value, the more memory can
> > potentially be stuck in the per cpu partial pages.
>
> It is hard to find the best balance. :)
Well then let's err on the side of smaller memory use for now.
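To put rough numbers on that concern (every constant below is my own
assumption for illustration, not a measurement from this thread), the
memory that can sit idle in per cpu partial slabs scales with the cpu
count and the cpu_partial cap:

/* Hypothetical worst-case estimate of memory pinned in per cpu
 * partial slabs; all constants are assumptions. */
#include <stdio.h>

int main(void)
{
	unsigned long cpus = 64;	/* assumed machine size */
	unsigned long cpu_partial = 30;	/* assumed per cpu object cap */
	unsigned long obj_size = 256;	/* assumed object size in bytes */

	/* Free objects parked on per cpu partial lists are memory the
	 * allocator holds but the rest of the system cannot use. */
	printf("worst case ~%lu KiB pinned per cache\n",
	       cpus * cpu_partial * obj_size / 1024);
	return 0;
}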
> I tested aim9/netperf; both are said to be related to memory
> allocation, but I didn't find a performance change with/without PCP.
> It seems only hackbench is sensitive to this. As to aim9, whether with
> our own configuration or with Mel Gorman's aim9 configuration from his
> mmtests, there is no clear performance change for PCP slub.
AIM9 tests are usually single-threaded, so I would not expect any
differences. Try AIM7? And concurrent netperfs?
The PCP patch helps only if there is node lock contention, meaning
simultaneous allocations/frees from multiple processors on the same
cache.
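For reference, here is a sketch of the kind of load that produces that
contention (a hypothetical test module, not from this thread; the cache
name, object size, thread count and batch size are all assumptions):

/*
 * Hypothetical stress module: several kthreads allocate and free from
 * one kmem_cache so that multiple processors churn the same cache's
 * partial lists, which is the case the PCP patch targets.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

#define NR_WORKERS	4	/* assumed thread count */
#define BATCH		128	/* assumed batch size */

static struct kmem_cache *stress_cache;
static struct task_struct *workers[NR_WORKERS];

static int stress_fn(void *unused)
{
	void *objs[BATCH];
	int i;

	while (!kthread_should_stop()) {
		/* Allocate a batch, then free it: this churns partial
		 * slabs and falls back to the node list_lock whenever
		 * the per cpu structures cannot satisfy the request. */
		for (i = 0; i < BATCH; i++)
			objs[i] = kmem_cache_alloc(stress_cache, GFP_KERNEL);
		for (i = 0; i < BATCH; i++)
			if (objs[i])
				kmem_cache_free(stress_cache, objs[i]);
		cond_resched();
	}
	return 0;
}

static int __init stress_init(void)
{
	int i;

	/* "pcp_stress" and the 256 byte object size are arbitrary. */
	stress_cache = kmem_cache_create("pcp_stress", 256, 0, 0, NULL);
	if (!stress_cache)
		return -ENOMEM;
	for (i = 0; i < NR_WORKERS; i++)
		workers[i] = kthread_run(stress_fn, NULL, "pcp-stress/%d", i);
	return 0;
}

static void __exit stress_exit(void)
{
	int i;

	for (i = 0; i < NR_WORKERS; i++)
		if (!IS_ERR_OR_NULL(workers[i]))
			kthread_stop(workers[i]);
	kmem_cache_destroy(stress_cache);
}

module_init(stress_init);
module_exit(stress_exit);
MODULE_LICENSE("GPL");

hackbench generates a similar pattern through its message-passing
churn, which is presumably why it is the one benchmark that responds.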
> Checking the kernel function call graph via perf record/perf report,
> the slab functions are only heavily used in the hackbench benchmark.
Then the question arises whether it is worthwhile merging if it only
affects this benchmark.
> Above is what I did this week for PCP.
>
> BTW, I will take a one-week holiday starting tomorrow. E-mail access
> will be slow.
Have a nice holiday.
[-- Attachment #2: Type: TEXT/X-PATCH, Size: 3745 bytes --]
diff --git a/mm/slub.c b/mm/slub.c
index 492beab..372f219 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1613,7 +1613,7 @@ static void *get_partial_node(struct kmem_cache *s,
 	spin_lock(&n->list_lock);
 	list_for_each_entry_safe(page, page2, &n->partial, lru) {
 		void *t = acquire_slab(s, n, page, object == NULL);
-		int available;
+		int available = 1;
 
 		if (!t)
 			continue;
@@ -1623,12 +1623,14 @@ static void *get_partial_node(struct kmem_cache *s,
 			c->page = page;
 			c->node = page_to_nid(page);
 			stat(s, ALLOC_FROM_PARTIAL);
 			object = t;
-			available = page->objects - page->inuse;
 		} else {
 			page->freelist = t;
 			available = put_cpu_partial(s, page, 0);
+			if (!available)
+				add_partial(n, page, 0);
+
 		}
-		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
+		if (kmem_cache_debug(s) || !available)
 			break;
 	}
@@ -2017,17 +2019,10 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 
 		if (oldpage) {
 			pobjects = oldpage->pobjects;
 			pages = oldpage->pages;
-			if (drain && pobjects > s->cpu_partial) {
-				unsigned long flags;
-				/*
-				 * partial array is full. Move the existing
-				 * set to the per node partial list.
-				 */
-				local_irq_save(flags);
-				unfreeze_partials(s);
-				local_irq_restore(flags);
+			if (pobjects > s->cpu_partial) {
 				pobjects = 0;
-				pages = 0;
+				page->frozen = 0;
+				break;
 			}
 		}
@@ -2039,7 +2034,10 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		page->pobjects = pobjects;
 		page->next = oldpage;
 
 	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
-	stat(s, CPU_PARTIAL_FREE);
+
+	if (pobjects)
+		stat(s, CPU_PARTIAL_FREE);
+
 	return pobjects;
 }
@@ -2472,6 +2470,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		was_frozen = new.frozen;
 		new.inuse--;
 		if ((!new.inuse || !prior) && !was_frozen && !n) {
+
 			if (!kmem_cache_debug(s) && !prior)
 
 				/*
@@ -2482,7 +2481,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 				new.frozen = 1;
 
 			else { /* Needs to be taken off a list */
-	                        n = get_node(s, page_to_nid(page));
+				n = get_node(s, page_to_nid(page));
 				/*
 				 * Speculatively acquire the list_lock.
 				 * If the cmpxchg does not succeed then we may
@@ -2492,8 +2491,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 				 * other processors updating the list of slabs.
 				 */
 				spin_lock_irqsave(&n->list_lock, flags);
-
 			}
+
 		}
 		inuse = new.inuse;
 
@@ -2503,23 +2502,23 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		"__slab_free"));
 
 	if (likely(!n)) {
-
-		/*
-		 * If we just froze the page then put it onto the
-		 * per cpu partial list.
-		 */
 		if (new.frozen && !was_frozen)
-			put_cpu_partial(s, page, 1);
+			if (!put_cpu_partial(s, page, 1)) {
+				n = get_node(s, page_to_nid(page));
+				spin_lock_irqsave(&n->list_lock, flags);
+				goto get_lock;
+			}
 
 		/*
 		 * The list lock was not taken therefore no list
 		 * activity can be necessary.
 		 */
-               if (was_frozen)
-                       stat(s, FREE_FROZEN);
-               return;
-       }
+		if (was_frozen)
+			stat(s, FREE_FROZEN);
+		return;
+	}
+get_lock:
 
 	/*
 	 * was_frozen may have been set after we acquired the list_lock in
 	 * an earlier loop. So we need to check it here again.
@@ -2536,7 +2535,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 	 */
 	if (unlikely(!prior)) {
 		remove_full(s, page);
-		add_partial(n, page, 0);
+		add_partial(n, page, 1);
 		stat(s, FREE_ADD_PARTIAL);
 	}
 }
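Reading the patch as a whole: put_cpu_partial() now reports the pooled
object count and returns 0 once the pool exceeds s->cpu_partial, and
the callers react to a 0 return by putting the page back on the node
partial list and taking the list_lock path. A simplified userspace
model of that contract (my paraphrase with stand-in types, not the
kernel code):

#include <stdio.h>
#include <stddef.h>

/* Stand-ins for the kernel structures, for illustration only. */
struct page { int free_objects; struct page *next; };

static struct page *cpu_partial_list;	/* models s->cpu_slab->partial */
static int cpu_partial_cap = 8;		/* models s->cpu_partial */

/* Models the patched put_cpu_partial(): returns the pooled object
 * count, or 0 when the pool is already over its cap and the page is
 * refused, in which case the caller must add_partial() it back to
 * the node list. */
static int put_cpu_partial(struct page *page)
{
	int pobjects = 0;
	struct page *p;

	for (p = cpu_partial_list; p; p = p->next)
		pobjects += p->free_objects;

	if (pobjects > cpu_partial_cap)
		return 0;	/* refused: caller re-adds to node list */

	page->next = cpu_partial_list;
	cpu_partial_list = page;
	return pobjects + page->free_objects;
}

int main(void)
{
	struct page pages[4] = { { 3 }, { 3 }, { 4 }, { 2 } };
	int i;

	/* The fourth page is refused once the pool holds > 8 objects. */
	for (i = 0; i < 4; i++)
		printf("page %d: put_cpu_partial() -> %d\n",
		       i, put_cpu_partial(&pages[i]));
	return 0;
}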
Thread overview: 32+ messages
[not found] <1315188460.31737.5.camel@debian>
[not found] ` <alpine.DEB.2.00.1109061914440.18646@router.home>
2011-09-07 1:03 ` Alex,Shi
2011-09-07 2:26 ` [PATCH] slub: code optimze in get_partial_node() Alex,Shi
2011-09-07 2:45 ` [PATCH 2/2] slub: continue to seek slab in node partial if met a null page Alex,Shi
2011-09-07 15:01 ` Christoph Lameter
2011-09-08 8:38 ` Alex,Shi
2011-09-08 18:41 ` Christoph Lameter
2011-09-07 2:56 ` [rfc ] slub: unfreeze full page if it's in node partial Alex,Shi
2011-09-07 3:06 ` Alex,Shi
2011-09-07 14:56 ` [PATCH] slub: code optimze in get_partial_node() Christoph Lameter
[not found] ` <alpine.DEB.2.00.1109062022100.20474@router.home>
[not found] ` <4E671E5C.7010405@cs.helsinki.fi>
[not found] ` <6E3BC7F7C9A4BF4286DD4C043110F30B5D00DA333C@shsmsx502.ccr.corp.intel.com>
[not found] ` <alpine.DEB.2.00.1109071003240.9406@router.home>
2011-09-08 0:43 ` [PATCH] slub Discard slab page only when node partials > minimum setting Alex,Shi
2011-09-08 1:34 ` Shaohua Li
2011-09-08 2:24 ` Alex,Shi
2011-09-15 5:40 ` Pekka Enberg
2011-09-15 6:03 ` Alex,Shi
2011-09-08 18:37 ` Christoph Lameter
2011-09-09 8:45 ` Alex,Shi
2011-09-11 11:41 ` Christoph Lameter
2011-09-13 8:29 ` Alex,Shi
2011-09-13 15:04 ` Christoph Lameter
2011-09-15 1:32 ` Alex,Shi
2011-09-15 1:51 ` Christoph Lameter
2011-09-15 2:00 ` Alex,Shi
[not found] ` <1316765880.4188.34.camel@debian>
[not found] ` <alpine.DEB.2.00.1109231500580.15559@router.home>
2011-09-29 9:53 ` Alex,Shi
2011-09-29 14:32 ` Christoph Lameter [this message]
2011-10-02 12:47 ` Shi, Alex
2011-10-03 15:21 ` Christoph Lameter
2011-10-09 6:28 ` Alex,Shi
2011-10-10 17:12 ` Christoph Lameter
2011-09-14 15:38 ` Christoph Lameter
2011-09-15 5:48 ` Pekka Enberg
2011-09-15 6:16 ` Alex,Shi
2011-09-07 3:14 ` [PATCH] slub: correct comments error for per cpu partial Alex,Shi