From: Nick Piggin <nickpiggin@yahoo.com.au>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: balbir@linux.vnet.ibm.com, "xemul@openvz.org" <xemul@openvz.org>,
"hugh@veritas.com" <hugh@veritas.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
menage@google.com
Subject: Re: [RFC] [PATCH 9/9] memcg: percpu page cgroup lookup cache
Date: Thu, 11 Sep 2008 21:31:34 +1000
Message-ID: <200809112131.34414.nickpiggin@yahoo.com.au>
In-Reply-To: <20080911202407.752b5731.kamezawa.hiroyu@jp.fujitsu.com>
On Thursday 11 September 2008 21:24, KAMEZAWA Hiroyuki wrote:
> Use a per-cpu cache for fast access to page_cgroup.
> This patch makes the fast path faster.
>
> Because page_cgroup is accessed when a page is allocated or freed,
> we can assume that several contiguous page_cgroups will be accessed
> soon. (Unless accesses are interleaved across NUMA nodes, but in that
> case alloc/free itself is slow anyway.)
>
> We cache a set of page_cgroup base pointers in a per-cpu area and
> use a cached pointer on a hit.
>
> TODO:
> - memory/cpu hotplug support.
How much does this help?
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>
> ---
> mm/page_cgroup.c | 47 +++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 45 insertions(+), 2 deletions(-)
>
> Index: mmtom-2.6.27-rc5+/mm/page_cgroup.c
> ===================================================================
> --- mmtom-2.6.27-rc5+.orig/mm/page_cgroup.c
> +++ mmtom-2.6.27-rc5+/mm/page_cgroup.c
> @@ -57,14 +57,26 @@ static int pcg_hashmask __read_mostly;
> #define PCG_HASHMASK (pcg_hashmask)
> #define PCG_HASHSIZE (1 << pcg_hashshift)
>
> +#define PCG_CACHE_MAX_SLOT (32)
> +#define PCG_CACHE_MASK (PCG_CACHE_MAX_SLOT - 1)
> +struct percpu_page_cgroup_cache {
> + struct {
> + unsigned long index;
> + struct page_cgroup *base;
> + } slots[PCG_CACHE_MAX_SLOT];
> +};
> +DEFINE_PER_CPU(struct percpu_page_cgroup_cache, pcg_cache);
> +
> int pcg_hashfun(unsigned long index)
> {
> return hash_long(index, pcg_hashshift);
> }
>
> -struct page_cgroup *lookup_page_cgroup(unsigned long pfn)
> +static noinline struct page_cgroup *
> +__lookup_page_cgroup(struct percpu_page_cgroup_cache *pcc, unsigned long pfn)
> +{
> unsigned long index = pfn >> ENTS_PER_CHUNK_SHIFT;
> + int s = index & PCG_CACHE_MASK;
> struct pcg_hash *ent;
> struct pcg_hash_head *head;
> struct hlist_node *node;
> @@ -77,6 +89,8 @@ struct page_cgroup *lookup_page_cgroup(u
> hlist_for_each_entry(ent, node, &head->head, node) {
> if (ent->index == index) {
> pc = ent->map + pfn;
> + pcc->slots[s].index = ent->index;
> + pcc->slots[s].base = ent->map;
> break;
> }
> }
> @@ -84,6 +98,22 @@ struct page_cgroup *lookup_page_cgroup(u
> return pc;
> }
>
> +struct page_cgroup *lookup_page_cgroup(unsigned long pfn)
> +{
> + unsigned long index = pfn >> ENTS_PER_CHUNK_SHIFT;
> + int hnum = index & PCG_CACHE_MASK;
> + struct percpu_page_cgroup_cache *pcc;
> + struct page_cgroup *ret;
> +
> + pcc = &get_cpu_var(pcg_cache);
> + if (likely(pcc->slots[hnum].index == index))
> + ret = pcc->slots[hnum].base + pfn;
> + else
> + ret = __lookup_page_cgroup(pcc, pfn);
> + put_cpu_var(pcg_cache);
> + return ret;
> +}
> +
> static void __meminit alloc_page_cgroup(int node, unsigned long index)
> {
> struct pcg_hash *ent;
> @@ -124,12 +154,23 @@ static void __meminit alloc_page_cgroup(
> return;
> }
>
> +void clear_page_cgroup_cache_pcg(int cpu)
> +{
> + struct percpu_page_cgroup_cache *pcc;
> + int i;
> +
> + pcc = &per_cpu(pcg_cache, cpu);
> + for (i = 0; i < PCG_CACHE_MAX_SLOT; i++) {
> + pcc->slots[i].index = -1;
> + pcc->slots[i].base = NULL;
> + }
> +}
>
> /* Called from mem_cgroup's initialization */
> void __init page_cgroup_init(void)
> {
> struct pcg_hash_head *head;
> - int node, i;
> + int node, cpu, i;
> unsigned long start, pfn, end, index, offset;
> long default_pcg_hash_size;
>
> @@ -174,5 +215,7 @@ void __init page_cgroup_init(void)
> }
> }
> }
> + for_each_possible_cpu(cpu)
> + clear_page_cgroup_cache_pcg(cpu);
> return;
> }
>