From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: balbir@linux.vnet.ibm.com,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>
Subject: Re: [RFC][PATCH] memcg remove css_get/put per pages
Date: Wed, 9 Jun 2010 14:14:01 +0900 [thread overview]
Message-ID: <20100609141401.ecdad9f1.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20100609094734.cbb744aa.kamezawa.hiroyu@jp.fujitsu.com>
On Wed, 9 Jun 2010 09:47:34 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > Looks nice. Kamezawa-san, could you please confirm the source of
> > raw_spin_lock_irqsave and trylock from /proc/lock_stat?
> >
> Sure. But the result above was taken with lockdep etc. turned off
> (they increase lock overhead).
>
> But yes, the new _raw_spin_lock seems strange.
>
Here is the relevant /proc/lock_stat output.
==
------------------------------
&(&mm->page_table_lock)->rlock 20812995 [<ffffffff81124019>] handle_mm_fault+0x7a9/0x9b0
&(&mm->page_table_lock)->rlock 9 [<ffffffff81120c5b>] __pte_alloc+0x4b/0xf0
&(&mm->page_table_lock)->rlock 4 [<ffffffff8112c70d>] anon_vma_prepare+0xad/0x180
&(&mm->page_table_lock)->rlock 83395 [<ffffffff811204b4>] unmap_vmas+0x3c4/0xa60
------------------------------
&(&mm->page_table_lock)->rlock 7 [<ffffffff81120c5b>] __pte_alloc+0x4b/0xf0
&(&mm->page_table_lock)->rlock 20812987 [<ffffffff81124019>] handle_mm_fault+0x7a9/0x9b0
&(&mm->page_table_lock)->rlock 2 [<ffffffff8112c70d>] anon_vma_prepare+0xad/0x180
&(&mm->page_table_lock)->rlock 83408 [<ffffffff811204b4>] unmap_vmas+0x3c4/0xa60
&(&p->alloc_lock)->rlock: 6304532 6308276 0.14 1772.97 7098177.74 23165904 23222238 0.00 1980.76 12445023.62
------------------------
&(&p->alloc_lock)->rlock 6308277 [<ffffffff81153e17>] __mem_cgroup_try_charge+0x327/0x590
------------------------
&(&p->alloc_lock)->rlock 6308277 [<ffffffff81153e17>] __mem_cgroup_try_charge+0x327/0x590
==
So the new raw_spin_lock is task_lock(): task_lock(mm->owner) in the charge path makes
the owner task's alloc_lock cacheline ping-pong between CPUs ;(
So this is not a very good patch for multi-threaded programs, sigh...
I'll think again about how to get safe access without taking a lock.
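For reference, one lockless direction (just a sketch, not what this patch does) would be to
look up mm->owner under rcu_read_lock() and pin the group with css_tryget() instead of
taking task_lock(). The code below is illustrative only; it assumes mem_cgroup_from_task()
and css_tryget() behave as in current memcontrol.c and that mm->owner is updated in an
RCU-safe way:
==
/*
 * Sketch only, not the posted patch: find the charge target without
 * task_lock(mm->owner), so no per-task spinlock bounces between CPUs.
 * Context: mm/memcontrol.c, which already has the needed headers and
 * the struct mem_cgroup definition.
 */
static struct mem_cgroup *try_get_mem_cgroup_from_mm(struct mm_struct *mm)
{
	struct mem_cgroup *mem = NULL;
	struct task_struct *owner;

	rcu_read_lock();
	/* mm->owner may change or exit; it is only stable under RCU. */
	owner = rcu_dereference(mm->owner);
	if (owner) {
		mem = mem_cgroup_from_task(owner);
		/*
		 * Pin the css so the cgroup cannot be freed after we drop
		 * the RCU read lock; css_tryget() fails during destruction.
		 */
		if (mem && !css_tryget(&mem->css))
			mem = NULL;
	}
	rcu_read_unlock();
	return mem;
}
==
Whether this avoids the ping-pong in practice would need the same lock_stat check,
since the css refcount line itself can bounce under heavy multi-threaded charging.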
Thanks,
-Kame
Thread overview: 15+ messages
2010-06-08 3:19 KAMEZAWA Hiroyuki
2010-06-08 5:40 ` Balbir Singh
2010-06-09 0:47 ` KAMEZAWA Hiroyuki
2010-06-09 5:14 ` KAMEZAWA Hiroyuki [this message]
2010-06-08 7:31 ` Daisuke Nishimura
2010-06-09 0:54 ` KAMEZAWA Hiroyuki
2010-06-09 2:05 ` Daisuke Nishimura
2010-06-09 6:59 ` [RFC][PATCH] memcg remove css_get/put per pages v2 KAMEZAWA Hiroyuki
2010-06-10 2:34 ` Daisuke Nishimura
2010-06-10 2:49 ` KAMEZAWA Hiroyuki
2010-06-11 4:37 ` Daisuke Nishimura
2010-06-11 4:52 ` KAMEZAWA Hiroyuki
2010-06-11 4:59 ` Daisuke Nishimura
2010-06-11 6:11 ` Balbir Singh
2010-06-11 6:21 ` KAMEZAWA Hiroyuki