From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: balbir@linux.vnet.ibm.com
Cc: Greg Thelen <gthelen@google.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
m-ikeda@ds.jp.nec.com,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [RFC][PATCH 7/7][memcg] use spin lock instead of bit_spin_lock in page_cgroup
Date: Tue, 3 Aug 2010 08:46:38 +0900
Message-ID: <20100803084638.f95f55ed.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20100802180911.GZ3863@balbir.in.ibm.com>
On Mon, 2 Aug 2010 23:39:11 +0530
Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> * Greg Thelen <gthelen@google.com> [2010-07-27 23:16:54]:
>
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> writes:
> >
> > > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > >
> > > This patch replaces page_cgroup's bit_spinlock with a spinlock. In
> > > general, spinlock has a better implementation than bit_spin_lock, and
> > > we should use it when we have room for it. On 64-bit arches, struct
> > > page_cgroup has 4 spare bytes of padding. Let's use them.
> > >
> > > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > > --
> > > Index: mmotm-0719/include/linux/page_cgroup.h
> > > ===================================================================
> > > --- mmotm-0719.orig/include/linux/page_cgroup.h
> > > +++ mmotm-0719/include/linux/page_cgroup.h
> > > @@ -10,8 +10,14 @@
> > > * All page cgroups are allocated at boot or memory hotplug event,
> > > * then the page cgroup for pfn always exists.
> > > */
> > > +#ifdef CONFIG_64BIT
> > > +#define PCG_HAS_SPINLOCK
> > > +#endif
> > > struct page_cgroup {
> > > unsigned long flags;
> > > +#ifdef PCG_HAS_SPINLOCK
> > > + spinlock_t lock;
> > > +#endif
> > > unsigned short mem_cgroup; /* ID of assigned memory cgroup */
> > > unsigned short blk_cgroup; /* Not Used..but will be. */
> > > struct page *page;
> > > @@ -90,6 +96,16 @@ static inline enum zone_type page_cgroup
> > > return page_zonenum(pc->page);
> > > }
> > >
> > > +#ifdef PCG_HAS_SPINLOCK
> > > +static inline void lock_page_cgroup(struct page_cgroup *pc)
> > > +{
> > > + spin_lock(&pc->lock);
> > > +}
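(For reference: on configurations without PCG_HAS_SPINLOCK, the existing
helpers presumably stay as they are today, taking a bit lock in pc->flags,
so only the 64-bit path changes behaviour:

	static inline void lock_page_cgroup(struct page_cgroup *pc)
	{
		/* PCG_LOCK is the bit number of the lock bit in pc->flags */
		bit_spin_lock(PCG_LOCK, &pc->flags);
	}

	static inline void unlock_page_cgroup(struct page_cgroup *pc)
	{
		bit_spin_unlock(PCG_LOCK, &pc->flags);
	}
)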
> >
> > This is a minor issue, but this patch breaks the usage of PageCgroupLocked().
> > For example, this line from __mem_cgroup_move_account() causes a panic:
> > VM_BUG_ON(!PageCgroupLocked(pc));
> >
> > I assume that this patch should also delete the following:
> > - PCG_LOCK definition from page_cgroup.h
> > - TESTPCGFLAG(Locked, LOCK) from page_cgroup.h
> > - PCGF_LOCK from memcontrol.c
> >
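For context: PageCgroupLocked() is generated by the TESTPCGFLAG macro in
page_cgroup.h and simply tests the PCG_LOCK bit in pc->flags:

	#define TESTPCGFLAG(uname, lname)			\
	static inline int PageCgroup##uname(struct page_cgroup *pc)	\
		{ return test_bit(PCG_##lname, &pc->flags); }

A plain spin_lock(&pc->lock) never sets that bit, so on the spinlock path
the test always fails and the VM_BUG_ON above fires.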
>
>
> Good catch! But from my understanding of the code, we use spinlock_t
> only on 64-bit systems, so we still need the PCG* definitions and
> TESTPCGFLAG for the 32-bit path.
>
The latest patch set has the proper calls.
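One way to keep the checks working on both configurations is to make the
locked-test follow the same #ifdef as the lock itself. A minimal sketch of
the idea (page_cgroup_is_locked is a hypothetical name here, not
necessarily the identifier the series uses):

	#ifdef PCG_HAS_SPINLOCK
	static inline int page_cgroup_is_locked(struct page_cgroup *pc)
	{
		/* 64-bit path: query the spinlock directly */
		return spin_is_locked(&pc->lock);
	}
	#else
	static inline int page_cgroup_is_locked(struct page_cgroup *pc)
	{
		/* 32-bit path: the PCG_LOCK bit is still the lock */
		return test_bit(PCG_LOCK, &pc->flags);
	}
	#endif

VM_BUG_ON sites can then use this one helper everywhere.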
-Kame
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org