From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
xemul@openvz.org, "hugh@veritas.com" <hugh@veritas.com>
Subject: Re: [PATCH 5/7] radix-tree page cgroup
Date: Thu, 20 Mar 2008 13:45:13 +0900
Message-ID: <20080320134513.3e4d45f1.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <1205961066.6437.10.camel@lappy>
Thank you for the review.
On Wed, 19 Mar 2008 22:11:06 +0100
Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> > New function is
> >
> > struct page_cgroup *get_page_cgroup(struct page *page, gfp_t mask, bool allocate);
> >
> > if (allocate == true), look up and allocate new one if necessary.
> > if (allocate == false), just do look up and return NULL if not exist.
>
> I think others said as well, but we generally just write
>
> if (allocate)
>
> if (!allocate)
>
OK. I'm now separating this function into two functions:
one that only looks up, and one that looks up and allocates.
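For illustration, the split might end up roughly like this (the names and
comments below are just my sketch, not the final patch):

	/* look up only; returns NULL if no page_cgroup exists for this page */
	struct page_cgroup *lookup_page_cgroup(struct page *page);

	/* look up, allocating a new entry on demand (may sleep if the mask allows) */
	struct page_cgroup *get_page_cgroup(struct page *page, gfp_t mask);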
> > + struct page_cgroup_head *head;
> > +
> > + head = kmem_cache_alloc_node(page_cgroup_cachep, mask, nid);
> > + if (!head)
> > + return NULL;
> > +
> > + init_page_cgroup(head, pfn);
>
> Just because I'm lazy, I'll suggest the shorter:
>
> if (head)
> init_page_cgroup(head, pfn)
I'll fix.
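So the allocation path reads something like (the trailing return is my
assumption from the surrounding code):

	head = kmem_cache_alloc_node(page_cgroup_cachep, mask, nid);
	if (head)
		init_page_cgroup(head, pfn);
	return head;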
> > + struct page_cgroup_root *root;
> > + struct page_cgroup_head *head;
> > + struct page_cgroup *pc;
> > + unsigned long pfn, idx;
> > + int nid;
> > + unsigned long base_pfn, flags;
> > + int error;
>
> Would this make sense?:
>
> might_sleep_if(allocate && (gfp_mask & __GFP_WAIT));
>
seems good. I'll add it.
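Something like this near the top, then (the condition will change a bit
once the function is split and the allocate flag goes away):

	/* allocation with __GFP_WAIT may sleep; catch callers in atomic context */
	might_sleep_if(allocate && (gfp_mask & __GFP_WAIT));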
> > + base_pfn = idx << PCGRP_SHIFT;
> > +retry:
> > + error = 0;
> > + rcu_read_lock();
> > + head = radix_tree_lookup(&root->root_node, idx);
> > + rcu_read_unlock();
>
> This looks iffy, who protects head here?
>
In this patch, no routine for freeing "head" is included, so the
rcu_read_lock()/rcu_read_unlock() pair is not needed to keep it alive.
I'll remove it and re-check the whole logic around here.
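With that, the lookup reduces to a bare radix_tree_lookup() (sketch only;
whether that walk is safe against concurrent inserts is part of what I'll
re-check):

retry:
	error = 0;
	head = radix_tree_lookup(&root->root_node, idx);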
> > + for_each_online_node(nid) {
> > + if (node_state(nid, N_NORMAL_MEMORY))
> > + root = kmalloc_node(sizeof(struct page_cgroup_root),
> > + GFP_KERNEL, nid);
> > + else
> > + root = kmalloc(sizeof(struct page_cgroup_root),
> > + GFP_KERNEL);
>
> if (!node_state(nid, N_NORMAL_MEMORY))
> nid = -1;
>
> allows us to use a single kmalloc_node() statement.
>
Oh, OK. That seems good.
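So it becomes something like (just a sketch; I'd use a local for the
allocation node so the for_each_online_node() iterator itself is left
untouched):

	for_each_online_node(nid) {
		int alloc_nid = nid;

		/* no normal memory on this node: let the allocator pick one */
		if (!node_state(nid, N_NORMAL_MEMORY))
			alloc_nid = -1;

		root = kmalloc_node(sizeof(struct page_cgroup_root),
				    GFP_KERNEL, alloc_nid);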
> > + INIT_RADIX_TREE(&root->root_node, GFP_ATOMIC);
> > + spin_lock_init(&root->tree_lock);
> > + smp_wmb();
>
> unadorned barrier; we usually require a comment outlining the race, and
> a reference to the matching barrier.
>
I'll add comments.
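As a first cut, something along these lines (the exact wording, and which
read-side primitive it pairs with, depends on the final lookup path):

	INIT_RADIX_TREE(&root->root_node, GFP_ATOMIC);
	spin_lock_init(&root->tree_lock);
	/*
	 * Publish the root only after the tree and its lock are fully
	 * initialized; pairs with the matching smp_rmb() (or the dependency
	 * ordering of rcu_dereference()) on the lookup side.
	 */
	smp_wmb();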
Thanks,
-Kame
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org