From: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Sha Zhengju <handai.szj@gmail.com>
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, mhocko@suse.cz,
akpm@linux-foundation.org, hughd@google.com, gthelen@google.com,
Sha Zhengju <handai.szj@taobao.com>
Subject: Re: [PATCH V2 2/3] memcg: alter mem_cgroup_{update,inc,dec}_page_stat() args to memcg pointer
Date: Tue, 14 May 2013 09:15:29 +0900 [thread overview]
Message-ID: <51918221.6090402@jp.fujitsu.com> (raw)
In-Reply-To: <1368421524-4937-1-git-send-email-handai.szj@taobao.com>
(2013/05/13 14:05), Sha Zhengju wrote:
> From: Sha Zhengju <handai.szj@taobao.com>
>
> Change the first argument of mem_cgroup_{update,inc,dec}_page_stat() from
> 'struct page *' to 'struct mem_cgroup *', and move the PageCgroupUsed(pc)
> check out of mem_cgroup_update_page_stat(). This is a preparation patch for
> the following simplification of the memcg page stat lock.
>
> Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
Why do we need to look up the page_cgroup and memcg and check the
pc->flags bit even when memcg is unused? I think it's too slow.
Sorry, NAK.
> ---
> include/linux/memcontrol.h | 14 +++++++-------
> mm/memcontrol.c | 9 ++-------
> mm/rmap.c | 28 +++++++++++++++++++++++++---
> 3 files changed, 34 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index d6183f0..7af3ed3 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -163,20 +163,20 @@ static inline void mem_cgroup_end_update_page_stat(struct page *page,
> rcu_read_unlock();
> }
>
> -void mem_cgroup_update_page_stat(struct page *page,
> +void mem_cgroup_update_page_stat(struct mem_cgroup *memcg,
> enum mem_cgroup_page_stat_item idx,
> int val);
>
> -static inline void mem_cgroup_inc_page_stat(struct page *page,
> +static inline void mem_cgroup_inc_page_stat(struct mem_cgroup *memcg,
> enum mem_cgroup_page_stat_item idx)
> {
> - mem_cgroup_update_page_stat(page, idx, 1);
> + mem_cgroup_update_page_stat(memcg, idx, 1);
> }
>
> -static inline void mem_cgroup_dec_page_stat(struct page *page,
> +static inline void mem_cgroup_dec_page_stat(struct mem_cgroup *memcg,
> enum mem_cgroup_page_stat_item idx)
> {
> - mem_cgroup_update_page_stat(page, idx, -1);
> + mem_cgroup_update_page_stat(memcg, idx, -1);
> }
>
> unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
> @@ -347,12 +347,12 @@ static inline void mem_cgroup_end_update_page_stat(struct page *page,
> {
> }
>
> -static inline void mem_cgroup_inc_page_stat(struct page *page,
> +static inline void mem_cgroup_inc_page_stat(struct mem_cgroup *memcg,
> enum mem_cgroup_page_stat_item idx)
> {
> }
>
> -static inline void mem_cgroup_dec_page_stat(struct page *page,
> +static inline void mem_cgroup_dec_page_stat(struct mem_cgroup *memcg,
> enum mem_cgroup_page_stat_item idx)
> {
> }
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b31513e..a394ba4 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2367,18 +2367,13 @@ void __mem_cgroup_end_update_page_stat(struct page *page, unsigned long *flags)
> move_unlock_mem_cgroup(pc->mem_cgroup, flags);
> }
>
> -void mem_cgroup_update_page_stat(struct page *page,
> +void mem_cgroup_update_page_stat(struct mem_cgroup *memcg,
> enum mem_cgroup_page_stat_item idx, int val)
> {
> - struct mem_cgroup *memcg;
> - struct page_cgroup *pc = lookup_page_cgroup(page);
> - unsigned long uninitialized_var(flags);
> -
> if (mem_cgroup_disabled())
> return;
>
> - memcg = pc->mem_cgroup;
> - if (unlikely(!memcg || !PageCgroupUsed(pc)))
> + if (unlikely(!memcg))
> return;
>
> switch (idx) {
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 6280da8..a03c2a9 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1109,12 +1109,24 @@ void page_add_file_rmap(struct page *page)
> {
> bool locked;
> unsigned long flags;
> + struct page_cgroup *pc;
> + struct mem_cgroup *memcg = NULL;
>
> mem_cgroup_begin_update_page_stat(page, &locked, &flags);
> + pc = lookup_page_cgroup(page);
> +
> + rcu_read_lock();
> + memcg = pc->mem_cgroup;
> + if (unlikely(!PageCgroupUsed(pc)))
> + memcg = NULL;
> +
> if (atomic_inc_and_test(&page->_mapcount)) {
> __inc_zone_page_state(page, NR_FILE_MAPPED);
> - mem_cgroup_inc_page_stat(page, MEMCG_NR_FILE_MAPPED);
> + if (memcg)
> + mem_cgroup_inc_page_stat(memcg, MEMCG_NR_FILE_MAPPED);
> }
> + rcu_read_unlock();
> +
> mem_cgroup_end_update_page_stat(page, &locked, &flags);
> }
>
> @@ -1129,14 +1141,22 @@ void page_remove_rmap(struct page *page)
> bool anon = PageAnon(page);
> bool locked;
> unsigned long flags;
> + struct page_cgroup *pc;
> + struct mem_cgroup *memcg = NULL;
>
> /*
> * The anon case has no mem_cgroup page_stat to update; but may
> * uncharge_page() below, where the lock ordering can deadlock if
> * we hold the lock against page_stat move: so avoid it on anon.
> */
> - if (!anon)
> + if (!anon) {
> mem_cgroup_begin_update_page_stat(page, &locked, &flags);
> + pc = lookup_page_cgroup(page);
> + rcu_read_lock();
> + memcg = pc->mem_cgroup;
> + if (unlikely(!PageCgroupUsed(pc)))
> + memcg = NULL;
> + }
>
> /* page still mapped by someone else? */
> if (!atomic_add_negative(-1, &page->_mapcount))
> @@ -1157,7 +1177,9 @@ void page_remove_rmap(struct page *page)
> NR_ANON_TRANSPARENT_HUGEPAGES);
> } else {
> __dec_zone_page_state(page, NR_FILE_MAPPED);
> - mem_cgroup_dec_page_stat(page, MEMCG_NR_FILE_MAPPED);
> + if (memcg)
> + mem_cgroup_dec_page_stat(memcg, MEMCG_NR_FILE_MAPPED);
> + rcu_read_unlock();
> mem_cgroup_end_update_page_stat(page, &locked, &flags);
> }
> if (unlikely(PageMlocked(page)))
>