From: Glauber Costa <glommer@parallels.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: Li Zefan <lizefan@huawei.com>,
Johannes Weiner <hannes@cmpxchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
LKML <linux-kernel@vger.kernel.org>,
Cgroups <cgroups@vger.kernel.org>,
linux-mm@kvack.org
Subject: Re: [PATCH -v2] memcg: don't do cleanup manually if mem_cgroup_css_online() fails
Date: Wed, 3 Apr 2013 12:30:59 +0400 [thread overview]
Message-ID: <515BE8C3.6040205@parallels.com> (raw)
In-Reply-To: <20130403081843.GC14384@dhcp22.suse.cz>
On 04/03/2013 12:18 PM, Michal Hocko wrote:
> Dang. You are right! Glauber, is there any reason why
> memcg_kmem_mark_dead checks only KMEM_ACCOUNTED_ACTIVE rather than
> KMEM_ACCOUNTED_MASK?
>
> This all is very confusing to say the least.
Yes, it is.
In kmemcg we need to differentiate between the "active" and "activated"
states because of the way static branches are managed. The distinction
only matters during the first activation, to make sure the static
branch patching is synchronized.
From this point on, the ACTIVE flag is the one we should be looking at.
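
To make that concrete, here is a minimal sketch of the two-flag scheme
(the flag names match mm/memcontrol.c, but the activation helper below
is illustrative rather than verbatim upstream code):

enum {
	KMEM_ACCOUNTED_ACTIVE,		/* accounting enabled for this memcg */
	KMEM_ACCOUNTED_ACTIVATED,	/* static key already bumped once */
	KMEM_ACCOUNTED_DEAD,		/* memcg gone, charges still pending */
};

/* Illustrative only -- not the exact upstream helper. */
static void memcg_kmem_activate(struct mem_cgroup *memcg)
{
	set_bit(KMEM_ACCOUNTED_ACTIVE, &memcg->kmem_account_flags);
	/*
	 * ACTIVATED may be set only once per memcg: static branches
	 * patch one call site at a time, so a second
	 * static_key_slow_inc() from the same memcg would leave the
	 * jump labels unbalanced.
	 */
	if (!test_and_set_bit(KMEM_ACCOUNTED_ACTIVATED,
			      &memcg->kmem_account_flags))
		static_key_slow_inc(&memcg_kmem_enabled_key);
}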
Again, I fully agree it is complicated, but that is a property of the
static branches themselves (we tried to fix it in the static branch
code proper, without much luck, since by design they patch one call
site at a time). I tried to mitigate this by testing handcrafted error
paths and documenting the states as well as I could.
But that does not always work *that* well. Maybe we can use the results
of this discussion to document the teardown process a bit more?
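
For reference, the teardown check Michal is asking about looks roughly
like this:

static void memcg_kmem_mark_dead(struct mem_cgroup *memcg)
{
	/*
	 * After a successful first activation ACTIVE and ACTIVATED are
	 * set together, so testing ACTIVE alone should be enough here;
	 * ACTIVATED exists only to synchronize the one-time static
	 * branch patching.
	 */
	if (test_bit(KMEM_ACCOUNTED_ACTIVE, &memcg->kmem_account_flags))
		set_bit(KMEM_ACCOUNTED_DEAD, &memcg->kmem_account_flags);
}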
Thread overview: 27+ messages
2013-04-02 7:35 [PATCH] " Li Zefan
2013-04-02 8:03 ` Li Zefan
2013-04-02 8:07 ` Glauber Costa
2013-04-02 8:34 ` Li Zefan
2013-04-02 8:42 ` Glauber Costa
2013-04-02 8:43 ` Glauber Costa
2013-04-02 12:16 ` Michal Hocko
2013-04-02 12:22 ` Glauber Costa
2013-04-02 13:32 ` Michal Hocko
2013-04-02 13:36 ` Glauber Costa
2013-04-03 3:43 ` Li Zefan
2013-04-02 14:16 ` Michal Hocko
2013-04-02 14:20 ` Glauber Costa
2013-04-02 14:28 ` Michal Hocko
2013-04-02 14:33 ` Glauber Costa
2013-04-02 15:04 ` [PATCH -v2] " Michal Hocko
2013-04-03 3:49 ` Li Zefan
2013-04-03 7:43 ` Michal Hocko
2013-04-03 7:49 ` Li Zefan
2013-04-03 8:18 ` Michal Hocko
2013-04-03 8:30 ` Glauber Costa [this message]
2013-04-03 8:37 ` Li Zefan
2013-04-03 8:50 ` Michal Hocko
2013-04-03 8:53 ` [PATCH 1/2] Revert "memcg: avoid dangling reference count in creation failure." Michal Hocko
2013-04-03 8:53 ` [PATCH 2/2] memcg, kmem: clean up reference count handling on the error path Michal Hocko
2013-04-03 9:48 ` Michal Hocko
2013-04-03 8:08 ` [PATCH -v2] memcg: don't do cleanup manually if mem_cgroup_css_online() fails Glauber Costa
Reply instructions:
You may reply publicly to this message via plain-text email. Avoid
top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=515BE8C3.6040205@parallels.com \
--to=glommer@parallels.com \
--cc=cgroups@vger.kernel.org \
--cc=hannes@cmpxchg.org \
--cc=kamezawa.hiroyu@jp.fujitsu.com \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=lizefan@huawei.com \
--cc=mhocko@suse.cz \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html