From: Michal Hocko <mhocko@suse.cz>
To: Glauber Costa <glommer@parallels.com>
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org,
kamezawa.hiroyu@jp.fujitsu.com,
Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH v2 5/7] May god have mercy on my soul.
Date: Fri, 18 Jan 2013 17:07:46 +0100
Message-ID: <20130118160746.GJ10701@dhcp22.suse.cz>
In-Reply-To: <1357897527-15479-6-git-send-email-glommer@parallels.com>
Please merge this into the following patch (6/7, "memcg: replace cgroup_lock with memcg specific memcg_lock").
On Fri 11-01-13 13:45:25, Glauber Costa wrote:
> Signed-off-by: Glauber Costa <glommer@parallels.com>
> ---
> mm/memcontrol.c | 16 +++++++---------
> 1 file changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index aa4e258..c024614 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2909,7 +2909,7 @@ int memcg_cache_id(struct mem_cgroup *memcg)
> * operation, because that is its main call site.
> *
> * But when we create a new cache, we can call this as well if its parent
> - * is kmem-limited. That will have to hold set_limit_mutex as well.
> + * is kmem-limited. That will have to hold cgroup_lock as well.
> */
> int memcg_update_cache_sizes(struct mem_cgroup *memcg)
> {
> @@ -2924,7 +2924,7 @@ int memcg_update_cache_sizes(struct mem_cgroup *memcg)
> * the beginning of this conditional), is no longer 0. This
> * guarantees only one process will set the following boolean
> * to true. We don't need test_and_set because we're protected
> - * by the set_limit_mutex anyway.
> + * by the cgroup_lock anyway.
> */
> memcg_kmem_set_activated(memcg);
>
> @@ -3265,9 +3265,9 @@ void kmem_cache_destroy_memcg_children(struct kmem_cache *s)
> *
> * Still, we don't want anyone else freeing memcg_caches under our
> * noses, which can happen if a new memcg comes to life. As usual,
> - * we'll take the set_limit_mutex to protect ourselves against this.
> + * we'll take the cgroup_lock to protect ourselves against this.
> */
> - mutex_lock(&set_limit_mutex);
> + cgroup_lock();
> for (i = 0; i < memcg_limited_groups_array_size; i++) {
> c = s->memcg_params->memcg_caches[i];
> if (!c)
> @@ -3290,7 +3290,7 @@ void kmem_cache_destroy_memcg_children(struct kmem_cache *s)
> cancel_work_sync(&c->memcg_params->destroy);
> kmem_cache_destroy(c);
> }
> - mutex_unlock(&set_limit_mutex);
> + cgroup_unlock();
> }
>
> struct create_work {
> @@ -4946,7 +4946,6 @@ static int memcg_update_kmem_limit(struct cgroup *cont, u64 val)
> * can also get rid of the use_hierarchy check.
> */
> cgroup_lock();
> - mutex_lock(&set_limit_mutex);
> if (!memcg->kmem_account_flags && val != RESOURCE_MAX) {
> if (cgroup_task_count(cont) || memcg_has_children(memcg)) {
> ret = -EBUSY;
> @@ -4971,7 +4970,6 @@ static int memcg_update_kmem_limit(struct cgroup *cont, u64 val)
> } else
> ret = res_counter_set_limit(&memcg->kmem, val);
> out:
> - mutex_unlock(&set_limit_mutex);
> cgroup_unlock();
>
> /*
> @@ -5029,9 +5027,9 @@ static int memcg_propagate_kmem(struct mem_cgroup *memcg)
> mem_cgroup_get(memcg);
> static_key_slow_inc(&memcg_kmem_enabled_key);
>
> - mutex_lock(&set_limit_mutex);
> + cgroup_lock();
> ret = memcg_update_cache_sizes(memcg);
> - mutex_unlock(&set_limit_mutex);
> + cgroup_unlock();
> #endif
> out:
> return ret;
> --
> 1.7.11.7
>
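Just to spell out the change for the record (a rough sketch only, not the
exact memcontrol.c code): every path that used to serialize kmem activation
and the per-memcg cache updates on set_limit_mutex now nests under
cgroup_lock(), i.e.

	cgroup_lock();		/* was: mutex_lock(&set_limit_mutex); */
	ret = memcg_update_cache_sizes(memcg);
	cgroup_unlock();	/* was: mutex_unlock(&set_limit_mutex); */

which is also why the mutex can simply be dropped from
memcg_update_kmem_limit(), where cgroup_lock() is already held.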
--
Michal Hocko
SUSE Labs