From: Johannes Weiner <hannes@cmpxchg.org>
To: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	stable@vger.kernel.org, Michal Hocko <mhocko@kernel.org>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm: memcontrol: fix possible memcg leak due to interrupted reclaim
Date: Tue, 15 Dec 2015 09:53:16 -0500
Message-ID: <20151215145316.GB20355@cmpxchg.org>
In-Reply-To: <1450182697-11049-1-git-send-email-vdavydov@virtuozzo.com>

On Tue, Dec 15, 2015 at 03:31:37PM +0300, Vladimir Davydov wrote:
> Memory cgroup reclaim can be interrupted with mem_cgroup_iter_break()
> once enough pages have been reclaimed. In that case, in contrast to a
> full round-trip over a cgroup sub-tree, the current position stored in
> the target cgroup's mem_cgroup_reclaim_iter does not get invalidated,
> and so is left holding a reference to the last scanned cgroup. If the
> target cgroup is never scanned again (we might have just reclaimed the
> last page, or all processes might exit and free their memory
> voluntarily), that cgroup is leaked, because nobody is left to put the
> reference held by the iterator.
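> 
> For illustration, here is a simplified sketch of the break path (not
> necessarily verbatim): it only drops the reference that the caller
> holds on @prev; the reference taken when the position was cached in
> iter->position is not touched:
> 
>     void mem_cgroup_iter_break(struct mem_cgroup *root,
>                                struct mem_cgroup *prev)
>     {
>             if (!root)
>                     root = root_mem_cgroup;
>             if (prev && prev != root)
>                     css_put(&prev->css); /* caller's ref only */
>     }
> 
> Unless a later reclaim pass overwrites iter->position (dropping the
> old reference in the process), the cached reference is never put.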
> 
> The problem is easy to reproduce by running the following command
> sequence in a loop:
> 
>     mkdir /sys/fs/cgroup/memory/test
>     echo 100M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
>     echo $$ > /sys/fs/cgroup/memory/test/cgroup.procs
>     memhog 150M
>     echo $$ > /sys/fs/cgroup/memory/cgroup.procs
>     rmdir test
> 
> The cgroups generated by it will never get freed.
> 
> This patch fixes the issue by making mem_cgroup_iter avoid taking a
> reference to the current position. In order not to hit a use-after-free
> bug while running reclaim in parallel with cgroup deletion, we make use
> of the ->css_released cgroup callback to clear references to the dying
> cgroup in all reclaim iterators that might refer to it. This callback
> is called right before the rcu work that will free the css is
> scheduled, so if we access iter->position from an rcu read section, we
> can be sure it won't go away from under us.
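> 
> To make the ordering explicit, the lifecycle this relies on looks
> roughly like this (a simplified sketch; the names on the free path
> are illustrative, not exact):
> 
>     css_put(&memcg->css)           /* last reference dropped */
>       -> ->css_released()          /* invalidate_reclaim_iterators()
>                                       clears every iter->position
>                                       pointing at the dying memcg */
>       -> rcu work scheduled        /* actual freeing is deferred */
>       -> RCU grace period elapses  /* rcu read sections drain */
>       -> css (and memcg) freed
> 
> Thus a reader that loads iter->position under rcu_read_lock() sees
> either NULL or a pointer whose memory stays valid at least until the
> matching rcu_read_unlock(), and css_tryget() fails safely if the
> reference count has already reached zero.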
> 
> Fixes: 5ac8fb31ad2e ("mm: memcontrol: convert reclaim iterator to simple css refcounting")
> Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
> Acked-by: Michal Hocko <mhocko@kernel.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: <stable@vger.kernel.org> # 3.19+

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Full quote follows for cgroups@vger.kernel.org.

> ---
> Changes in v2:
> 
> As pointed out by Johannes, clearing iter->position when interrupting
> memcg reclaim, as was done in v1, would result in unfairly high
> pressure being exerted on a parent cgroup in comparison to its
> children. So in v2 we take a different approach: instead of pinning
> the cgroup in the iterator, we clear references to the dying cgroup in
> all iterators that might refer to it, right before it is scheduled to
> be freed.
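> 
> Note that the clearing is done with cmpxchg() rather than a plain
> store (see invalidate_reclaim_iterators() below), so an iterator
> whose position has already moved past the dying cgroup is left
> alone:
> 
>     /* Clear the slot only if it still points at the dying memcg. */
>     cmpxchg(&iter->position, dead_memcg, NULL);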
> 
>  mm/memcontrol.c | 53 ++++++++++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 42 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 87af26a24491..f42352369cbc 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -859,14 +859,20 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
>  		if (prev && reclaim->generation != iter->generation)
>  			goto out_unlock;
>  
> -		do {
> +		while (1) {
>  			pos = READ_ONCE(iter->position);
> +			if (!pos || css_tryget(&pos->css))
> +				break;
>  			/*
> -			 * A racing update may change the position and
> -			 * put the last reference, hence css_tryget(),
> -			 * or retry to see the updated position.
> +			 * css reference reached zero, so iter->position will
> +			 * be cleared by ->css_released. However, we should not
> +			 * rely on this happening soon, because ->css_released
> +			 * is called from a work queue, and by busy-waiting we
> +			 * might block it. So we clear iter->position right
> +			 * away.
>  			 */
> -		} while (pos && !css_tryget(&pos->css));
> +			cmpxchg(&iter->position, pos, NULL);
> +		}
>  	}
>  
>  	if (pos)
> @@ -912,12 +918,7 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
>  	}
>  
>  	if (reclaim) {
> -		if (cmpxchg(&iter->position, pos, memcg) == pos) {
> -			if (memcg)
> -				css_get(&memcg->css);
> -			if (pos)
> -				css_put(&pos->css);
> -		}
> +		cmpxchg(&iter->position, pos, memcg);
>  
>  		/*
>  		 * pairs with css_tryget when dereferencing iter->position
> @@ -955,6 +956,28 @@ void mem_cgroup_iter_break(struct mem_cgroup *root,
>  		css_put(&prev->css);
>  }
>  
> +static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
> +{
> +	struct mem_cgroup *memcg = dead_memcg;
> +	struct mem_cgroup_reclaim_iter *iter;
> +	struct mem_cgroup_per_zone *mz;
> +	int nid, zid;
> +	int i;
> +
> +	while ((memcg = parent_mem_cgroup(memcg))) {
> +		for_each_node(nid) {
> +			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
> +				mz = &memcg->nodeinfo[nid]->zoneinfo[zid];
> +				for (i = 0; i <= DEF_PRIORITY; i++) {
> +					iter = &mz->iter[i];
> +					cmpxchg(&iter->position,
> +						dead_memcg, NULL);
> +				}
> +			}
> +		}
> +	}
> +}
> +
>  /*
>   * Iteration constructs for visiting all cgroups (under a tree).  If
>   * loops are exited prematurely (break), mem_cgroup_iter_break() must
> @@ -4375,6 +4398,13 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
>  	wb_memcg_offline(memcg);
>  }
>  
> +static void mem_cgroup_css_released(struct cgroup_subsys_state *css)
> +{
> +	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
> +
> +	invalidate_reclaim_iterators(memcg);
> +}
> +
>  static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
>  {
>  	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
> @@ -5229,6 +5259,7 @@ struct cgroup_subsys memory_cgrp_subsys = {
>  	.css_alloc = mem_cgroup_css_alloc,
>  	.css_online = mem_cgroup_css_online,
>  	.css_offline = mem_cgroup_css_offline,
> +	.css_released = mem_cgroup_css_released,
>  	.css_free = mem_cgroup_css_free,
>  	.css_reset = mem_cgroup_css_reset,
>  	.can_attach = mem_cgroup_can_attach,
> -- 
> 2.1.4
> 
