From: Glauber Costa <glommer@parallels.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Ying Han <yinghan@google.com>, Tejun Heo <htejun@gmail.com>,
	Li Zefan <lizefan@huawei.com>
Subject: Re: [PATCH v3 4/7] memcg: remove memcg from the reclaim iterators
Date: Wed, 13 Feb 2013 12:11:59 +0400
Message-ID: <511B4ACF.90209@parallels.com>
In-Reply-To: <20130212173741.GD25235@cmpxchg.org>

On 02/12/2013 09:37 PM, Johannes Weiner wrote:
>> > All reads from root->dead_count are atomic already, so I am not sure
>> > what you mean here. Anyway, I hope I won't make this even more confusing
>> > if I post what I have right now:
> Yes, but we are doing two reads.  Can't the memcg that we'll store in
> last_visited be offlined during this and be freed after we drop the
> rcu read lock?  If we had just one read, we would detect this
> properly.
> 
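
If I am reading you right, the window is something like the sketch
below. The field names are my guesses from the patch, so take it as
an illustration of the ordering rather than the exact code:

  /*
   * Two reads: the pointer is published before dead_count is read
   * again, so a concurrent offline can slip in between.
   */
  iter->last_visited = memcg;
  /* memcg can be offlined right here ... */
  iter->last_dead_count = atomic_read(&root->dead_count);
  /*
   * ... and the read above already includes that death, so the
   * next walk sees matching counts and dereferences a pointer
   * that may be freed once the rcu read lock is dropped.
   */

  /*
   * One read: snapshot dead_count first and reuse the snapshot
   * for the store. A death after the snapshot then shows up as a
   * mismatch on the next walk, and the cached pointer is discarded.
   */
  int dead_count = atomic_read(&root->dead_count);
  iter->last_visited = memcg;
  iter->last_dead_count = dead_count;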

I don't want to add any more confusion to an already fun discussion, but
IIUC, you are trying to avoid triggering a second round of reclaim in an
already dead memcg, right?

Can't you generalize the mechanism I use for kmemcg, where a very
similar problem exists? This is what it looks like:


  /*
   * This atomically sets a bit in the memcg. It does so
   * unconditionally, and it is (so far) okay if it is set
   * twice.
   */
  memcg_kmem_mark_dead(memcg);

  /*
   * Then, if the kmem charge is not zero, we don't actually destroy
   * the memcg. The function this lives in is always called when
   * usage reaches 0, so we are guaranteed never to miss the chance
   * to call the destruction function at least once.
   *
   * I suspect you could use a mechanism like this, or extend this
   * very same one, to prevent the second reclaim from being called
   * at all.
   */
  if (res_counter_read_u64(&memcg->kmem, RES_USAGE) != 0)
          return;

  /*
   * This is how we guarantee that the destruction function is called
   * at most once: the second caller would see the bit unset.
   */
  if (memcg_kmem_test_and_clear_dead(memcg))
          mem_cgroup_put(memcg);
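
For completeness, the two helpers are essentially just atomic bitops
on a per-memcg flags word. Here is a stand-alone sketch of the
pattern with made-up names, using C11 atomics where the kernel would
use set_bit()/test_and_clear_bit(), just to show the idea:

  #include <stdatomic.h>
  #include <stdbool.h>

  #define DEAD_BIT (1u << 0)

  struct obj {
          atomic_uint flags;      /* lifecycle bits, DEAD_BIT among them */
          atomic_uint usage;      /* outstanding charges */
  };

  /* Unconditional, and harmless if it runs twice. */
  static void mark_dead(struct obj *o)
  {
          atomic_fetch_or(&o->flags, DEAD_BIT);
  }

  /* Atomic test-and-clear: exactly one caller can see the bit set. */
  static bool test_and_clear_dead(struct obj *o)
  {
          return atomic_fetch_and(&o->flags, ~DEAD_BIT) & DEAD_BIT;
  }

  static void maybe_destroy(struct obj *o)
  {
          /* Defer while charges are outstanding; the last uncharge
           * re-enters this path, so the chance is never missed. */
          if (atomic_load(&o->usage) != 0)
                  return;

          /* Only the caller that wins the test-and-clear drops the
           * final reference, so destruction runs at most once. */
          if (test_and_clear_dead(o))
                  /* drop the final reference here */;
  }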



Thread overview: 39+ messages
2013-01-03 17:54 [PATCH v3 0/7] rework mem_cgroup iterator Michal Hocko
2013-01-03 17:54 ` [PATCH v3 1/7] memcg: synchronize per-zone iterator access by a spinlock Michal Hocko
2013-01-03 17:54 ` [PATCH v3 2/7] memcg: keep prev's css alive for the whole mem_cgroup_iter Michal Hocko
2013-01-03 17:54 ` [PATCH v3 3/7] memcg: rework mem_cgroup_iter to use cgroup iterators Michal Hocko
2013-01-03 17:54 ` [PATCH v3 4/7] memcg: remove memcg from the reclaim iterators Michal Hocko
2013-01-07  6:18   ` Kamezawa Hiroyuki
2013-02-08 19:33   ` Johannes Weiner
2013-02-11 15:16     ` Michal Hocko
2013-02-11 17:56       ` Johannes Weiner
2013-02-11 19:29         ` Michal Hocko
2013-02-11 19:58           ` Johannes Weiner
2013-02-11 21:27             ` Michal Hocko
2013-02-11 22:07               ` Michal Hocko
2013-02-11 22:39               ` Johannes Weiner
2013-02-12  9:54                 ` Michal Hocko
2013-02-12 15:10                   ` Johannes Weiner
2013-02-12 15:43                     ` Michal Hocko
2013-02-12 16:10                       ` Paul E. McKenney
2013-02-12 17:25                         ` Johannes Weiner
2013-02-12 18:31                           ` Paul E. McKenney
2013-02-12 19:53                             ` Johannes Weiner
2013-02-13  9:51                               ` Michal Hocko
2013-02-12 17:56                         ` Michal Hocko
2013-02-12 16:13                       ` Michal Hocko
2013-02-12 16:24                         ` Michal Hocko
2013-02-12 16:37                           ` Michal Hocko
2013-02-12 16:41                           ` Johannes Weiner
2013-02-12 17:12                             ` Michal Hocko
2013-02-12 17:37                               ` Johannes Weiner
2013-02-13  8:11                                 ` Glauber Costa [this message]
2013-02-13 10:38                                   ` Michal Hocko
2013-02-13 10:34                                 ` Michal Hocko
2013-02-13 12:56                                   ` Michal Hocko
2013-02-12 16:33                       ` Johannes Weiner
2013-01-03 17:54 ` [PATCH v3 5/7] memcg: simplify mem_cgroup_iter Michal Hocko
2013-01-03 17:54 ` [PATCH v3 6/7] memcg: further " Michal Hocko
2013-01-03 17:54 ` [PATCH v3 7/7] cgroup: remove css_get_next Michal Hocko
2013-01-04  3:42   ` Li Zefan
2013-01-23 12:52 ` [PATCH v3 0/7] rework mem_cgroup iterator Michal Hocko
