From: Andrew Morton <akpm@linux-foundation.org>
To: Greg Thelen <gthelen@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Cgroups <cgroups@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memcg: make mem_cgroup_read_stat() unsigned
Date: Tue, 22 Sep 2015 21:03:46 -0700
Message-ID: <20150922210346.749204fb.akpm@linux-foundation.org>
In-Reply-To: <xr93bncum0ey.fsf@gthelen.mtv.corp.google.com>
On Tue, 22 Sep 2015 17:42:13 -0700 Greg Thelen <gthelen@google.com> wrote:
> Andrew Morton wrote:
>
> > On Tue, 22 Sep 2015 15:16:32 -0700 Greg Thelen <gthelen@google.com> wrote:
> >
> >> mem_cgroup_read_stat() returns a page count by summing per cpu page
> >> counters. The summing is racy wrt. updates, so a transient negative sum
> >> is possible. Callers don't want negative values:
> >> - mem_cgroup_wb_stats() doesn't want negative nr_dirty or nr_writeback.
> >> - oom reports and memory.stat shouldn't show confusing negative usage.
> >> - tree_usage() already avoids negatives.
> >>
> >> Avoid returning negative page counts from mem_cgroup_read_stat() and
> >> convert it to unsigned.
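For reference, the conversion being described boils down to something
like the sketch below (the field names follow the memcg code of this
era and are not quoted verbatim from the patch):

	static unsigned long mem_cgroup_read_stat(struct mem_cgroup *memcg,
						  enum mem_cgroup_stat_index idx)
	{
		long val = 0;
		int cpu;

		/* Racy sum: concurrent updates can make val go negative. */
		for_each_possible_cpu(cpu)
			val += per_cpu(memcg->stat->count[idx], cpu);

		/* Clamp transient negatives before returning unsigned. */
		if (val < 0)
			val = 0;
		return val;
	}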
> >
> > Someone please remind me why this code doesn't use the existing
> > percpu_counter library which solved this problem years ago.
> >
> >> for_each_possible_cpu(cpu)
> >
> > and which doesn't iterate across offlined CPUs.
>
> I found [1] and [2] discussing memory layout differences between:
> a) existing memcg hand rolled per cpu arrays of counters
> vs
> b) array of generic percpu_counter
> The current approach was claimed to have lower memory overhead and
> better cache behavior.
>
> I assume it's pretty straightforward to create generic
> percpu_counter_array routines which memcg could use. Possibly something
> like this could be made general enough to also satisfy vmstat, but
> that's less clear.
>
> [1] http://www.spinics.net/lists/cgroups/msg06216.html
> [2] https://lkml.org/lkml/2014/9/11/1057
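Such percpu_counter_array routines don't exist in the tree today; a
minimal sketch of what the init path might look like, built on the
existing percpu_counter primitives (the _array name is made up):

	static int percpu_counter_array_init(struct percpu_counter *array,
					     int nr, gfp_t gfp)
	{
		int i, err;

		for (i = 0; i < nr; i++) {
			err = percpu_counter_init(&array[i], 0, gfp);
			if (err)
				goto undo;
		}
		return 0;
	undo:
		/* Unwind the counters initialized so far. */
		while (--i >= 0)
			percpu_counter_destroy(&array[i]);
		return err;
	}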
That all sounds rather bogus to me. __percpu_counter_add() doesn't
modify struct percpu_counter at all except when the cpu-local counter
overflows the configured batch size. And for the memcg application I
suspect we can set the batch size to INT_MAX...
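If memcg went that way, the shape of it might be roughly as follows
(memcg_stat_add/memcg_stat_read are made-up wrapper names;
__percpu_counter_add() and percpu_counter_sum_positive() are the
existing primitives):

	static void memcg_stat_add(struct percpu_counter *fbc, s64 delta)
	{
		/*
		 * With a batch of INT_MAX the cpu-local counter essentially
		 * never spills into fbc->count, so updates stay cpu-local
		 * and don't dirty the shared cacheline.
		 */
		__percpu_counter_add(fbc, delta, INT_MAX);
	}

	static unsigned long memcg_stat_read(struct percpu_counter *fbc)
	{
		/* Folds per-cpu deltas; never returns a negative value. */
		return percpu_counter_sum_positive(fbc);
	}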