From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail-pa0-f42.google.com (mail-pa0-f42.google.com [209.85.220.42])
	by kanga.kvack.org (Postfix) with ESMTP id 0C9526B0253
	for ; Wed, 23 Sep 2015 03:21:41 -0400 (EDT)
Received: by pablk4 with SMTP id lk4so1718627pab.3
	for ; Wed, 23 Sep 2015 00:21:40 -0700 (PDT)
Received: from mail-pa0-x235.google.com (mail-pa0-x235.google.com. [2607:f8b0:400e:c03::235])
	by mx.google.com with ESMTPS id z8si8510986par.158.2015.09.23.00.21.40
	for (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 23 Sep 2015 00:21:40 -0700 (PDT)
Received: by pacex6 with SMTP id ex6so32962298pac.0
	for ; Wed, 23 Sep 2015 00:21:40 -0700 (PDT)
References: <20150922210346.749204fb.akpm@linux-foundation.org>
From: Greg Thelen 
Subject: Re: [PATCH] memcg: make mem_cgroup_read_stat() unsigned
In-reply-to: <20150922210346.749204fb.akpm@linux-foundation.org>
Date: Wed, 23 Sep 2015 00:21:33 -0700
Message-ID: 
MIME-Version: 1.0
Content-Type: text/plain
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Andrew Morton 
Cc: Johannes Weiner , Michal Hocko , Cgroups ,
	"linux-mm@kvack.org" , "linux-kernel@vger.kernel.org" 

Andrew Morton wrote:

> On Tue, 22 Sep 2015 17:42:13 -0700 Greg Thelen wrote:
>
>> Andrew Morton wrote:
>>
>> > On Tue, 22 Sep 2015 15:16:32 -0700 Greg Thelen wrote:
>> >
>> >> mem_cgroup_read_stat() returns a page count by summing per cpu page
>> >> counters.  The summing is racy wrt. updates, so a transient negative
>> >> sum is possible.  Callers don't want negative values:
>> >> - mem_cgroup_wb_stats() doesn't want negative nr_dirty or nr_writeback.
>> >> - oom reports and memory.stat shouldn't show confusing negative usage.
>> >> - tree_usage() already avoids negatives.
>> >>
>> >> Avoid returning negative page counts from mem_cgroup_read_stat() and
>> >> convert it to unsigned.
>> >
>> > Someone please remind me why this code doesn't use the existing
>> > percpu_counter library which solved this problem years ago.
>> >
>> >>	for_each_possible_cpu(cpu)
>> >
>> > and which doesn't iterate across offlined CPUs.
>>
>> I found [1] and [2] discussing memory layout differences between:
>> a) the existing memcg hand-rolled per cpu arrays of counters
>> vs
>> b) an array of generic percpu_counter
>> The current approach was claimed to have lower memory overhead and
>> better cache behavior.
>>
>> I assume it's pretty straightforward to create generic
>> percpu_counter_array routines which memcg could use.  Possibly
>> something like this could be made general enough to satisfy vmstat
>> as well, but that's less clear.
>>
>> [1] http://www.spinics.net/lists/cgroups/msg06216.html
>> [2] https://lkml.org/lkml/2014/9/11/1057
>
> That all sounds rather bogus to me.  __percpu_counter_add() doesn't
> modify struct percpu_counter at all except for when the cpu-local
> counter overflows the configured batch size.  And for the memcg
> application I suspect we can set the batch size to INT_MAX...

Nod.  The memory usage will be a bit larger, but the code reuse is
attractive.  I dusted off Vladimir's https://lkml.org/lkml/2014/9/11/710.
Next step is to benchmark it before posting.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org.  For more info on Linux MM, see:
http://www.linux-mm.org/ .
Don't email: email@kvack.org