On Wed, Feb 21, 2018 at 6:06 PM, Minchan Kim wrote:
> On Wed, Feb 21, 2018 at 04:23:43PM -0800, Daniel Colascione wrote:
> > Thanks for taking a look.
> >
> > On Wed, Feb 21, 2018 at 4:16 PM, Minchan Kim wrote:
> >
> > > Hi Daniel,
> > >
> > > On Wed, Feb 21, 2018 at 11:05:04AM -0800, Daniel Colascione wrote:
> > > > On Mon, Feb 5, 2018 at 2:03 PM, Daniel Colascione wrote:
> > > >
> > > > > When SPLIT_RSS_COUNTING is in use (which it is on SMP systems,
> > > > > generally speaking), we buffer certain changes to mm-wide counters
> > > > > through counters local to the current struct task, flushing them to
> > > > > the mm after seeing 64 page faults, as well as on task exit and
> > > > > exec. This scheme can leave a large amount of memory unaccounted-for
> > > > > in process memory counters, especially for processes with many
> > > > > threads (each of which gets 64 "free" faults), and it produces an
> > > > > inconsistency with the same memory counters scanned VMA-by-VMA using
> > > > > smaps. This inconsistency can persist for an arbitrarily long time,
> > > > > since there is no way to force a task to flush its counters to its mm.
> > >
> > > Nice catch. Inconsistency is bad, but we have usually accepted it for
> > > performance. So, FWIW, it would be much better to describe what problem
> > > you are suffering from, so that a maintainer can take it.
> > >
> >
> > The problem is that the per-process counters in /proc/pid/status lag
> > behind the actual memory allocations, leading to an inaccurate view of
> > the overall memory consumed by each process.
>
> Yup, true. The key question was why you need such an accurate count.

For more context: on Android, we've historically scanned each process's
address space using /proc/pid/smaps (and /proc/pid/smaps_rollup more
recently) to extract memory management statistics. We're looking at
replacing this mechanism with the new /proc/pid/status per-memory-type
(e.g., anonymous, file-backed) counters so that we can be even more
efficient, but we'd like the counts we collect to be accurate.

> Don't get me wrong. I'm not saying we don't need it.
> I was just curious why it becomes important now, because we have lived
> with such an inaccurate count for a decade.
>
> > > > > This patch flushes counters on context switch. This way, we bound the
> > > > > amount of unaccounted memory without forcing tasks to flush to the
> > > > > mm-wide counters on each minor page fault. The flush operation should
> > > > > be cheap: we only have a few counters, adjacent in struct task, and we
> > > > > don't atomically write to the mm counters unless we've changed
> > > > > something since the last flush.
> > > > >
> > > > > Signed-off-by: Daniel Colascione
> > > > > ---
> > > > >  kernel/sched/core.c | 3 +++
> > > > >  1 file changed, 3 insertions(+)
> > > > >
> > > > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > > > index a7bf32aabfda..7f197a7698ee 100644
> > > > > --- a/kernel/sched/core.c
> > > > > +++ b/kernel/sched/core.c
> > > > > @@ -3429,6 +3429,9 @@ asmlinkage __visible void __sched schedule(void)
> > > > >         struct task_struct *tsk = current;
> > > > >
> > > > >         sched_submit_work(tsk);
> > > > > +       if (tsk->mm)
> > > > > +               sync_mm_rss(tsk->mm);
> > > > > +
> > > > >         do {
> > > > >                 preempt_disable();
> > > > >                 __schedule(false);
> > > >
> > > > Ping? Is this approach just a bad idea? We could instead just manually
> > > > sync all mm-attached tasks at counter-retrieval time.
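To make "sync at counter-retrieval time" concrete, the walk I have in mind
would look roughly like the sketch below. This is purely illustrative, not a
patch: the function name is made up, it assumes SPLIT_RSS_COUNTING, and
synchronizing against the owning task's own updates to its rss_stat is
hand-waved, which is exactly the sticking point discussed further down.

#include <linux/mm.h>
#include <linux/sched/signal.h>

/* Hypothetical: fold every task's buffered RSS counts into @mm. */
static void sync_all_rss_for_mm(struct mm_struct *mm)
{
	struct task_struct *p, *t;
	int i;

	rcu_read_lock();
	for_each_process_thread(p, t) {
		/*
		 * AFAIK there is no mm -> tasks back-pointer, so we have to
		 * walk every thread in the system and pick out the ones
		 * attached to the mm we care about.
		 */
		if (t->mm != mm)
			continue;
		for (i = 0; i < NR_MM_COUNTERS; i++) {
			int val = t->rss_stat.count[i];

			/*
			 * Racy as written: in the current code only 't'
			 * itself touches t->rss_stat, so this drain would
			 * need some synchronization with the fault fast path.
			 */
			if (val) {
				add_mm_counter(mm, i, val);
				t->rss_stat.count[i] = 0;
			}
		}
	}
	rcu_read_unlock();
}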
> > >
> > > IMHO, yes, it should be done when the user wants to see the counts,
> > > which would be a really cold path, while this schedule function is hot.
> > >
> >
> > The problem with doing it that way is that we need to look at each task
> > attached to a particular mm. AFAIK (and please tell me if I'm wrong), the
> > only way to do that is to iterate over all processes, and for each process
> > attached to the mm we want, iterate over all its tasks (since each one has
> > to have the same mm, I think). Does that sound right?
>
> Hmm, it seems you're right. I spent some time thinking it over but couldn't
> come up with a better idea. One option was to make RSS_EVENT_THRESH per-mm
> and scale it dynamically with the mm_users count at fork time. However,
> that penalizes processes with many threads for no good reason.
>
> So, I support your idea at the moment. But let's hear others' opinions.
>

FWIW, I just sent a patch that does the same thing a different way. It has
the virtue of not increasing the context-switch path length, but it adds a
spinlock (almost never contended) around the per-task mm counter struct.
I'd be happy with either this version or my previous version.
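P.S. For anyone following along without a tree handy: the per-task buffering
under discussion is the SPLIT_RSS_COUNTING fast path in mm/memory.c. From
memory, and slightly simplified, it looks roughly like this, so treat it as
a sketch of the existing code rather than an exact quote:

#if defined(SPLIT_RSS_COUNTING)

/* Flush the current task's buffered counts into the mm-wide counters. */
void sync_mm_rss(struct mm_struct *mm)
{
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		if (current->rss_stat.count[i]) {
			add_mm_counter(mm, i, current->rss_stat.count[i]);
			current->rss_stat.count[i] = 0;
		}
	}
	current->rss_stat.events = 0;
}

/* Page-fault fast path: buffer the change in the current task when we can. */
static void add_mm_counter_fast(struct mm_struct *mm, int member, int val)
{
	struct task_struct *task = current;

	if (likely(task->mm == mm))
		task->rss_stat.count[member] += val;
	else
		add_mm_counter(mm, member, val);
}

/* Sync the counters roughly once per 64 page faults. */
#define TASK_RSS_EVENTS_THRESH	(64)
static void check_sync_rss_stat(struct task_struct *task)
{
	if (unlikely(task != current))
		return;
	if (unlikely(task->rss_stat.events++ > TASK_RSS_EVENTS_THRESH))
		sync_mm_rss(task->mm);
}

#endif /* SPLIT_RSS_COUNTING */

Since each thread only syncs after hitting its own event threshold, a process
with N threads can carry up to roughly N * 64 faults' worth of memory that
/proc/pid/status never sees, which is the inconsistency the patch above is
trying to bound.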