* PSI vs. CPU overhead for client computing
@ 2019-04-23 18:57 Luigi Semenzato
  2019-04-23 22:04 ` Suren Baghdasaryan
  0 siblings, 1 reply; 6+ messages in thread
From: Luigi Semenzato @ 2019-04-23 18:57 UTC (permalink / raw)
  To: Linux Memory Management List

Several of us are working on improving system behavior under memory
pressure on Chrome OS.  We use zram, which swaps to a
statically-configured compressed RAM disk.  One challenge that we have
is that the footprint of our workloads is highly variable.  With zram,
we have to set the size of the swap partition at boot time.  When the
(logical) swap partition is full, we're left with some amount of RAM
usable by file and anonymous pages (we can ignore the rest).  We don't
get to control this amount dynamically.  Thus if the workload fits
nicely in it, everything works well.  If it doesn't, then the rate of
anonymous page faults can be quite high, causing large CPU overhead
for compression/decompression (as well as for other parts of the MM).
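
For concreteness, here is a rough way to watch that overhead build up
(just a sketch, not Chrome OS code): with zram as the only swap
device, the pswpin/pswpout counters in /proc/vmstat approximate how
many pages per second go through the compressor and decompressor.

#!/usr/bin/env python3
# Sketch: sample the anonymous swap-in/out counters in /proc/vmstat.
# With zram as the only swap device these count pages going through
# the compressor (pswpout) and decompressor (pswpin).
import time

def read_vmstat(fields=("pswpin", "pswpout")):
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            name, value = line.split()
            if name in fields:
                counters[name] = int(value)
    return counters

def main(interval=1.0):
    prev = read_vmstat()
    while True:
        time.sleep(interval)
        cur = read_vmstat()
        print("pages/s swapped in: %7.0f  swapped out: %7.0f"
              % ((cur["pswpin"] - prev["pswpin"]) / interval,
                 (cur["pswpout"] - prev["pswpout"]) / interval))
        prev = cur

if __name__ == "__main__":
    main()

This tracks the overhead on a given device, but the same fault rate
costs very different amounts of CPU on different devices, which is
why we want a more direct, device-independent measure.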

In Chrome OS and Android, we have the luxury of being able to reduce
pressure by terminating processes (tab discard in Chrome OS, app kill
in Android, which incidentally also runs in parallel with Chrome OS on
some Chromebooks).  To help decide when to reduce pressure, we would
like a reliable, device-independent measure of MM CPU overhead.  I
have looked into PSI and have a few questions.  I am also looking for
alternative suggestions.

PSI measures the time during which some, or all, tasks are stalled
waiting for memory.  In some experiments, this doesn't seem to
correlate too well with CPU overhead (which instead correlates fairly
well with page fault rates).  Could this be because it includes
pressure from file page faults?  Is there some way of interpreting PSI
numbers so that the pressure from file pages is ignored?
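
To make the comparison concrete, this is roughly what I mean (just a
sketch): sample the avg10 "some"/"full" percentages from
/proc/pressure/memory next to the fault counters from /proc/vmstat.
Note that pgmajfault lumps file-backed and anonymous major faults
together, which is exactly the separation I am asking about.

#!/usr/bin/env python3
# Sketch: print PSI memory pressure next to page fault rates so the
# two can be eyeballed for correlation.
import time

def read_psi(path="/proc/pressure/memory"):
    # Lines look like:
    #   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
    psi = {}
    with open(path) as f:
        for line in f:
            kind, rest = line.split(None, 1)
            fields = dict(kv.split("=") for kv in rest.split())
            psi[kind] = float(fields["avg10"])
    return psi

def read_faults():
    with open("/proc/vmstat") as f:
        counters = dict(line.split() for line in f)
    return int(counters["pgfault"]), int(counters["pgmajfault"])

def main(interval=10.0):
    prev_minor, prev_major = read_faults()
    while True:
        time.sleep(interval)
        minor, major = read_faults()
        psi = read_psi()
        fmt = "some=%5.2f%% full=%5.2f%% pgfault/s=%8.0f majfault/s=%6.0f"
        print(fmt % (psi.get("some", 0.0), psi.get("full", 0.0),
                     (minor - prev_minor) / interval,
                     (major - prev_major) / interval))
        prev_minor, prev_major = minor, major

if __name__ == "__main__":
    main()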

What is the purpose of "some" and "full" in the PSI measurements?  The
Chrome browser is a multi-process app and there is a lot of IPC.  When
process A is blocked on memory allocation, it cannot respond to IPC
from process B, so effectively both processes are blocked on
allocation, but PSI doesn't see that.  Also, there are situations in
which some "uninteresting" processes keep running.  So it's not clear
we can rely on "full".  Or maybe I am misunderstanding?  "Some" may be
a better measure, but again it doesn't capture this indirect blockage.
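
One thing that might help with the multi-process aspect (a sketch,
assuming cgroup v2 with PSI enabled and assuming we can keep all the
browser processes in one cgroup; the path below is made up) is the
per-cgroup memory.pressure file, which at least scopes "some" to the
processes we care about.  It still would not capture the indirect
blockage through IPC, though.

#!/usr/bin/env python3
# Sketch: read memory pressure for one cgroup (cgroup v2 + PSI).  The
# file format is the same as /proc/pressure/memory; the cgroup path
# below is hypothetical.
CGROUP = "/sys/fs/cgroup/chrome"   # hypothetical cgroup for the browser

def read_cgroup_psi(cgroup=CGROUP):
    psi = {}
    with open(cgroup + "/memory.pressure") as f:
        for line in f:
            kind, rest = line.split(None, 1)
            fields = dict(kv.split("=") for kv in rest.split())
            psi[kind] = {k: float(v) for k, v in fields.items()}
    return psi

if __name__ == "__main__":
    for kind, vals in read_cgroup_psi().items():
        print(kind, vals)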

The kernel contains various cpustat measurements, including some
slightly esoteric ones such as CPUTIME_GUEST and CPUTIME_GUEST_NICE.
Would adding a CPUTIME_MEM be out of the question?
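
To make that concrete, here is how I imagine it would surface to
userspace (a sketch; the "mem" column is hypothetical).  The existing
cpustat buckets show up as columns of the "cpu" lines in /proc/stat,
and CPUTIME_MEM would simply be one more column, accumulating the CPU
time charged to fault handling, reclaim, compression, and so on.

#!/usr/bin/env python3
# Sketch: parse the aggregate "cpu" line of /proc/stat.  The existing
# columns are, in order:
#   user nice system idle iowait irq softirq steal guest guest_nice
# A CPUTIME_MEM bucket would appear as one more column (hypothetical).
FIELDS = ["user", "nice", "system", "idle", "iowait",
          "irq", "softirq", "steal", "guest", "guest_nice"]
# FIELDS.append("mem")   # hypothetical CPUTIME_MEM column

def read_cpustat():
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu "):    # aggregate line only
                values = [int(v) for v in line.split()[1:]]
                return dict(zip(FIELDS, values))

if __name__ == "__main__":
    print(read_cpustat())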

Thanks!

