From: Ivan Babrou <ivan@cloudflare.com>
To: Waiman Long <longman@redhat.com>
Cc: Shakeel Butt <shakeelb@google.com>,
	cgroups@vger.kernel.org,  Linux MM <linux-mm@kvack.org>,
	kernel-team <kernel-team@cloudflare.com>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	 Roman Gushchin <roman.gushchin@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	 Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: Expensive memory.stat + cpu.stat reads
Date: Fri, 14 Jul 2023 17:30:16 -0700
Message-ID: <CABWYdi3YNwtPDwwJWmCO-ER50iP7CfbXkCep5TKb-9QzY-a40A@mail.gmail.com>
In-Reply-To: <CABWYdi2iWYT0sHpK74W6=Oz6HA_3bAqKQd4h+amK0n3T3nge6g@mail.gmail.com>

On Thu, Jul 13, 2023 at 4:25 PM Ivan Babrou <ivan@cloudflare.com> wrote:
> > My understanding of mem-stat and cpu-stat is that they are independent
> > of each other. In theory, reading one shouldn't affect the performance
> > of reading the others. Since you are doing mem-stat and cpu-stat reading
> > repetitively in a loop, it is likely that all the data are in the cache
> > most of the time resulting in very fast processing time. If it happens
> > that the specific memory location of mem-stat and cpu-stat data are such
> > that reading one will cause the other data to be flushed out of the
> > cache and have to be re-read from memory again, you could see
> > significant performance regression.
> >
> > It is one of the possible causes, but I may be wrong.
>
> Do you think it's somewhat similar to how iterating a matrix in rows
> is faster than in columns due to sequential vs random memory reads?
>
> * https://stackoverflow.com/q/9936132
> * https://en.wikipedia.org/wiki/Row-_and_column-major_order
> * https://en.wikipedia.org/wiki/Loop_interchange
>
> I've had a similar suspicion and it would be good to confirm whether
> it's that or something else. I can probably collect perf counters for
> different runs, but I'm not sure which ones I'll need.
>
> In a similar vein, if we could come up with a tracepoint that would
> tell us the amount of work done (or any other relevant metric that
> would help) during rstat flushing, I can certainly collect that
> information as well for every reading combination.

Since cgroup_rstat_flush_locked appears in flamegraphs for both fast
(discrete) and slow (combined) cases, I grabbed some stats for it:

* Slow:

completed: 19.43s [manual / mem-stat + cpu-stat]

$ sudo /usr/share/bcc/tools/funclatency -uT cgroup_rstat_flush_locked
Tracing 1 functions for "cgroup_rstat_flush_locked"... Hit Ctrl-C to end.
^C
00:12:55
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 1        |                                        |
       128 -> 255        : 191      |************                            |
       256 -> 511        : 590      |****************************************|
       512 -> 1023       : 186      |************                            |
      1024 -> 2047       : 2        |                                        |
      2048 -> 4095       : 0        |                                        |
      4096 -> 8191       : 0        |                                        |
      8192 -> 16383      : 504      |**********************************      |
     16384 -> 32767      : 514      |**********************************      |
     32768 -> 65535      : 3        |                                        |
     65536 -> 131071     : 1        |                                        |

avg = 8852 usecs, total: 17633268 usecs, count: 1992

* Fast:

completed:  0.95s [manual / mem-stat]
completed:  0.05s [manual / cpu-stat]

$ sudo /usr/share/bcc/tools/funclatency -uT cgroup_rstat_flush_locked
Tracing 1 functions for "cgroup_rstat_flush_locked"... Hit Ctrl-C to end.
^C
00:13:27
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 499      |****************************************|
         8 -> 15         : 253      |********************                    |
        16 -> 31         : 191      |***************                         |
        32 -> 63         : 41       |***                                     |
        64 -> 127        : 12       |                                        |
       128 -> 255        : 2        |                                        |
       256 -> 511        : 2        |                                        |
       512 -> 1023       : 0        |                                        |
      1024 -> 2047       : 0        |                                        |
      2048 -> 4095       : 0        |                                        |
      4096 -> 8191       : 0        |                                        |
      8192 -> 16383      : 34       |**                                      |
     16384 -> 32767      : 21       |*                                       |

avg = 857 usecs, total: 904762 usecs, count: 1055

The slow case makes nearly twice as many calls into
cgroup_rstat_flush_locked (1992 vs 1055), and the calls themselves are
much slower. The slow case also splits into two bands, with the
8ms..32ms range holding about half of the calls.

For mem_cgroup_css_rstat_flush:

* Slow:

completed: 32.77s [manual / mem-stat + cpu-stat]

$ sudo /usr/share/bcc/tools/funclatency -uT mem_cgroup_css_rstat_flush
Tracing 1 functions for "mem_cgroup_css_rstat_flush"... Hit Ctrl-C to end.
^C
00:21:25
     usecs               : count     distribution
         0 -> 1          : 93078    |*                                       |
         2 -> 3          : 3397714  |****************************************|
         4 -> 7          : 1009440  |***********                             |
         8 -> 15         : 168013   |*                                       |
        16 -> 31         : 93       |                                        |

avg = 3 usecs, total: 17189289 usecs, count: 4668338

* Fast:

completed:  0.16s [manual / mem-stat]
completed:  0.04s [manual / cpu-stat]

$ sudo /usr/share/bcc/tools/funclatency -uT mem_cgroup_css_rstat_flush
Tracing 1 functions for "mem_cgroup_css_rstat_flush"... Hit Ctrl-C to end.
^C
00:21:57
     usecs               : count     distribution
         0 -> 1          : 1441     |***                                     |
         2 -> 3          : 18780    |****************************************|
         4 -> 7          : 4826     |**********                              |
         8 -> 15         : 732      |*                                       |
        16 -> 31         : 1        |                                        |

avg = 3 usecs, total: 89174 usecs, count: 25780

There's a 181x difference in the number of calls into
mem_cgroup_css_rstat_flush (4,668,338 vs 25,780).

Does this provide a clue? Perhaps cgroup_rstat_cpu_pop_updated is
yielding a ton more iterations for some reason here?

* https://elixir.bootlin.com/linux/v6.1/source/kernel/cgroup/rstat.c#L196

It's inlined, but I can place probes inside the loop:

      7         for_each_possible_cpu(cpu) {
      8                 raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock,
                                                               cpu);
     10                 struct cgroup *pos = NULL;
                        unsigned long flags;

                        /*
                         * The _irqsave() is needed because cgroup_rstat_lock is
                         * spinlock_t which is a sleeping lock on PREEMPT_RT. Acquiring
                         * this lock with the _irq() suffix only disables interrupts on
                         * a non-PREEMPT_RT kernel. The raw_spinlock_t below disables
                         * interrupts on both configurations. The _irqsave() ensures
                         * that interrupts are always disabled and later restored.
                         */
                        raw_spin_lock_irqsave(cpu_lock, flags);
                        while ((pos = cgroup_rstat_cpu_pop_updated(pos, cgrp, cpu))) {
                                struct cgroup_subsys_state *css;

                                cgroup_base_stat_flush(pos, cpu);
     26                         bpf_rstat_flush(pos, cgroup_parent(pos), cpu);

     28                         rcu_read_lock();
     29                         list_for_each_entry_rcu(css, &pos->rstat_css_list,
                                                        rstat_css_node)
     31                                 css->ss->css_rstat_flush(css, cpu);
     32                         rcu_read_unlock();
                        }
     34                 raw_spin_unlock_irqrestore(cpu_lock, flags);

I added probes on both line 26 and line 31 to catch the middle and inner loops.
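
For anyone who wants to reproduce this, the probes can be recreated
with something like the following (a sketch, not my exact invocations;
the line offsets are build-specific, and the explicit event names just
match the counters below):

$ sudo perf probe -L cgroup_rstat_flush_locked
$ sudo perf probe -a 'cgroup_rstat_flush_locked_L26=cgroup_rstat_flush_locked:26'
$ sudo perf probe -a 'cgroup_rstat_flush_locked_L31=cgroup_rstat_flush_locked:31'
$ sudo perf stat -e probe:cgroup_rstat_flush_locked_L26 \
    -e probe:cgroup_rstat_flush_locked_L31 /tmp/derp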

* Slow:

completed: 32.97s [manual / mem-stat + cpu-stat]

 Performance counter stats for '/tmp/derp':

         4,702,570      probe:cgroup_rstat_flush_locked_L26
         9,301,436      probe:cgroup_rstat_flush_locked_L31

* Fast:

completed:  0.17s [manual / mem-stat]
completed:  0.34s [manual / cpu-stat]

 Performance counter stats for '/tmp/derp':

            31,769      probe:cgroup_rstat_flush_locked_L26
            62,849      probe:cgroup_rstat_flush_locked_L31

It definitely looks like cgroup_rstat_cpu_pop_updated is yielding a
lot more positions.
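
If it would help to see how the iteration count is distributed per
flush, I can also run a bpftrace sketch along these lines (untested;
it counts mem_cgroup_css_rstat_flush calls as a proxy for inner-loop
iterations):

$ sudo bpftrace -e '
// mark that this thread is inside a flush and reset its counter
kprobe:cgroup_rstat_flush_locked { @in[tid] = 1; @n[tid] = 0; }
// count memcg css flushes done while inside the flush
kprobe:mem_cgroup_css_rstat_flush /@in[tid]/ { @n[tid]++; }
// on return, record the per-flush count in a histogram
kretprobe:cgroup_rstat_flush_locked /@in[tid]/ {
        @per_flush = hist(@n[tid]);
        delete(@in[tid]);
        delete(@n[tid]);
}'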

I'm going to sign off for the week, but let me know if I should place
any more probes to nail this down.

