From: Ivan Babrou
Date: Thu, 13 Jul 2023 16:25:13 -0700
Subject: Re: Expensive memory.stat + cpu.stat reads
To: Waiman Long
Cc: Shakeel Butt, cgroups@vger.kernel.org, Linux MM, kernel-team,
 Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
 Andrew Morton, linux-kernel

On Mon, Jul 10, 2023 at 5:44 PM Waiman Long wrote:
>
> On 7/10/23 19:21, Ivan Babrou wrote:
> > On Wed, Jul 5, 2023 at 11:20 PM Shakeel Butt wrote:
> >> On Fri, Jun 30, 2023 at 04:22:28PM -0700, Ivan Babrou wrote:
> >>> Hello,
> >>>
> >>> We're seeing CPU load issues with cgroup stats retrieval. I made a
> >>> public gist with all the details, including the repro code (which
> >>> unfortunately requires heavily loaded hardware) and some flamegraphs:
> >>>
> >>> * https://gist.github.com/bobrik/5ba58fb75a48620a1965026ad30a0a13
> >>>
> >>> I'll repeat the gist of that gist here. Our repro has the following
> >>> output after a warm-up run:
> >>>
> >>> completed: 5.17s [manual / mem-stat + cpu-stat]
> >>> completed: 5.59s [manual / cpu-stat + mem-stat]
> >>> completed: 0.52s [manual / mem-stat]
> >>> completed: 0.04s [manual / cpu-stat]
> >>>
> >>> The first two lines do effectively the following:
> >>>
> >>> for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
> >>>
> >>> The latter two are the same thing, but via two loops:
> >>>
> >>> for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
> >>> for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat > /dev/null; done
> >>>
> >>> As you might've noticed from the output, splitting the loop into two
> >>> makes the code run 10x faster. This isn't great, because most
> >>> monitoring software likes to get all stats for one service before
> >>> reading the stats for the next one, which maps to the slow and
> >>> expensive way of doing this.
> >>>
> >>> We're running Linux v6.1 (the output is from v6.1.25) with no patches
> >>> that touch the cgroup or mm subsystems, so you can assume a vanilla
> >>> kernel.
> >>>
> >>> From the flamegraph it just looks like rstat flushing takes longer. I
> >>> used the following flags on an AMD EPYC 7642 system (our usual pick,
> >>> cpu-clock, was blaming spinlock irqrestore, which was questionable):
> >>>
> >>> perf record -e cycles -g --call-graph fp -F 999 -- /tmp/repro
> >>>
> >>> Naturally, there are two questions that arise:
> >>>
> >>> * Is this expected (I guess not, but good to be sure)?
> >>> * What can we do to make this better?
> >>>
> >>> I am happy to try out patches or to do some tracing to help
> >>> understand this better.
> >>
> >> Hi Ivan,
> >>
> >> Thanks a lot, as always, for reporting this. This is not expected and
> >> should be fixed. Is the issue easy to repro, or does it require a
> >> specific workload or high load/traffic? Can you repro it with the
> >> latest Linus tree? Also, do you see any difference in the root's
> >> cgroup.stat between a host where this issue happens and one in a
> >> good state?
> > I'm afraid there's no easy way to reproduce. We see it from time to
> > time in different locations. The one that I was looking at for the
> > initial email does not reproduce it anymore:
>
> My understanding of mem-stat and cpu-stat is that they are independent
> of each other. In theory, reading one shouldn't affect the performance
> of reading the other. Since you are reading mem-stat and cpu-stat
> repetitively in a loop, it is likely that all the data are in the cache
> most of the time, resulting in very fast processing. If it happens that
> the mem-stat and cpu-stat data sit at memory locations such that
> reading one flushes the other out of the cache, forcing it to be
> re-read from memory, you could see a significant performance
> regression.
>
> It is one of the possible causes, but I may be wrong.

Do you think it's somewhat similar to how iterating over a matrix by
rows is faster than by columns, due to sequential vs strided memory
reads?

* https://stackoverflow.com/q/9936132
* https://en.wikipedia.org/wiki/Row-_and_column-major_order
* https://en.wikipedia.org/wiki/Loop_interchange
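To make the analogy concrete, here's a toy userspace sketch (purely
illustrative, nothing to do with the actual rstat data layout; the
matrix size is an arbitrary pick, just large enough to spill out of
the caches). The column pass should come out several times slower
than the row pass:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096 /* 4096x4096 doubles = 128 MiB, well past LLC size */

static double since(struct timespec *a, struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

int main(void)
{
	double (*m)[N] = calloc(N, sizeof(*m));
	struct timespec t0, t1, t2;
	double sum = 0.0;

	if (!m)
		return 1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t i = 0; i < N; i++)     /* rows: sequential reads */
		for (size_t j = 0; j < N; j++)
			sum += m[i][j];
	clock_gettime(CLOCK_MONOTONIC, &t1);
	for (size_t j = 0; j < N; j++)     /* columns: 32 KiB stride */
		for (size_t i = 0; i < N; i++)
			sum += m[i][j];
	clock_gettime(CLOCK_MONOTONIC, &t2);

	printf("rows: %.2fs, cols: %.2fs (sum=%g)\n",
	       since(&t0, &t1), since(&t1, &t2), sum);
	free(m);
	return 0;
}

(Built with e.g. "gcc -O1 rowcol.c && ./a.out" -- the file name is just
a placeholder.)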
I've had a similar suspicion, and it would be good to confirm whether
it's that or something else. I can probably collect perf counters for
different runs, but I'm not sure which ones I'll need.

In a similar vein, if we could come up with a tracepoint that would
tell us the amount of work done (or any other relevant metric that
would help) during rstat flushing, I can certainly collect that
information as well for every reading combination.
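For the counters, I'd probably start with something like the following
(the exact event list is a guess on my part, and the LLC / dTLB events
may be spelled differently or unavailable on EPYC):

perf stat -e cycles,instructions,cache-references,cache-misses \
    -e LLC-loads,LLC-load-misses,dTLB-load-misses \
    -- /tmp/repro

If the cache theory holds, the combined mem-stat + cpu-stat run should
show a noticeably higher cache-miss rate than the two split runs at a
similar instruction count.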
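Short of a new tracepoint, maybe a kprobe-based bpftrace sketch along
these lines could approximate the "work done" metric as time spent per
flush (this swaps kprobes in for a proper tracepoint and assumes
cgroup_rstat_flush is kprobe-able on our kernels, which I haven't
verified):

bpftrace -e '
kprobe:cgroup_rstat_flush { @start[tid] = nsecs; }
kretprobe:cgroup_rstat_flush /@start[tid]/ {
  @flush_ns = hist(nsecs - @start[tid]);
  delete(@start[tid]);
}'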