From: Ivan Babrou <ivan@cloudflare.com>
Date: Mon, 10 Jul 2023 16:21:51 -0700
Subject: Re: Expensive memory.stat + cpu.stat reads
To: Shakeel Butt
Cc: cgroups@vger.kernel.org, Linux MM, kernel-team, Johannes Weiner,
 Michal Hocko, Roman Gushchin, Muchun Song, Andrew Morton, linux-kernel
In-Reply-To: <20230706062045.xwmwns7cm4fxd7iu@google.com>

On Wed, Jul 5, 2023 at 11:20 PM Shakeel Butt wrote:
>
> On Fri, Jun 30, 2023 at 04:22:28PM -0700, Ivan Babrou wrote:
> > Hello,
> >
> > We're seeing CPU load issues with cgroup stats retrieval.
> > I made a public gist with all the details, including the repro code
> > (which unfortunately requires heavily loaded hardware) and some
> > flamegraphs:
> >
> > * https://gist.github.com/bobrik/5ba58fb75a48620a1965026ad30a0a13
> >
> > I'll repeat the gist of that gist here. Our repro has the following
> > output after a warm-up run:
> >
> > completed: 5.17s [manual / mem-stat + cpu-stat]
> > completed: 5.59s [manual / cpu-stat + mem-stat]
> > completed: 0.52s [manual / mem-stat]
> > completed: 0.04s [manual / cpu-stat]
> >
> > The first two lines do effectively the following:
> >
> > for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
> >
> > The latter two are the same thing, but via two separate loops:
> >
> > for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
> > for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat > /dev/null; done
> >
> > As you might've noticed from the output, splitting the loop into two
> > makes the code run 10x faster. This isn't great, because most
> > monitoring software likes to get all stats for one service before
> > reading the stats for the next one, which maps to the slow and
> > expensive way of doing this.
> >
> > We're running Linux v6.1 (the output is from v6.1.25) with no patches
> > that touch the cgroup or mm subsystems, so you can assume a vanilla
> > kernel.
> >
> > From the flamegraph it just looks like rstat flushing takes longer. I
> > used the following flags on an AMD EPYC 7642 system (our usual pick,
> > cpu-clock, was blaming spinlock irqrestore, which was questionable):
> >
> > perf record -e cycles -g --call-graph fp -F 999 -- /tmp/repro
> >
> > Naturally, there are two questions that arise:
> >
> > * Is this expected (I guess not, but good to be sure)?
> > * What can we do to make this better?
> >
> > I am happy to try out patches or to do some tracing to help understand
> > this better.
>
> Hi Ivan,
>
> Thanks a lot, as always, for reporting this. This is not expected and
> should be fixed. Is the issue easy to repro, or is some specific
> workload or high load/traffic required? Can you repro this with the
> latest linus tree? Also, do you see any difference in the root's
> cgroup.stat between hosts where this issue happens and the good state?

I'm afraid there's no easy way to reproduce it. We see it from time to
time in different locations. The one that I was looking at for the
initial email does not reproduce it anymore:

completed: 0.75s [manual / cpu-stat + mem-stat]
completed: 0.72s [manual / mem-stat]
completed: 0.05s [manual / cpu-stat]

I took the top 20 servers by metrics scrape duration for cadvisor, and
it does happen on the slowest ones:

completed: 22.32s [manual / cpu-stat + mem-stat]
completed: 0.34s [manual / mem-stat]
completed: 1.68s [manual / cpu-stat]
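For reference, the "manual" numbers above come from timing roughly the
following shell. This is a simplified sketch of what the repro does,
not the exact code from the first gist:

# combined: memory.stat and cpu.stat read back to back (the slow case)
time (for _ in $(seq 1 1000); do
    cat /sys/fs/cgroup/system.slice/memory.stat \
        /sys/fs/cgroup/system.slice/cpu.stat > /dev/null
done)

# split: each file in its own loop, which is roughly 10x faster here
time (for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat > /dev/null; done)
time (for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done)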
Is cadvisor or a similar metrics agent used by Google? Any chance you
could see if your own fleet exhibits this behavior?

Given that this behavior requires a full production setup with customer
traffic, plus the long qualification times we have for kernels, I'm not
sure if I can try the linus tree here.

I uploaded the contents of /sys/fs/cgroup/memory.stat here:

* https://gist.github.com/bobrik/9255b5e8ed0a83afb73ebf06b79f07c4

The fast one is v6.1.37 and the slow one is v6.1.25. I'm not sure if
the kernel version makes a difference or if it's a matter of uptime /
traffic profile. The data is from two different locations. The fast
location has gone through an expansion, which meant a full reboot with
a kernel upgrade, so maybe that affected things:

* https://i.imgur.com/x8uyMaF.png

Let me try to reboot the slow location and see if there's any lasting
improvement.
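On the root cgroup.stat question: grabbing it from a fast and a slow
host for comparison would be something like the following (hostnames
are placeholders):

# nr_descendants / nr_dying_descendants in the root cgroup.stat may hint
# at how much work rstat flushing has to do on each host
ssh fast-host cat /sys/fs/cgroup/cgroup.stat > fast-cgroup.stat
ssh slow-host cat /sys/fs/cgroup/cgroup.stat > slow-cgroup.stat
diff fast-cgroup.stat slow-cgroup.stat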