References: <20231010032117.1577496-1-yosryahmed@google.com> <20231010032117.1577496-4-yosryahmed@google.com> <20231011003646.dt5rlqmnq6ybrlnd@google.com>
From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 12 Oct 2023 14:05:30 -0700
Subject: Re: [PATCH v2 3/5] mm: memcg: make stats flushing threshold per-memcg
To: Shakeel Butt
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 kernel-team@cloudflare.com, Wei Xu, Greg Thelen, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
[..]
> > > > Using next-20231009 and a similar 44 core machine with hyperthreading
> > > > disabled, I ran 22 instances of netperf in parallel and got the
> > > > following numbers from averaging 20 runs:
> > > >
> > > > Base: 33076.5 mbps
> > > > Patched: 31410.1 mbps
> > > >
> > > > That's about 5% diff. I guess the number of iterations helps reduce
> > > > the noise? I am not sure.
> > > >
> > > > Please also keep in mind that in this case all netperf instances are
> > > > in the same cgroup and at a 4-level depth.
> > > > I imagine in a practical
> > > > setup processes would be a little more spread out, which means fewer
> > > > common ancestors, so fewer contended atomic operations.
> > > >
> > > (Resending the reply as I messed up the last one, it was not in plain text)
> > >
> > > I was curious, so I ran the same testing in a cgroup 2 levels deep
> > > (i.e. /sys/fs/cgroup/a/b), which is a much more common setup in my
> > > experience. Here are the numbers:
> > >
> > > Base: 40198.0 mbps
> > > Patched: 38629.7 mbps
> > >
> > > The regression is reduced to ~3.9%.
> > >
> > > What's more interesting is that going from a level 2 cgroup to a level
> > > 4 cgroup is already a big hit with or without this patch:
> > >
> > > Base: 40198.0 -> 33076.5 mbps (~17.7% regression)
> > > Patched: 38629.7 -> 31410.1 mbps (~18.7% regression)
> > >
> > > So going from level 2 to 4 is already a significant regression for
> > > other reasons (e.g. hierarchical charging). This patch only makes it
> > > marginally worse. This puts the numbers more into perspective imo than
> > > comparing values at level 4. What do you think?
> >
> > This is weird as we are running the experiments on the same machine. I
> > will rerun with 2 levels as well. Also can you rerun the page fault
> > benchmark as well, which was showing a 9% regression in your original
> > commit message?
>
> Thanks. I will re-run the page_fault tests, but keep in mind that the
> page fault benchmarks in will-it-scale are highly variable. We run
> them between kernel versions internally, and I think we ignore any
> changes below 10% as the benchmark is naturally noisy.
>
> I have a couple of runs for page_fault3_scalability showing a 2-3%
> improvement with this patch :)

I ran the page_fault tests for 10 runs on a machine with 256 cpus in a
level 2 cgroup. Here are the results (the results in the original commit
message are for 384 cpus in a level 4 cgroup):

LABEL                         |     MEAN    |   MEDIAN    |   STDDEV    |
------------------------------+-------------+-------------+-------------+
page_fault1_per_process_ops   |             |             |             |
  (A) base                    |  270249.164 |  265437.000 |   13451.836 |
  (B) patched                 |  261368.709 |  255725.000 |   13394.767 |
                              |      -3.29% |      -3.66% |             |
page_fault1_per_thread_ops    |             |             |             |
  (A) base                    |  242111.345 |  239737.000 |   10026.031 |
  (B) patched                 |  237057.109 |  235305.000 |    9769.687 |
                              |      -2.09% |      -1.85% |             |
page_fault1_scalability       |             |             |             |
  (A) base                    |    0.034387 |    0.035168 |   0.0018283 |
  (B) patched                 |    0.033988 |    0.034573 |   0.0018056 |
                              |      -1.16% |      -1.69% |             |
page_fault2_per_process_ops   |             |             |             |
  (A) base                    |  203561.836 |  203301.000 |    2550.764 |
  (B) patched                 |  197195.945 |  197746.000 |    2264.263 |
                              |      -3.13% |      -2.73% |             |
page_fault2_per_thread_ops    |             |             |             |
  (A) base                    |  171046.473 |  170776.000 |    1509.679 |
  (B) patched                 |  166626.327 |  166406.000 |     768.753 |
                              |      -2.58% |      -2.56% |             |
page_fault2_scalability       |             |             |             |
  (A) base                    |    0.054026 |    0.053821 |  0.00062121 |
  (B) patched                 |    0.053329 |    0.05306   | 0.00048394 |
                              |      -1.29% |      -1.41% |             |
page_fault3_per_process_ops   |             |             |             |
  (A) base                    | 1295807.782 | 1297550.000 |    5907.585 |
  (B) patched                 | 1275579.873 | 1273359.000 |    8759.160 |
                              |      -1.56% |      -1.86% |             |
page_fault3_per_thread_ops    |             |             |             |
  (A) base                    |  391234.164 |  390860.000 |    1760.720 |
  (B) patched                 |  377231.273 |  376369.000 |    1874.971 |
                              |      -3.58% |      -3.71% |             |
page_fault3_scalability       |             |             |             |
  (A) base                    |    0.60369  |    0.60072  |   0.0083029 |
  (B) patched                 |    0.61733  |    0.61544  |   0.009855  |
                              |      +2.26% |      +2.45% |             |

These numbers are much better than the level 4 results in the original
commit message.
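As a quick sanity check, the percentage columns in the table can be
recomputed directly from the raw means (the `delta_pct` helper below is
just illustrative, not part of will-it-scale or any kernel tooling):

```python
# Recompute the MEAN delta column from the raw base/patched means above.
# The two-decimal rounding matches the convention used in the table.
def delta_pct(base, patched):
    """Percent change from base to patched, rounded to two decimals."""
    return round((patched - base) / base * 100, 2)

means = {
    "page_fault1_per_process_ops": (270249.164, 261368.709),
    "page_fault2_per_process_ops": (203561.836, 197195.945),
    "page_fault3_per_thread_ops":  (391234.164, 377231.273),
    "page_fault3_scalability":     (0.60369, 0.61733),
}

for name, (base, patched) in means.items():
    print(f"{name}: {delta_pct(base, patched):+.2f}%")
```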
If it helps, I can modify the commit log to include the testing from
these replies instead of what's currently there (22 netperf instances
on 44 cpus and will-it-scale page_fault on 256 cpus -- all in a level 2
cgroup).
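For reference, the netperf regression figures quoted earlier in the
thread can be cross-checked the same way (throughput numbers in mbps are
copied from the replies above; `pct_change` is just an illustrative
helper):

```python
# Cross-check the netperf regression percentages quoted in the thread.
def pct_change(base, patched):
    """Percent change from base to patched throughput."""
    return (patched - base) / base * 100

# Level 2 cgroup, base vs patched: ~ -3.9%
lvl2 = pct_change(40198.0, 38629.7)
# Level 4 cgroup, base vs patched: ~ -5.0%
lvl4 = pct_change(33076.5, 31410.1)
# Cost of depth alone on the base kernel (level 2 -> level 4): ~ -17.7%
depth_base = pct_change(40198.0, 33076.5)
# Cost of depth alone on the patched kernel: ~ -18.7%
depth_patched = pct_change(38629.7, 31410.1)

print(round(lvl2, 1), round(lvl4, 1),
      round(depth_base, 1), round(depth_patched, 1))
```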