From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shakeel Butt <shakeelb@google.com>
Date: Thu, 12 Oct 2023 14:16:25 -0700
Subject: Re: [PATCH v2 3/5] mm: memcg: make stats flushing threshold per-memcg
To: Yosry Ahmed <yosryahmed@google.com>
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
 Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
 kernel-team@cloudflare.com, Wei Xu, Greg Thelen, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20231010032117.1577496-1-yosryahmed@google.com>
 <20231010032117.1577496-4-yosryahmed@google.com>
 <20231011003646.dt5rlqmnq6ybrlnd@google.com>

On Thu, Oct 12, 2023 at 2:06 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> [..]
> > > > >
> > > > > Using next-20231009 and a similar 44 core machine with hyperthreading
> > > > > disabled, I ran 22 instances of netperf in parallel and got the
> > > > > following numbers from averaging 20 runs:
> > > > >
> > > > > Base: 33076.5 mbps
> > > > > Patched: 31410.1 mbps
> > > > >
> > > > > That's about 5% diff. I guess the number of iterations helps reduce
> > > > > the noise? I am not sure.
> > > > >
> > > > > Please also keep in mind that in this case all netperf instances are
> > > > > in the same cgroup and at a 4-level depth. I imagine in a practical
> > > > > setup processes would be a little more spread out, which means less
> > > > > common ancestors, so less contended atomic operations.
> > > >
> > > > (Resending the reply as I messed up the last one, was not in plain text)
> > > >
> > > > I was curious, so I ran the same testing in a cgroup 2 levels deep
> > > > (i.e. /sys/fs/cgroup/a/b), which is a much more common setup in my
> > > > experience. Here are the numbers:
> > > >
> > > > Base: 40198.0 mbps
> > > > Patched: 38629.7 mbps
> > > >
> > > > The regression is reduced to ~3.9%.
> > > >
> > > > What's more interesting is that going from a level 2 cgroup to a level
> > > > 4 cgroup is already a big hit with or without this patch:
> > > >
> > > > Base: 40198.0 -> 33076.5 mbps (~17.7% regression)
> > > > Patched: 38629.7 -> 31410.1 mbps (~18.7% regression)
> > > >
> > > > So going from level 2 to 4 is already a significant regression for
> > > > other reasons (e.g. hierarchical charging). This patch only makes it
> > > > marginally worse. This puts the numbers more into perspective imo than
> > > > comparing values at level 4. What do you think?
> > >
> > > This is weird as we are running the experiments on the same machine. I
> > > will rerun with 2 levels as well. Also can you rerun the page fault
> > > benchmark as well which was showing 9% regression in your original
> > > commit message?
> >
> > Thanks. I will re-run the page_fault tests, but keep in mind that the
> > page fault benchmarks in will-it-scale are highly variable. We run
> > them between kernel versions internally, and I think we ignore any
> > changes below 10% as the benchmark is naturally noisy.
> >
> > I have a couple of runs for page_fault3_scalability showing a 2-3%
> > improvement with this patch :)
>
> I ran the page_fault tests for 10 runs on a machine with 256 cpus in a
> level 2 cgroup, here are the results (the results in the original
> commit message are for 384 cpus in a level 4 cgroup):
>
> LABEL                         | MEAN        | MEDIAN      | STDDEV      |
> ------------------------------+-------------+-------------+-------------
> page_fault1_per_process_ops   |             |             |             |
> (A) base                      | 270249.164  | 265437.000  | 13451.836   |
> (B) patched                   | 261368.709  | 255725.000  | 13394.767   |
>                               | -3.29%      | -3.66%      |             |
> page_fault1_per_thread_ops    |             |             |             |
> (A) base                      | 242111.345  | 239737.000  | 10026.031   |
> (B) patched                   | 237057.109  | 235305.000  | 9769.687    |
>                               | -2.09%      | -1.85%      |             |
> page_fault1_scalability       |             |             |             |
> (A) base                      | 0.034387    | 0.035168    | 0.0018283   |
> (B) patched                   | 0.033988    | 0.034573    | 0.0018056   |
>                               | -1.16%      | -1.69%      |             |
> page_fault2_per_process_ops   |             |             |             |
> (A) base                      | 203561.836  | 203301.000  | 2550.764    |
> (B) patched                   | 197195.945  | 197746.000  | 2264.263    |
>                               | -3.13%      | -2.73%      |             |
> page_fault2_per_thread_ops    |             |             |             |
> (A) base                      | 171046.473  | 170776.000  | 1509.679    |
> (B) patched                   | 166626.327  | 166406.000  | 768.753     |
>                               | -2.58%      | -2.56%      |             |
> page_fault2_scalability       |             |             |             |
> (A) base                      | 0.054026    | 0.053821    | 0.00062121  |
> (B) patched                   | 0.053329    | 0.05306     | 0.00048394  |
>                               | -1.29%      | -1.41%      |             |
> page_fault3_per_process_ops   |             |             |             |
> (A) base                      | 1295807.782 | 1297550.000 | 5907.585    |
> (B) patched                   | 1275579.873 | 1273359.000 | 8759.160    |
>                               | -1.56%      | -1.86%      |             |
> page_fault3_per_thread_ops    |             |             |             |
> (A) base                      | 391234.164  | 390860.000  | 1760.720    |
> (B) patched                   | 377231.273  | 376369.000  | 1874.971    |
>                               | -3.58%      | -3.71%      |             |
> page_fault3_scalability       |             |             |             |
> (A) base                      | 0.60369     | 0.60072     | 0.0083029   |
> (B) patched                   | 0.61733     | 0.61544     | 0.009855    |
>                               | +2.26%      | +2.45%      |             |
>
> The numbers are much better. I can modify the commit log to include
> the testing in the replies instead of what's currently there if this
> helps (22 netperf instances on 44 cpus and will-it-scale page_fault on
> 256 cpus -- all in a level 2 cgroup).

Yes, this looks better. I think we should also ask the Intel perf and
Phoronix folks to run their benchmarks as well (but no need to block
on them).
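
For anyone who wants to try reproducing the netperf comparison at
different cgroup depths, below is a rough sketch of the setup as I
understand it from the thread. The cgroup names, the loopback target,
and netperf's default TCP_STREAM parameters are my assumptions; the
exact commands and run length behind the numbers above are not spelled
out.

#!/usr/bin/env python3
# Rough sketch of the nested-cgroup netperf run described above.
# Assumptions (not from the thread verbatim): cgroup v2 mounted at
# /sys/fs/cgroup, netserver already running on localhost, root
# privileges, and netperf's default TCP_STREAM test.
import os
import subprocess

CG_ROOT = "/sys/fs/cgroup"
DEPTH = 2          # 2 => /sys/fs/cgroup/a/b, 4 => /sys/fs/cgroup/a/b/c/d
NR_INSTANCES = 22  # one netperf per core on the 44-core test machine

# Build the nested cgroup chain, e.g. /sys/fs/cgroup/a/b.
path = CG_ROOT
for name in "abcd"[:DEPTH]:
    path = os.path.join(path, name)
    os.makedirs(path, exist_ok=True)

# Move this process into the leaf cgroup; the netperf children
# spawned below inherit the membership.
with open(os.path.join(path, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))

# Launch all instances in parallel, then average their throughput.
procs = [
    subprocess.Popen(
        ["netperf", "-H", "127.0.0.1", "-t", "TCP_STREAM"],
        stdout=subprocess.PIPE, text=True,
    )
    for _ in range(NR_INSTANCES)
]
rates = []
for p in procs:
    out, _ = p.communicate()
    # Throughput (10^6 bits/sec) is the last field of the result line.
    rates.append(float(out.strip().splitlines()[-1].split()[-1]))
print(f"average throughput: {sum(rates) / len(rates):.1f} mbps")

Running it once with DEPTH = 2 and once with DEPTH = 4, on base and
patched kernels, should separate the hierarchy-depth cost from the
cost of this patch.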