From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 12 Oct 2023 14:19:55 -0700
Subject: Re: [PATCH v2 3/5] mm: memcg: make stats flushing threshold per-memcg
To: Shakeel Butt
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
	kernel-team@cloudflare.com, Wei Xu, Greg Thelen, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20231010032117.1577496-1-yosryahmed@google.com>
	<20231010032117.1577496-4-yosryahmed@google.com>
	<20231011003646.dt5rlqmnq6ybrlnd@google.com>
On Thu, Oct 12, 2023 at 2:16 PM Shakeel Butt wrote:
>
> On Thu, Oct 12, 2023 at 2:06 PM Yosry Ahmed wrote:
> >
> > [..]
> > > > > >
> > > > > > Using next-20231009 and a similar 44 core machine with hyperthreading
> > > > > > disabled, I ran 22 instances of netperf in parallel and got the
> > > > > > following numbers from averaging 20 runs:
> > > > > >
> > > > > > Base: 33076.5 mbps
> > > > > > Patched: 31410.1 mbps
> > > > > >
> > > > > > That's about 5% diff. I guess the number of iterations helps reduce
> > > > > > the noise? I am not sure.
> > > > > >
> > > > > > Please also keep in mind that in this case all netperf instances are
> > > > > > in the same cgroup and at a 4-level depth. I imagine in a practical
> > > > > > setup processes would be a little more spread out, which means less
> > > > > > common ancestors, so less contended atomic operations.
> > > > >
> > > > > (Resending the reply as I messed up the last one, was not in plain text)
> > > > >
> > > > > I was curious, so I ran the same testing in a cgroup 2 levels deep
> > > > > (i.e. /sys/fs/cgroup/a/b), which is a much more common setup in my
> > > > > experience. Here are the numbers:
> > > > >
> > > > > Base: 40198.0 mbps
> > > > > Patched: 38629.7 mbps
> > > > >
> > > > > The regression is reduced to ~3.9%.
> > > > >
> > > > > What's more interesting is that going from a level 2 cgroup to a level
> > > > > 4 cgroup is already a big hit with or without this patch:
> > > > >
> > > > > Base: 40198.0 -> 33076.5 mbps (~17.7% regression)
> > > > > Patched: 38629.7 -> 31410.1 mbps (~18.7% regression)
> > > > >
> > > > > So going from level 2 to 4 is already a significant regression for
> > > > > other reasons (e.g. hierarchical charging). This patch only makes it
> > > > > marginally worse. This puts the numbers more into perspective imo than
> > > > > comparing values at level 4. What do you think?
> > > >
> > > > This is weird as we are running the experiments on the same machine. I
> > > > will rerun with 2 levels as well. Also can you rerun the page fault
> > > > benchmark as well which was showing 9% regression in your original
> > > > commit message?
> > >
> > > Thanks. I will re-run the page_fault tests, but keep in mind that the
> > > page fault benchmarks in will-it-scale are highly variable. We run
> > > them between kernel versions internally, and I think we ignore any
> > > changes below 10% as the benchmark is naturally noisy.
> > >
> > > I have a couple of runs for page_fault3_scalability showing a 2-3%
> > > improvement with this patch :)
> >
> > I ran the page_fault tests for 10 runs on a machine with 256 cpus in a
> > level 2 cgroup, here are the results (the results in the original
> > commit message are for 384 cpus in a level 4 cgroup):
> >
> > LABEL                        |     MEAN    |   MEDIAN    |   STDDEV    |
> > -----------------------------+-------------+-------------+-------------
> > page_fault1_per_process_ops  |             |             |             |
> > (A) base                     |  270249.164 |  265437.000 |   13451.836 |
> > (B) patched                  |  261368.709 |  255725.000 |   13394.767 |
> >                              |      -3.29% |      -3.66% |             |
> > page_fault1_per_thread_ops   |             |             |             |
> > (A) base                     |  242111.345 |  239737.000 |   10026.031 |
> > (B) patched                  |  237057.109 |  235305.000 |    9769.687 |
> >                              |      -2.09% |      -1.85% |             |
> > page_fault1_scalability      |             |             |             |
> > (A) base                     |    0.034387 |    0.035168 |   0.0018283 |
> > (B) patched                  |    0.033988 |    0.034573 |   0.0018056 |
> >                              |      -1.16% |      -1.69% |             |
> > page_fault2_per_process_ops  |             |             |             |
> > (A) base                     |  203561.836 |  203301.000 |    2550.764 |
> > (B) patched                  |  197195.945 |  197746.000 |    2264.263 |
> >                              |      -3.13% |      -2.73% |             |
> > page_fault2_per_thread_ops   |             |             |             |
> > (A) base                     |  171046.473 |  170776.000 |    1509.679 |
> > (B) patched                  |  166626.327 |  166406.000 |     768.753 |
> >                              |      -2.58% |      -2.56% |             |
> > page_fault2_scalability      |             |             |             |
> > (A) base                     |    0.054026 |    0.053821 |  0.00062121 |
> > (B) patched                  |    0.053329 |     0.05306 |  0.00048394 |
> >                              |      -1.29% |      -1.41% |             |
> > page_fault3_per_process_ops  |             |             |             |
> > (A) base                     | 1295807.782 | 1297550.000 |    5907.585 |
> > (B) patched                  | 1275579.873 | 1273359.000 |    8759.160 |
> >                              |      -1.56% |      -1.86% |             |
> > page_fault3_per_thread_ops   |             |             |             |
> > (A) base                     |  391234.164 |  390860.000 |    1760.720 |
> > (B) patched                  |  377231.273 |  376369.000 |    1874.971 |
> >                              |      -3.58% |      -3.71% |             |
> > page_fault3_scalability      |             |             |             |
> > (A) base                     |     0.60369 |     0.60072 |   0.0083029 |
> > (B) patched                  |     0.61733 |     0.61544 |    0.009855 |
> >                              |      +2.26% |      +2.45% |             |
> >
> > The numbers are much better. I can modify the commit log to include
> > the testing in the replies instead of what's currently there if this
> > helps (22 netperf instances on 44 cpus and will-it-scale page_fault on
> > 256 cpus -- all in a level 2 cgroup).
>
> Yes this looks better. I think we should also ask intel perf and
> phoronix folks to run their benchmarks as well (but no need to block
> on them).

Anything I need to do for this to happen? (I thought such testing is
already done on linux-next)

Also, any further comments on the patch (or the series in general)? If
not, I can send a new commit message for this patch in-place.
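P.S. For anyone re-deriving the netperf deltas discussed above, the
percentages follow from a simple relative-change calculation. This is an
illustrative helper I put together (not part of the patch series); the
throughput numbers are the ones quoted in the thread:

```python
# Reproduce the netperf percentage deltas quoted in this thread.
# The mbps figures are copied from the thread; the script is illustrative.

def pct_change(base: float, patched: float) -> float:
    """Signed percentage change of `patched` relative to `base`."""
    return (patched - base) / base * 100.0

# netperf throughput (mbps): 22 instances on 44 cores, averaged over 20 runs
level4 = {"base": 33076.5, "patched": 31410.1}  # cgroup 4 levels deep
level2 = {"base": 40198.0, "patched": 38629.7}  # cgroup 2 levels deep

# Overhead of the patch at each cgroup depth
print(f"level 4: {pct_change(level4['base'], level4['patched']):+.1f}%")  # -5.0%
print(f"level 2: {pct_change(level2['base'], level2['patched']):+.1f}%")  # -3.9%

# Cost of moving the same workload from a level-2 to a level-4 cgroup
print(f"base    2->4: {pct_change(level2['base'], level4['base']):+.1f}%")        # -17.7%
print(f"patched 2->4: {pct_change(level2['patched'], level4['patched']):+.1f}%")  # -18.7%
```

This matches the ~5% / ~3.9% patch overhead and the ~17.7% / ~18.7%
level-2-to-level-4 regressions quoted above.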