From: Johannes Weiner
To: Andrew Morton
Cc: Roman Gushchin, Michal Hocko, Tejun Heo, Chris Down,
    Michal Koutný, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 1/3] mm: memcontrol: fix memory.low proportional distribution
Date: Thu, 27 Feb 2020 14:56:04 -0500
Message-Id: <20200227195606.46212-2-hannes@cmpxchg.org>
In-Reply-To: <20200227195606.46212-1-hannes@cmpxchg.org>
References: <20200227195606.46212-1-hannes@cmpxchg.org>

When memory.low is overcommitted - i.e. the children claim more
protection than their shared ancestor grants them - the allowance is
distributed in proportion to how much each sibling uses their own
declared protection:

  low_usage = min(memory.low, memory.current)
  elow = parent_elow * (low_usage / siblings_low_usage)

However, siblings_low_usage is not the sum of all low_usages. It sums
up the usages of *only those cgroups that are within their
memory.low*. That means that low_usage can be *bigger* than
siblings_low_usage, and consequently the total protection afforded to
the children can be bigger than what the ancestor grants the subtree.

Consider three groups where two are in excess of their protection:

  A/memory.low = 10G
  A/A1/memory.low = 10G, memory.current = 20G
  A/A2/memory.low = 10G, memory.current = 20G
  A/A3/memory.low = 10G, memory.current = 8G

  siblings_low_usage = 8G (only A3 contributes)

  A1/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(8G) = 12.5G -> 10G
  A2/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(8G) = 12.5G -> 10G
  A3/elow = parent_elow(10G) * low_usage(8G)  / siblings_low_usage(8G) = 10.0G

  (the 12.5G are capped to the explicit memory.low setting of 10G)

With that, the sum of all awarded protection below A is 30G, when A
only grants 10G for the entire subtree.

What does this mean in practice? A1 and A2 would still be in excess
of their 10G allowance and would be reclaimed, whereas A3 would not.
As they eventually drop below their protection setting, they would be
counted in siblings_low_usage again and the error would right itself.

When reclaim was applied in a binary fashion (cgroup is reclaimed
when it's above its protection, otherwise it's skipped) this would
actually work out just fine. However, since 1bc63fb1272b ("mm, memcg:
make scan aggression always exclude protection"), reclaim pressure is
scaled to how much a cgroup is above its protection. As a result this
calculation error unduly skews pressure away from A1 and A2 toward
the rest of the system.
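To make the error concrete, here is a minimal userspace sketch of the
pre-patch rule (plain C, not kernel code; the struct and variable
names are invented for illustration). It reproduces the numbers from
the example above, where only cgroups within their memory.low
contribute to siblings_low_usage:

  #include <stdio.h>

  struct cg { const char *name; double low, current; };

  static double min(double a, double b) { return a < b ? a : b; }

  int main(void)
  {
      /* A/memory.low = 10G, children as in the example above */
      struct cg c[] = { { "A1", 10, 20 }, { "A2", 10, 20 }, { "A3", 10, 8 } };
      double parent_elow = 10, siblings_low_usage = 0, total = 0;
      int i;

      /* pre-patch: cgroups in excess of their memory.low contribute nothing */
      for (i = 0; i < 3; i++)
          if (c[i].current <= c[i].low)
              siblings_low_usage += c[i].current;

      for (i = 0; i < 3; i++) {
          double low_usage = min(c[i].low, c[i].current);
          double elow = parent_elow * low_usage / siblings_low_usage;

          elow = min(elow, c[i].low);  /* capped by the cgroup's own memory.low */
          total += elow;
          printf("%s/elow = %.1fG\n", c[i].name, elow);
      }
      printf("total = %.1fG, but A only grants %.1fG\n", total, parent_elow);
      return 0;
  }

Running this prints 10G of effective protection for each of A1, A2
and A3 - 30G total against A's 10G grant.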
But why did we do it like this in the first place?

The reasoning behind exempting groups in excess from
siblings_low_usage was to go after them first during reclaim in an
overcommitted subtree:

  A/memory.low = 2G, memory.current = 4G
  A/A1/memory.low = 3G, memory.current = 2G
  A/A2/memory.low = 1G, memory.current = 2G

  siblings_low_usage = 2G (only A1 contributes)
  A1/elow = parent_elow(2G) * low_usage(2G) / siblings_low_usage(2G) = 2G
  A2/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(2G) = 1G

While the children combined are overcommitting A and are technically
both at fault, A2 is actively declaring unprotected memory and we
would like to reclaim that first.

However, while this sounds like a noble goal on the face of it, it
doesn't make much difference in actual memory distribution: Because A
is overcommitted, reclaim will not stop once A2 gets pushed back to
within its allowance; we'll have to reclaim A1 either way. The end
result is still that protection is distributed proportionally, with
A1 getting 3/4 (1.5G) and A2 getting 1/4 (0.5G) of A's allowance.

[ If A weren't overcommitted, it wouldn't make a difference since
  each cgroup would just get the protection it declares:

  A/memory.low = 2G, memory.current = 3G
  A/A1/memory.low = 1G, memory.current = 1G
  A/A2/memory.low = 1G, memory.current = 2G

  With the current calculation:

  siblings_low_usage = 1G (only A1 contributes)
  A1/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(1G) = 2G -> 1G
  A2/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(1G) = 2G -> 1G

  Including excess groups in siblings_low_usage:

  siblings_low_usage = 2G
  A1/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(2G) = 1G -> 1G
  A2/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(2G) = 1G -> 1G ]

Simplify the calculation and fix the proportional reclaim bug by
including excess cgroups in siblings_low_usage.

After this patch, the effective memory.low distribution from the
example above would be as follows:

  A/memory.low = 10G
  A/A1/memory.low = 10G, memory.current = 20G
  A/A2/memory.low = 10G, memory.current = 20G
  A/A3/memory.low = 10G, memory.current = 8G

  siblings_low_usage = 28G

  A1/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(28G) = 3.5G
  A2/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(28G) = 3.5G
  A3/elow = parent_elow(10G) * low_usage(8G)  / siblings_low_usage(28G) = 2.8G

Fixes: 1bc63fb1272b ("mm, memcg: make scan aggression always exclude protection")
Fixes: 230671533d64 ("mm: memory.low hierarchical behavior")
Acked-by: Tejun Heo
Acked-by: Roman Gushchin
Acked-by: Chris Down
Acked-by: Michal Hocko
Signed-off-by: Johannes Weiner
---
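For reference, a minimal userspace model of the switch the patch
below makes in propagate_protected_usage() - from "usage if below the
limit, else 0" to min(usage, limit). This is only an illustrative
sketch; the function names here are invented, and only the arithmetic
mirrors the kernel code. It shows how groups in excess now stay in
siblings_low_usage and how that yields the distribution quoted above:

  #include <stdio.h>

  static double min(double a, double b) { return a < b ? a : b; }

  /* what a child used to report into its parent's siblings_low_usage */
  static double contrib_old(double usage, double low)
  {
      return usage <= low ? usage : 0;  /* groups in excess drop out */
  }

  /* what a child reports after this patch */
  static double contrib_new(double usage, double low)
  {
      return min(usage, low);           /* groups in excess stay in */
  }

  int main(void)
  {
      double low[]   = { 10, 10, 10 };  /* A1, A2, A3 memory.low, in G */
      double usage[] = { 20, 20,  8 };  /* A1, A2, A3 memory.current, in G */
      double parent_elow = 10, old_sum = 0, new_sum = 0;
      int i;

      for (i = 0; i < 3; i++) {
          old_sum += contrib_old(usage[i], low[i]);
          new_sum += contrib_new(usage[i], low[i]);
      }
      printf("siblings_low_usage: old = %.0fG, new = %.0fG\n", old_sum, new_sum);

      /* prints ~3.5G, ~3.5G and ~2.8G, adding up to no more than A's 10G */
      for (i = 0; i < 3; i++) {
          double low_usage = min(low[i], usage[i]);

          printf("A%d/elow = %.2fG\n", i + 1,
                 min(low[i], parent_elow * low_usage / new_sum));
      }
      return 0;
  }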
 mm/memcontrol.c   |  4 +---
 mm/page_counter.c | 12 ++----------
 2 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c5b5f74cfd4d..874a0b00f89b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6236,9 +6236,7 @@ struct cgroup_subsys memory_cgrp_subsys = {
  *     elow = min( memory.low, parent->elow * ------------------ ),
  *                                             siblings_low_usage
  *
- *                  | memory.current, if memory.current < memory.low
- *      low_usage = |
- *                  | 0, otherwise.
+ *     low_usage = min(memory.low, memory.current)
  *
  *
  * Such definition of the effective memory.low provides the expected
diff --git a/mm/page_counter.c b/mm/page_counter.c
index de31470655f6..75d53f15f040 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
                 return;
 
         if (c->min || atomic_long_read(&c->min_usage)) {
-                if (usage <= c->min)
-                        protected = usage;
-                else
-                        protected = 0;
-
+                protected = min(usage, c->min);
                 old_protected = atomic_long_xchg(&c->min_usage, protected);
                 delta = protected - old_protected;
                 if (delta)
@@ -35,11 +31,7 @@ static void propagate_protected_usage(struct page_counter *c,
         }
 
         if (c->low || atomic_long_read(&c->low_usage)) {
-                if (usage <= c->low)
-                        protected = usage;
-                else
-                        protected = 0;
-
+                protected = min(usage, c->low);
                 old_protected = atomic_long_xchg(&c->low_usage, protected);
                 delta = protected - old_protected;
                 if (delta)
-- 
2.24.1