Date: Wed, 29 Apr 2020 12:15:10 +0200
From: Michal Hocko
To: Chris Down
Cc: Andrew Morton, Johannes Weiner, Roman Gushchin, Yafang Shao, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm, memcg: Avoid stale protection values when cgroup is above protection
Message-ID: <20200429101510.GA28637@dhcp22.suse.cz>

On Tue 28-04-20 19:26:47, Chris Down wrote:
> From: Yafang Shao
>
> A cgroup can have both memory protection and a memory limit to isolate
> it from its siblings in both directions - for example, to prevent it
> from being shrunk below 2G under high pressure from outside, but also
> from growing beyond 4G under low pressure.
>
> Commit 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim")
> implemented proportional scan pressure so that multiple siblings in
> excess of their protection settings don't get reclaimed equally but
> instead in accordance with their unprotected portion.
>
> During limit reclaim, this proportionality shouldn't apply, of course:
> there is no competition; all pressure is from within the cgroup and
> should be applied as such. Reclaim should operate at full efficiency.
>
> However, mem_cgroup_protected() never expected anybody to look at the
> effective protection values when it indicated that the cgroup is above
> its protection. As a result, a query during limit reclaim may return
> stale protection values that were calculated by a previous reclaim cycle
> in which the cgroup did have siblings.
>
> When this happens, reclaim is unnecessarily hesitant and potentially
> slow to meet the desired limit. In theory this could lead to premature
> OOM kills, although it's not obvious this has occurred in practice.

Thanks, this describes the underlying problem. I would also be explicit
that the issue should be visible only on tail memcgs which have both
max/high and protection configured, and that the effect depends on the
difference between the two (the smaller it is, the larger the effect).

There is no mention of the fix. The patch resets effective values for
the reclaim root, and I've had some concerns about that:
http://lkml.kernel.org/r/20200424162103.GK11591@dhcp22.suse.cz
Johannes has argued that other races are possible and I didn't get to
think about it thoroughly. But this patch is introducing a new
possibility of breaking protection. If we want a quick and simple fix
that would be easier to backport to older kernels, then I would feel
much better if we simply worked around the problem as suggested earlier:
http://lkml.kernel.org/r/20200423061629.24185-1-laoar.shao@gmail.com

We can rework the effective values calculation to be more robust
against races on top of that, because that is likely a more tricky
thing to do.
> Fixes: 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim")
> Signed-off-by: Yafang Shao
> Signed-off-by: Chris Down
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Roman Gushchin
>
> [hannes@cmpxchg.org: rework code comment]
> [hannes@cmpxchg.org: changelog]
> [chris@chrisdown.name: fix store tear]
> [chris@chrisdown.name: retitle]
> ---
>  mm/memcontrol.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 0be00826b832..b0374be44e9e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6392,8 +6392,19 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
>
>  	if (!root)
>  		root = root_mem_cgroup;
> -	if (memcg == root)
> +	if (memcg == root) {
> +		/*
> +		 * The cgroup is the reclaim root in this reclaim
> +		 * cycle, and therefore not protected. But it may have
> +		 * stale effective protection values from previous
> +		 * cycles in which it was not the reclaim root - for
> +		 * example, global reclaim followed by limit reclaim.
> +		 * Reset these values for mem_cgroup_protection().
> +		 */
> +		WRITE_ONCE(memcg->memory.emin, 0);
> +		WRITE_ONCE(memcg->memory.elow, 0);
>  		return MEMCG_PROT_NONE;
> +	}
>
>  	usage = page_counter_read(&memcg->memory);
>  	if (!usage)
> --
> 2.26.2

-- 
Michal Hocko
SUSE Labs