Date: Tue, 12 Jan 2021 16:11:27 -0500
From: Johannes Weiner
To: Roman Gushchin
Cc: Andrew Morton, Tejun Heo, Michal Hocko, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: memcontrol: prevent starvation when writing memory.high
References: <20210112163011.127833-1-hannes@cmpxchg.org>
 <20210112170322.GA99586@carbon.dhcp.thefacebook.com>
 <20210112201237.GB99586@carbon.dhcp.thefacebook.com>
In-Reply-To: <20210112201237.GB99586@carbon.dhcp.thefacebook.com>

On Tue, Jan 12, 2021 at 12:12:37PM -0800, Roman Gushchin wrote:
> On Tue, Jan 12, 2021 at 02:45:43PM -0500, Johannes Weiner wrote:
> > On Tue, Jan 12, 2021 at 09:03:22AM -0800, Roman Gushchin wrote:
> > > On Tue, Jan 12, 2021 at 11:30:11AM -0500, Johannes Weiner wrote:
> > > > When a value is written to a cgroup's memory.high control file, the
> > > > write() context first tries to reclaim the cgroup to size before
> > > > putting the limit in place for the workload. Concurrent charges from
> > > > the workload can keep such a write() looping in reclaim indefinitely.
> > > >
> > > > In the past, a write to memory.high would first put the limit in place
> > > > for the workload, then do targeted reclaim until the new limit has
> > > > been met - similar to how we do it for memory.max. This wasn't prone
> > > > to the described starvation issue. However, this sequence could cause
> > > > excessive latencies in the workload, when allocating threads could be
> > > > put into long penalty sleeps on the sudden memory.high overage created
> > > > by the write(), before that had a chance to work it off.
> > > >
> > > > Now that memory_high_write() performs reclaim before enforcing the new
> > > > limit, reflect that the cgroup may well fail to converge due to
> > > > concurrent workload activity. Bail out of the loop after a few tries.
> > > >
> > > > Fixes: 536d3bf261a2 ("mm: memcontrol: avoid workload stalls when lowering memory.high")
> > > > Cc: # 5.8+
> > > > Reported-by: Tejun Heo
> > > > Signed-off-by: Johannes Weiner
> > > > ---
> > > >  mm/memcontrol.c | 7 +++----
> > > >  1 file changed, 3 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index 605f671203ef..63a8d47c1cd3 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -6275,7 +6275,6 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
> > > >
> > > >  	for (;;) {
> > > >  		unsigned long nr_pages = page_counter_read(&memcg->memory);
> > > > -		unsigned long reclaimed;
> > > >
> > > >  		if (nr_pages <= high)
> > > >  			break;
> > > > @@ -6289,10 +6288,10 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
> > > >  			continue;
> > > >  		}
> > > >
> > > > -		reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
> > > > -							 GFP_KERNEL, true);
> > > > +		try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
> > > > +					     GFP_KERNEL, true);
> > > >
> > > > -		if (!reclaimed && !nr_retries--)
> > > > +		if (!nr_retries--)
> > >
> > > Shouldn't it be (!reclaimed || !nr_retries) instead?
> > >
> > > If reclaimed == 0, it probably doesn't make much sense to retry.
> >
> > We usually allow nr_retries worth of no-progress reclaim cycles to
> > make up for intermittent reclaim failures.
> >
> > The difference to OOMs/memory.max is that we don't want to loop
> > indefinitely on forward progress, but we should allow the usual number
> > of no-progress loops.
>
> Re memory.max: trying really hard makes sense because we are OOMing otherwise.
> With memory.high such an idea is questionable: if we're not able to reclaim
> a single page on the first attempt, it's unlikely that we can reclaim many
> by repeating 16 times.
>
> My concern here is that we can see CPU regressions in some cases when there is
> no reclaimable memory. Do you think we can win something by trying harder?
> If so, it's worth mentioning in the commit log. Because it's really a separate
> change from what's described in the log; to some extent it's a move in the
> opposite direction.

Hm, I'm confused what change you are referring to.

Current upstream allows:

a. unlimited progress loops
b. 16 no-progress loops

My patch is fixing the issue resulting from the unlimited progress
loops in a). This is described in the changelog.

You seem to be advocating for an unrelated change to the no-progress
loops condition in b). Am I missing something?