Date: Mon, 27 Apr 2020 16:35:58 -0700
From: Andrew Morton
To: David Rientjes
Cc: Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [patch] mm, oom: stop reclaiming if GFP_ATOMIC will start failing soon
Message-Id: <20200427163558.5b08487d63da3cc7a89bf50b@linux-foundation.org>
In-Reply-To:
References: <20200425172706.26b5011293e8dc77b1dccaf3@linux-foundation.org> <20200427133051.b71f961c1bc53a8e72c4f003@linux-foundation.org>

On Mon, 27 Apr 2020 16:03:56 -0700 (PDT) David Rientjes wrote:

> On Mon, 27 Apr 2020, Andrew Morton wrote:
>
> > > No - that would actually make the problem worse.
> > >
> > > Today, per-zone min watermarks dictate when user allocations will loop
> > > or oom kill.  should_reclaim_retry() currently loops if reclaim has
> > > succeeded in the past few tries and we should be able to allocate if
> > > we are able to reclaim the amount of memory that we think we can.
> > >
> > > The issue is that this supposes that looping to reclaim more will
> > > result in more free memory.  That doesn't always happen if there are
> > > concurrent memory allocators.
> > >
> > > GFP_ATOMIC allocators can access below these per-zone watermarks.  So
> > > the issue is that per-zone free pages stay between the ALLOC_HIGH
> > > watermark (the watermark that GFP_ATOMIC allocators can allocate to)
> > > and the min watermark.  We never reclaim enough memory to get back to
> > > the min watermark because reclaim cannot keep up with the amount of
> > > GFP_ATOMIC allocations.
> >
> > But there should be an upper bound upon the total amount of in-flight
> > GFP_ATOMIC memory at any point in time?  These aren't like pagecache,
> > which will take more if we give it more.  Setting the various
> > thresholds appropriately should ensure that blockable allocations don't
> > get their memory stolen by GFP_ATOMIC allocations?
>
> Certainly, if that upper bound is defined and enforced somewhere, we
> would not have run into this issue causing all userspace to become
> completely unresponsive.  Do you have links to patches that proposed
> enforcing this upper bound?

There is no such enforcement and there are no such patches, as I'm sure
you know.

No consumer of GFP_ATOMIC memory should consume an unbounded amount of
it.  Subsystems such as networking will consume a certain amount and
will then start recycling it.  The total amount in flight will vary
over the longer term as workloads change.  A dynamically self-tuning
threshold system would need to adapt rapidly enough to sudden load
shifts, which might require unreasonable amounts of headroom.

Michal asked relevant questions regarding watermark tuning - an answer
to those would be interesting.  To amplify that: is it possible to
manually tune this system so that the problem no longer exhibits?  If
so, then why can't that tuning be performed automatically?
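
[Editor's illustration] To make the watermark interaction described in the
quoted text concrete, here is a minimal, self-contained sketch of the
arithmetic involved - not the kernel's actual code, just an illustration
loosely modeled on the ALLOC_HIGH handling in __zone_watermark_ok(); all
names and numbers below are made-up assumptions.  When free pages sit
between the GFP_ATOMIC cutoff and the min watermark, a blockable allocation
keeps failing the watermark check and retrying reclaim, while GFP_ATOMIC
callers still pass and consume whatever reclaim frees:

	/*
	 * Minimal sketch (not kernel code) of the watermark arithmetic
	 * discussed above.  ALLOC_HIGH stands in for what GFP_ATOMIC
	 * callers get: permission to dip roughly halfway below the min
	 * watermark.  All names and numbers are illustrative.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define ALLOC_WMARK_MIN	0x1
	#define ALLOC_HIGH	0x2	/* __GFP_HIGH / GFP_ATOMIC callers */

	struct fake_zone {
		long free_pages;
		long watermark_min;
	};

	static bool sketch_watermark_ok(const struct fake_zone *z,
					unsigned int alloc_flags)
	{
		long min = z->watermark_min;

		/* GFP_ATOMIC callers may go roughly halfway below min. */
		if (alloc_flags & ALLOC_HIGH)
			min -= min / 2;

		return z->free_pages > min;
	}

	int main(void)
	{
		/* Free pages stuck between min/2 and min, as described. */
		struct fake_zone z = { .free_pages = 700, .watermark_min = 1000 };

		/* Blockable allocation fails the check, so it reclaims and retries... */
		printf("blockable alloc passes watermark?  %d\n",
		       sketch_watermark_ok(&z, ALLOC_WMARK_MIN));

		/* ...while GFP_ATOMIC still passes and eats what reclaim just freed. */
		printf("GFP_ATOMIC alloc passes watermark? %d\n",
		       sketch_watermark_ok(&z, ALLOC_WMARK_MIN | ALLOC_HIGH));

		return 0;
	}

In that regime, should_reclaim_retry() keeps seeing apparent reclaim
progress, yet the zone never climbs back above the min watermark, which is
the stall the thread is discussing.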