From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 22 Sep 2021 06:46:21 +1000
From: Dave Chinner <david@fromorbit.com>
To: Mel Gorman
Cc: Linux-MM, NeilBrown, Theodore Ts'o, Andreas Dilger,
	Darrick J. Wong, Matthew Wilcox, Michal Hocko, Rik van Riel,
	Vlastimil Babka, Johannes Weiner, Jonathan Corbet,
	Linux-fsdevel, LKML
Subject: Re: [RFC PATCH 0/5] Remove dependency on congestion_wait in mm/
Message-ID: <20210921204621.GY2361455@dread.disaster.area>
References: <20210920085436.20939-1-mgorman@techsingularity.net>
In-Reply-To: <20210920085436.20939-1-mgorman@techsingularity.net>

On Mon, Sep 20, 2021 at 09:54:31AM +0100, Mel Gorman wrote:
> Cc list similar to "congestion_wait() and GFP_NOFAIL" as they're loosely
> related.
>
> This is a prototype series that removes all calls to congestion_wait
> in mm/ and deletes wait_iff_congested. It's not a clever
> implementation, but congestion_wait has been broken for a long time
> (https://lore.kernel.org/linux-mm/45d8b7a6-8548-65f5-cccf-9f451d4ae3d4@kernel.dk/).
> Even if it worked, it was never a great idea. While excessive
> dirty/writeback pages at the tail of the LRU is one reason reclaim may
> be slow, there is also the problem of reclaim failing for other reasons
> (elevated references, too many pages isolated, excessive LRU
> contention, etc).
>
> This series replaces the reclaim conditions with event-driven ones:
>
> o If there are too many dirty/writeback pages, sleep until a timeout
>   or enough pages get cleaned
> o If too many pages are isolated, sleep until enough isolated pages
>   are either reclaimed or put back on the LRU
> o If no progress is being made, let direct reclaim tasks sleep until
>   another task makes progress
>
> This has only been lightly tested, and the testing was useless as the
> relevant code was not executed. The workload configurations I had that
> used to trigger these corner cases no longer work (yey?) and I'll need
> to implement a new synthetic workload. If someone is aware of a
> realistic workload that forces reclaim activity to the point where
> reclaim stalls, then kindly share the details.

Got a git tree pointer I can pull into a test kernel, so I can see what
impact it has on behaviour before I try to make sense of the code?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com