Date: Tue, 11 Feb 2020 14:31:01 -0500
From: Johannes Weiner <hannes@cmpxchg.org>
To: Rik van Riel
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Dave Chinner, Yafang Shao, Michal Hocko,
 Roman Gushchin, Andrew Morton, Linus Torvalds, Al Viro, kernel-team@fb.com
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU
Message-ID: <20200211193101.GA178975@cmpxchg.org>
References: <20200211175507.178100-1-hannes@cmpxchg.org>
 <29b6e848ff4ad69b55201751c9880921266ec7f4.camel@surriel.com>
In-Reply-To: <29b6e848ff4ad69b55201751c9880921266ec7f4.camel@surriel.com>

On Tue, Feb 11, 2020 at 02:05:38PM -0500, Rik van Riel wrote:
> On Tue, 2020-02-11 at 12:55 -0500, Johannes Weiner wrote:
> > The VFS inode shrinker is currently allowed to reclaim inodes with
> > populated page cache. As a result it can drop gigabytes of hot and
> > active page cache on the floor without consulting the VM (recorded
> > as "inodesteal" events in /proc/vmstat).
> >
> > This causes real problems in practice. Consider for example how the
> > VM would cache a source tree, such as the Linux git tree. As large
> > parts of the checked out files and the object database are accessed
> > repeatedly, the page cache holding this data gets moved to the
> > active list, where it's fully (and indefinitely) insulated from
> > one-off cache moving through the inactive list.
> >
> > This behavior of invalidating page cache from the inode shrinker
> > goes back to even before the git import of the kernel tree. It may
> > have been less noticeable when the VM itself didn't have real
> > workingset protection, and floods of one-off cache would push out
> > any active cache over time anyway. But the VM has come a long way
> > since then and the inode shrinker is now actively subverting its
> > caching strategy.
>
> Two things come to mind when looking at this:
> - highmem
> - NUMA
>
> IIRC one of the reasons reclaim is done in this way is
> because a page cache page in one area of memory (highmem,
> or a NUMA node) can end up pinning inode slab memory in
> another memory area (normal zone, other NUMA node).

That's a good point, highmem does ring a bell now that you mention it.

If we still care, I think this could be solved by doing something
similar to what we do with buffer_heads_over_limit: allow a lowmem
allocation to reclaim page cache inside the highmem zone if the bhs
(or inodes in this case) have accumulated excessively.

AFAICS, we haven't done anything similar for NUMA, so it might not be
much of a problem there. I could imagine this is in part because NUMA
nodes tend to be more balanced in size, and the ratio between cache
memory and inode/bh memory means that these objects won't turn into a
significant externality. Whereas with extreme highmem:lowmem ratios,
they can.