From: Suren Baghdasaryan
Date: Tue, 12 Nov 2019 12:35:46 -0800
Subject: Re: [PATCH 2/3] mm: vmscan: detect file thrashing at the reclaim root
In-Reply-To: <20191112185932.GC179587@cmpxchg.org>
To: Johannes Weiner
Cc: Andrew Morton, Andrey Ryabinin, Shakeel Butt, Rik van Riel, Michal Hocko,
 linux-mm, cgroups mailinglist, LKML, kernel-team@fb.com

On Tue, Nov 12, 2019 at 10:59 AM Johannes Weiner wrote:
>
> On Tue, Nov 12, 2019 at 10:45:44AM -0800, Suren Baghdasaryan wrote:
> > On Tue, Nov 12, 2019 at 9:45 AM Johannes Weiner wrote:
> > >
> > > On Sun, Nov 10, 2019 at 06:01:18PM -0800, Suren Baghdasaryan wrote:
> > > > On Thu, Nov 7, 2019 at 12:53 PM Johannes Weiner wrote:
> > > > >
> > > > > We use refault information to determine whether the cache workingset
> > > > > is stable or transitioning, and dynamically adjust the inactive:active
> > > > > file LRU ratio so as to maximize protection from one-off cache during
> > > > > stable periods, and minimize IO during transitions.
> > > > >
> > > > > With cgroups and their nested LRU lists, we currently don't do this
> > > > > correctly. While recursive cgroup reclaim establishes a relative LRU
> > > > > order among the pages of all involved cgroups, refaults only affect
> > > > > the local LRU order in the cgroup in which they are occurring. As a
> > > > > result, cache transitions can take longer in a cgrouped system as the
> > > > > active pages of sibling cgroups aren't challenged when they should be.
> > > > >
> > > > > [ Right now, this is somewhat theoretical, because the siblings, under
> > > > >   continued regular reclaim pressure, should eventually run out of
> > > > >   inactive pages - and since inactive:active *size* balancing is also
> > > > >   done on a cgroup-local level, we will challenge the active pages
> > > > >   eventually in most cases. But the next patch will move that relative
> > > > >   size enforcement to the reclaim root as well, and then this patch
> > > > >   here will be necessary to propagate refault pressure to siblings. ]
> > > > >
> > > > > This patch moves refault detection to the root of reclaim. Instead of
> > > > > remembering the cgroup owner of an evicted page, remember the cgroup
> > > > > that caused the reclaim to happen. When refaults later occur, they'll
> > > > > correctly influence the cross-cgroup LRU order that reclaim follows.
> > > >
> > > > I spent some time thinking about the idea of calculating refault
> > > > distance using target_memcg's inactive_age and then activating the
> > > > refaulted page in (possibly) another memcg, and I am still having
> > > > trouble convincing myself that this should work correctly. However, I
> > > > was also unable to convince myself otherwise... We use refault
> > > > distance to calculate the deficit in inactive LRU space and then
> > > > activate the refaulted page if that distance is less than the
> > > > active+inactive LRU size. But making that decision based on the LRU
> > > > sizes of one memcg and then activating the page in another one seems
> > > > very counterintuitive to me. Maybe that's just me though...
> > >
> > > It's not activating in a random, unrelated memcg - it's the parental
> > > relationship that makes it work.
> > >
> > > If you have a cgroup tree
> > >
> > >       root
> > >        |
> > >        A
> > >       / \
> > >     B1   B2
> > >
> > > and reclaim is driven by a limit in A, we are reclaiming the pages in
> > > B1 and B2 as if they were on a single LRU list A (it's approximated by
> > > the round-robin reclaim and has some caveats, but that's the idea).
> > >
> > > So when a page that belongs to B2 gets evicted, it gets evicted from
> > > virtual LRU list A. When it refaults later, we make the (in)active
> > > size and distance comparisons against virtual LRU list A as well.
> > >
> > > The pages on the physical LRU list B2 are not just ordered relative to
> > > their B2 peers, they are also ordered relative to the pages in B1. And
> > > that of course is necessary if we want fair competition between them
> > > under shared reclaim pressure from A.
> >
> > Thanks for the clarification. The testcase in your description, where
> > group B has a large inactive cache which does not get reclaimed while
> > its sibling group A has to drop its active cache, left me under the
> > impression that sibling cgroups (in your reply above, B1 and B2) can
> > cause memory pressure in each other. Maybe that's not a legit case,
> > and B1 would not cause pressure in B2 without causing pressure in
> > their shared parent A? It now makes more sense to me and I want to
> > confirm that this is the case.
>
> Yes. I'm sorry if this was misleading. They should only cause pressure
> on each other by causing pressure on A; and then reclaim in A treats
> them as one combined pool of pages.

Reviewed-by: Suren Baghdasaryan
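For readers following the thread, the activation rule under discussion can be
sketched as a simplified userspace model. All names here (lruvec_model,
remember_eviction, should_activate) are illustrative, not the kernel's actual
API; the real logic lives in mm/workingset.c and, with this patch, both the
snapshot taken at eviction and the comparison made at refault use the same
reclaim root's counters:

```c
/* Simplified userspace model of the refault-distance test discussed
 * above. Names are illustrative; the kernel's version operates on the
 * reclaim root's lruvec in mm/workingset.c. */
#include <stdbool.h>

struct lruvec_model {
	unsigned long nonresident_age;	/* bumped on each eviction */
	unsigned long nr_inactive;	/* inactive file LRU size */
	unsigned long nr_active;	/* active file LRU size */
};

/* Eviction: snapshot the reclaim root's age into a "shadow entry". */
static unsigned long remember_eviction(struct lruvec_model *root)
{
	return root->nonresident_age++;
}

/*
 * Refault: the distance between eviction and refault, measured in the
 * reclaim root's nonresident age, approximates how much extra inactive
 * space the page would have needed to stay resident. Activate when
 * that deficit fits within the root's active+inactive workingset.
 */
static bool should_activate(const struct lruvec_model *root,
			    unsigned long eviction_age)
{
	unsigned long refault_distance =
		root->nonresident_age - eviction_age;

	return refault_distance <= root->nr_inactive + root->nr_active;
}
```

In the A/B1/B2 example above, a page evicted from B2 under pressure from A
would record A's age and be judged on refault against A's virtual LRU sizes,
not B2's local ones.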