In-Reply-To: <20200104033558.GD23195@dread.disaster.area>
From: Yafang Shao
Date: Sat, 4 Jan 2020 15:26:13 +0800
Subject: Re: [PATCH v2 4/5] mm: make memcg visible to lru walker isolation function
To: Dave Chinner
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton, Al Viro,
 Linux MM, linux-fsdevel@vger.kernel.org, Dave Chinner

On Sat, Jan 4, 2020 at 11:36 AM Dave Chinner wrote:
>
> On Tue, Dec 24, 2019 at 02:53:25AM -0500, Yafang Shao wrote:
> > The lru walker isolation function may use this memcg to do something, e.g.
> > the inode isolation function will use the memcg to do inode protection in
> > a followup patch. So make the memcg visible to the lru walker isolation
> > function.
> >
> > Something that should be emphasized in this patch is that it replaces
> > for_each_memcg_cache_index() with for_each_mem_cgroup() in
> > list_lru_walk_node(). There is a gap between these two macros:
> > for_each_mem_cgroup() depends on CONFIG_MEMCG while the other depends on
> > CONFIG_MEMCG_KMEM. But as list_lru_memcg_aware() returns false if
> > CONFIG_MEMCG_KMEM is not configured, this replacement is safe.
> >
> > Cc: Dave Chinner
> > Signed-off-by: Yafang Shao
> > ....
> > @@ -299,17 +299,15 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
> >  				 list_lru_walk_cb isolate, void *cb_arg,
> >  				 unsigned long *nr_to_walk)
> >  {
> > +	struct mem_cgroup *memcg;
> >  	long isolated = 0;
> > -	int memcg_idx;
> >
> > -	isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
> > -				      nr_to_walk);
> > -	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
> > -		for_each_memcg_cache_index(memcg_idx) {
> > +	if (list_lru_memcg_aware(lru)) {
> > +		for_each_mem_cgroup(memcg) {
> >  			struct list_lru_node *nlru = &lru->node[nid];
> >
> >  			spin_lock(&nlru->lock);
> > -			isolated += __list_lru_walk_one(nlru, memcg_idx,
> > +			isolated += __list_lru_walk_one(nlru, memcg,
> >  							isolate, cb_arg,
> >  							nr_to_walk);
> >  			spin_unlock(&nlru->lock);
> > @@ -317,7 +315,11 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
> >  			if (*nr_to_walk <= 0)
> >  				break;
> >  		}
> > +	} else {
> > +		isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
> > +					      nr_to_walk);
> >  	}
> > +
>
> That's a change of behaviour. The old code always runs per-node
> reclaim, then if the LRU is memcg aware it also runs the memcg
> aware reclaim. The new code never runs global per-node reclaim
> if the list is memcg aware, so shrinkers that are initialised
> with the flags SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE seem
> likely to have reclaim problems with mixed memcg/global memory
> pressure scenarios.
>
> e.g. if all the memory is in the per-node lists, and the memcg needs
> to reclaim memory because of a global shortage, it is now unable to
> reclaim global memory.....
>

Hi Dave,

Thanks for your detailed explanation, but I have a different understanding.

The difference between for_each_mem_cgroup(memcg) and
for_each_memcg_cache_index(memcg_idx) is that for_each_mem_cgroup()
includes the root_mem_cgroup, while for_each_memcg_cache_index()
excludes the root_mem_cgroup because its memcg_idx is -1.
So it can reclaim global memory even if the list is memcg aware. Is
that right?

Thanks
Yafang