Date: Wed, 30 Oct 2019 18:53:02 +0100
From: Michal Hocko
To: Johannes Weiner
Cc: Shakeel Butt, Greg Thelen, Roman Gushchin, Andrew Morton,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	syzbot+13f93c99c06988391efe@syzkaller.appspotmail.com
Subject: Re: [PATCH] mm: vmscan: memcontrol: remove mem_cgroup_select_victim_node()
Message-ID: <20191030175302.GM31513@dhcp22.suse.cz>
References: <20191029234753.224143-1-shakeelb@google.com> <20191030174455.GA45135@cmpxchg.org>
In-Reply-To: <20191030174455.GA45135@cmpxchg.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 30-10-19 13:44:55, Johannes Weiner wrote:
> On Tue, Oct 29, 2019 at 04:47:53PM -0700, Shakeel Butt wrote:
> > Since commit 1ba6fc9af35b ("mm: vmscan: do not share cgroup iteration
> > between reclaimers"), memcg reclaim no longer bails out early based
> > on sc->nr_reclaimed and traverses all the nodes. All the reclaimable
> > pages of the memcg on all the nodes are scanned relative to the
> > reclaim priority, so there is no need to maintain state regarding
> > which node to start the memcg reclaim from. KCSAN also complains
> > about data races in the code maintaining that state.
> >
> > This patch effectively reverts commit 889976dbcb12 ("memcg: reclaim
> > memory from nodes in round-robin order") and commit 453a9bf347f1
> > ("memcg: fix numa scan information update to be triggered by memory
> > event").
> >
> > Signed-off-by: Shakeel Butt
> > Reported-by:
>
> Excellent, thanks Shakeel!
> Acked-by: Johannes Weiner
>
> Just a request on this bit:
>
> > @@ -3360,16 +3358,9 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> >  		.may_unmap = 1,
> >  		.may_swap = may_swap,
> >  	};
> > +	struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> >
> >  	set_task_reclaim_state(current, &sc.reclaim_state);
> > -	/*
> > -	 * Unlike direct reclaim via alloc_pages(), memcg's reclaim doesn't
> > -	 * take care of from where we get pages. So the node where we start the
> > -	 * scan does not need to be the current node.
> > -	 */
> > -	nid = mem_cgroup_select_victim_node(memcg);
> > -
> > -	zonelist = &NODE_DATA(nid)->node_zonelists[ZONELIST_FALLBACK];
>
> This works, but it *is* somewhat fragile if we decide to add bail-out
> conditions to reclaim again. And some NUMA nodes receiving slightly
> less pressure than others could be quite tricky to debug.
>
> Can we add a comment here that points out the assumption that the
> zonelist walk is comprehensive, and that all nodes receive equal
> reclaim pressure?

Makes sense.

> Also, I think we should use sc.gfp_mask & ~__GFP_THISNODE, so that
> allocations with a physical node preference still do node-agnostic
> reclaim for the purpose of cgroup accounting.

Don't we already exclude that via GFP_RECLAIM_MASK?
-- 
Michal Hocko
SUSE Labs