Date: Mon, 21 Sep 2020 15:03:24 -0700 (PDT)
From: Hugh Dickins
To: Alex Shi
cc: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
    iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com
Subject: Re: [PATCH v18 15/32] mm/lru: move lock into lru_note_cost
In-Reply-To: 
Message-ID: 
References: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>
 <1598273705-69124-16-git-send-email-alex.shi@linux.alibaba.com>
User-Agent: Alpine 2.11 (LSU 23 2013-08-11)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

On Mon, 21 Sep 2020, Hugh Dickins wrote:
> On Mon, 24 Aug 2020, Alex Shi wrote:
> 
> > We have to move lru_lock into lru_note_cost, since it cycle up on memcg
> > tree, for future per lruvec lru_lock replace. It's a bit ugly and may
> > cost a bit more locking, but benefit from multiple memcg locking could
> > cover the lost.
> > 
> > Signed-off-by: Alex Shi
> 
> Acked-by: Hugh Dickins
> 
> In your lruv19 github tree, you have merged 14/32 into this one: thanks.

Grr, I've only just started, and already missed some of my notes.

I wanted to point out that this patch does introduce an extra unlock+lock
in shrink_inactive_list(), even in a !CONFIG_MEMCG build. I think you've
done the right thing for now, keeping it simple, and maybe nobody will
notice the extra overhead; but I expect us to replace lru_note_cost() by
lru_note_cost_unlock_irq() later on, expecting the caller to do the
initial lock_irq().

lru_note_cost_page() looks redundant to me, but you're right not to
delete it here, unless Johannes asks you to add that in: that's his
business, and it may be dependent on the XXX at its callsite.
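For illustration only, a rough sketch of what such a caller-locked
lru_note_cost_unlock_irq() might look like. The name is just the one
suggested above, not an existing kernel function; the cost-decay logic is
elided, and this is an assumption about a possible future shape, not
anything in this series:

/*
 * Hypothetical variant: the caller has already taken pgdat->lru_lock
 * with irqs disabled for the starting lruvec, so shrink_inactive_list()
 * avoids the extra unlock+lock noted above; the lock is dropped and
 * re-taken as we walk up the memcg hierarchy.
 */
void lru_note_cost_unlock_irq(struct lruvec *lruvec, bool file,
			      unsigned int nr_pages)
{
	bool locked = true;	/* first lru_lock held by the caller */

	do {
		struct pglist_data *pgdat = lruvec_pgdat(lruvec);

		if (!locked)
			spin_lock_irq(&pgdat->lru_lock);
		/* Record cost event */
		if (file)
			lruvec->file_cost += nr_pages;
		else
			lruvec->anon_cost += nr_pages;
		/* ... cost decay as in lru_note_cost(), elided ... */
		spin_unlock_irq(&pgdat->lru_lock);
		locked = false;
	} while ((lruvec = parent_lruvec(lruvec)));
}

With the node-wide pgdat->lru_lock this still re-takes the same lock at
each level, just as the patch below does; the only difference is that the
caller's initial lock_irq() is not wasted.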
> 
> > Cc: Johannes Weiner
> > Cc: Andrew Morton
> > Cc: linux-mm@kvack.org
> > Cc: linux-kernel@vger.kernel.org
> > ---
> >  mm/swap.c   | 5 +++--
> >  mm/vmscan.c | 4 +---
> >  2 files changed, 4 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 906255db6006..f80ccd6f3cb4 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -269,7 +269,9 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
> >  {
> >  	do {
> >  		unsigned long lrusize;
> > +		struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> >  
> > +		spin_lock_irq(&pgdat->lru_lock);
> >  		/* Record cost event */
> >  		if (file)
> >  			lruvec->file_cost += nr_pages;
> > @@ -293,15 +295,14 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
> >  			lruvec->file_cost /= 2;
> >  			lruvec->anon_cost /= 2;
> >  		}
> > +		spin_unlock_irq(&pgdat->lru_lock);
> >  	} while ((lruvec = parent_lruvec(lruvec)));
> >  }
> >  
> >  void lru_note_cost_page(struct page *page)
> >  {
> > -	spin_lock_irq(&page_pgdat(page)->lru_lock);
> >  	lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)),
> >  		      page_is_file_lru(page), thp_nr_pages(page));
> > -	spin_unlock_irq(&page_pgdat(page)->lru_lock);
> >  }
> >  
> >  static void __activate_page(struct page *page, struct lruvec *lruvec)
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index ffccb94defaf..7b7b36bd1448 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1971,19 +1971,17 @@ static int current_may_throttle(void)
> >  				&stat, false);
> >  
> >  	spin_lock_irq(&pgdat->lru_lock);
> > -
> >  	move_pages_to_lru(lruvec, &page_list);
> >  
> >  	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> > -	lru_note_cost(lruvec, file, stat.nr_pageout);
> >  	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
> >  	if (!cgroup_reclaim(sc))
> >  		__count_vm_events(item, nr_reclaimed);
> >  	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
> >  	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
> > -
> >  	spin_unlock_irq(&pgdat->lru_lock);
> >  
> > +	lru_note_cost(lruvec, file, stat.nr_pageout);
> >  	mem_cgroup_uncharge_list(&page_list);
> >  	free_unref_page_list(&page_list);
> >  
> > -- 
> > 1.8.3.1
> > 
> > 
> 