From: Vlastimil Babka
To: Alex Shi, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com, vdavydov.dev@gmail.com, shy828301@gmail.com
Cc: Michal Hocko, Yang Shi
Subject: Re: [PATCH v21 17/19] mm/lru: replace pgdat lru_lock with lruvec lock
Date: Thu, 12 Nov 2020 13:19:18 +0100
References: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com> <1604566549-62481-18-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1604566549-62481-18-git-send-email-alex.shi@linux.alibaba.com>

On 11/5/20 9:55 AM, Alex Shi wrote:
> This patch moves the per-node lru_lock into the lruvec, thus bringing a
> lru_lock for each memcg per node. On a large machine, each memcg then no
> longer has to suffer contention on the per-node pgdat->lru_lock and can
> go fast under its own lru_lock.
>
> After moving the memcg charge before LRU insertion, page isolation can
> serialize the page's memcg, so the per-memcg lruvec lock is stable and
> can replace the per-node lru lock.
>
> In isolate_migratepages_block(), compact_unlock_should_abort() and
> lock_page_lruvec_irqsave() are open coded to work with compact_control.
> Also add a debug function in the locking code which may give some clues
> if something gets out of hand.
>
> Daniel Jordan's testing shows a 62% improvement on the modified
> readtwice case on his 2P * 10 core * 2 HT Broadwell box.
> https://lore.kernel.org/lkml/20200915165807.kpp7uhiw7l3loofu@ca-dmjordan1.us.oracle.com/
>
> On a large machine with memcg enabled but not used, looking up the
> page's lruvec chases a few extra pointers, which may increase lru_lock
> hold time and cause a slight regression.
>
> Hugh Dickins helped polish the patch, thanks!
>
> Signed-off-by: Alex Shi
> Acked-by: Hugh Dickins
> Cc: Rong Chen
> Cc: Hugh Dickins
> Cc: Andrew Morton
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Vladimir Davydov
> Cc: Yang Shi
> Cc: Matthew Wilcox
> Cc: Konstantin Khlebnikov
> Cc: Tejun Heo
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: cgroups@vger.kernel.org

I think I need some explanation about the rcu_read_lock() usage in
lock_page_lruvec*() (and places effectively open coding it).

Preferably in the form of a code comment, but that can also be added as an
additional patch later; I don't want to block the series.

The mem_cgroup_page_lruvec() comment says:

 * This function relies on page->mem_cgroup being stable - see the
 * access rules in commit_charge().

and the commit_charge() comment:

 * Any of the following ensures page->mem_cgroup stability:
 *
 * - the page lock
 * - LRU isolation
 * - lock_page_memcg()
 * - exclusive reference

"LRU isolation" used to be quite clear, but now is it after
TestClearPageLRU(page), or after deleting from the lru list as well?
Also it doesn't mention rcu_read_lock() - should it?
So what exactly are we protecting by rcu_read_lock() in e.g. lock_page_lruvec()?

	rcu_read_lock();
	lruvec = mem_cgroup_page_lruvec(page, pgdat);
	spin_lock(&lruvec->lru_lock);
	rcu_read_unlock();

Looks like we are protecting the lruvec from going away, and it can't go away
anymore after we take the lru_lock?

But then e.g. in __munlock_pagevec() we are doing this without an
rcu_read_lock():

	new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));

where new_lruvec is potentially not the one that we have locked. And the last
thing mem_cgroup_page_lruvec() is doing is:

	if (unlikely(lruvec->pgdat != pgdat))
		lruvec->pgdat = pgdat;
	return lruvec;

So without the rcu_read_lock(), is this potentially accessing the pgdat field
of a lruvec that might have just gone away?

Thanks,
Vlastimil