Subject: Re: [PATCH v17 17/21] mm/lru: replace pgdat lru_lock with lruvec lock
From: Alex Shi <alex.shi@linux.alibaba.com>
To: Alexander Duyck
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov,
 Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot,
 linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang,
Shutemov" , Rong Chen , Michal Hocko , Vladimir Davydov References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com> <1595681998-19193-18-git-send-email-alex.shi@linux.alibaba.com> From: Alex Shi Message-ID: <49d2a784-3560-4d97-ece2-f2dfb6941495@linux.alibaba.com> Date: Tue, 28 Jul 2020 15:15:34 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0) Gecko/20100101 Thunderbird/68.7.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8 X-Rspamd-Queue-Id: 11DB98003405 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam04 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: =E5=9C=A8 2020/7/28 =E4=B8=8A=E5=8D=887:34, Alexander Duyck =E5=86=99=E9=81= =93: >> @@ -847,11 +847,21 @@ static bool too_many_isolated(pg_data_t *pgdat) >> * contention, to give chance to IRQs. Abort completel= y if >> * a fatal signal is pending. >> */ >> - if (!(low_pfn % SWAP_CLUSTER_MAX) >> - && compact_unlock_should_abort(&pgdat->lru_lock, >> - flags, &locked, cc)) { >> - low_pfn =3D 0; >> - goto fatal_pending; >> + if (!(low_pfn % SWAP_CLUSTER_MAX)) { >> + if (locked_lruvec) { >> + unlock_page_lruvec_irqrestore(locked_l= ruvec, >> + = flags); >> + locked_lruvec =3D NULL; >> + } >> + >> + if (fatal_signal_pending(current)) { >> + cc->contended =3D true; >> + >> + low_pfn =3D 0; >> + goto fatal_pending; >> + } >> + >> + cond_resched(); >> } >> >> if (!pfn_valid_within(low_pfn)) >=20 > I'm noticing this patch introduces a bunch of noise. What is the > reason for getting rid of compact_unlock_should_abort? It seems like > you just open coded it here. If there is some sort of issue with it > then it might be better to replace it as part of a preparatory patch > before you introduce this one as changes like this make it harder to > review. Thanks for comments, Alex. the func compact_unlock_should_abort should be removed since one of param= eters changed from 'bool *locked' to 'struct lruvec *lruvec'. So it's not appli= cable now. I have to open it here instead of adding a only one user func. >=20 > It might make more sense to look at modifying > compact_unlock_should_abort and compact_lock_irqsave (which always > returns true so should probably be a void) to address the deficiencies > they have that make them unusable for you. I am wondering if people like a patch which just open compact_unlock_shou= ld_abort func and move bool to void as a preparation patch, do you like this? >> @@ -966,10 +975,20 @@ static bool too_many_isolated(pg_data_t *pgdat) >> if (!TestClearPageLRU(page)) >> goto isolate_fail_put; >> >> + rcu_read_lock(); >> + lruvec =3D mem_cgroup_page_lruvec(page, pgdat); >> + >> /* If we already hold the lock, we can skip some reche= cking */ >> - if (!locked) { >> - locked =3D compact_lock_irqsave(&pgdat->lru_lo= ck, >> - &flags= , cc); >> + if (lruvec !=3D locked_lruvec) { >> + if (locked_lruvec) >> + unlock_page_lruvec_irqrestore(locked_l= ruvec, >> + = flags); >> + >> + compact_lock_irqsave(&lruvec->lru_lock, &flags= , cc); >> + locked_lruvec =3D lruvec; >> + rcu_read_unlock(); >> + >> + lruvec_memcg_debug(lruvec, page); >> >> /* Try get exclusive access under lock */ >> if (!skip_updated) { >=20 > So this bit makes things a bit complicated. From what I can can tell > the comment about exclusive access under the lock is supposed to apply > to the pageblock via the lru_lock. 
>> @@ -966,10 +975,20 @@ static bool too_many_isolated(pg_data_t *pgdat)
>>                 if (!TestClearPageLRU(page))
>>                         goto isolate_fail_put;
>>
>> +               rcu_read_lock();
>> +               lruvec = mem_cgroup_page_lruvec(page, pgdat);
>> +
>>                 /* If we already hold the lock, we can skip some rechecking */
>> -               if (!locked) {
>> -                       locked = compact_lock_irqsave(&pgdat->lru_lock,
>> -                                                       &flags, cc);
>> +               if (lruvec != locked_lruvec) {
>> +                       if (locked_lruvec)
>> +                               unlock_page_lruvec_irqrestore(locked_lruvec,
>> +                                                               flags);
>> +
>> +                       compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
>> +                       locked_lruvec = lruvec;
>> +                       rcu_read_unlock();
>> +
>> +                       lruvec_memcg_debug(lruvec, page);
>>
>>                         /* Try get exclusive access under lock */
>>                         if (!skip_updated) {
>
> So this bit makes things a bit complicated. From what I can tell
> the comment about exclusive access under the lock is supposed to apply
> to the pageblock via the lru_lock. However you are having to retest
> the lock for each page because it is possible the page was moved to
> another memory cgroup while the lru_lock was released, correct? So in

The pageblock is pfn-aligned, so the pages in it may not belong to the same
memcg in the first place. And yes, a page may also be moved to another memcg
while the lock is dropped.

> this case is the lru vector lock really providing any protection for
> the skip_updated portion of this code block if the lock isn't
> exclusive to the pageblock? In theory this would probably make more
> sense to have protected the skip bits under the zone lock, but I
> imagine that was avoided due to the additional overhead.

After the change to lruvec->lru_lock, this does the same thing as
pgdat->lru_lock did before; we just get a slightly better chance of reaching
here, finding the pageblock is skippable and quitting.

Yes, logically the pgdat lru_lock looks better for the skip bits, but since
we are already holding an lru_lock here, it's fine not to involve more locks.

>
>> @@ -1876,6 +1876,12 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>>                  * list_add(&page->lru,)
>>                  * list_add(&page->lru,) //corrupt
>>                  */
>> +               new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
>> +               if (new_lruvec != lruvec) {
>> +                       if (lruvec)
>> +                               spin_unlock_irq(&lruvec->lru_lock);
>> +                       lruvec = lock_page_lruvec_irq(page);
>> +               }
>>                 SetPageLRU(page);
>>
>>                 if (unlikely(put_page_testzero(page))) {
>
> I was going through the code of the entire patch set and I noticed
> these changes in move_pages_to_lru. What is the reason for adding the
> new_lruvec logic? My understanding is that we are moving the pages to
> the lruvec provided, are we not? If so why do we need to add code to get
> a new lruvec? The code itself seems to stand out from the rest of the
> patch as it is introducing new code instead of replacing existing
> locking code, and it doesn't match up with the description of what
> this function is supposed to do since it changes the lruvec.

This code was added here after some bugs showed up. I will check it again
anyway.

Thanks!
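P.S. To make the pattern under discussion concrete: the new_lruvec logic in
the hunk above, factored into a helper, would look roughly like the sketch
below. The helper name is made up and this is not part of the posted series;
it only illustrates the "switch to the page's lruvec lock if it differs from
the one currently held" step:

/*
 * Sketch only, hypothetical helper (not in the posted series): if @page
 * belongs to a different lruvec than the one currently locked, drop that
 * lock and take the lock of the page's own lruvec instead.  Mirrors the
 * open-coded new_lruvec check in the quoted move_pages_to_lru() hunk.
 */
static struct lruvec *relock_page_lruvec_irq_sketch(struct page *page,
                                        struct lruvec *locked_lruvec)
{
        struct lruvec *new_lruvec;

        new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
        if (new_lruvec != locked_lruvec) {
                if (locked_lruvec)
                        spin_unlock_irq(&locked_lruvec->lru_lock);
                /* lock_page_lruvec_irq() looks up the lruvec again under the lock */
                locked_lruvec = lock_page_lruvec_irq(page);
        }

        return locked_lruvec;
}

It is the same dance the compaction hunk does with
unlock_page_lruvec_irqrestore()/compact_lock_irqsave(), just with the plain
irq variants.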