From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alex Shi <alex.shi@linux.alibaba.com>
Subject: Re: [PATCH v7 01/10] mm/vmscan: remove unnecessary lruvec adding
To: Konstantin Khlebnikov, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net,
 tj@kernel.org, hughd@google.com, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
 hannes@cmpxchg.org
Cc: yun.wang@linux.alibaba.com
Date: Mon, 13 Jan 2020 15:21:58 +0800
Message-ID: <8f0f9fd5-56d3-0ec3-e875-f2eb5e1e7971@linux.alibaba.com>
In-Reply-To: <6c91fb0a-d4e0-d960-1cfd-62bef5cd15a5@yandex-team.ru>
References: <1577264666-246071-1-git-send-email-alex.shi@linux.alibaba.com>
 <1577264666-246071-2-git-send-email-alex.shi@linux.alibaba.com>
 <6c91fb0a-d4e0-d960-1cfd-62bef5cd15a5@yandex-team.ru>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0) Gecko/20100101 Thunderbird/68.3.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On 2020/1/10 4:39 PM, Konstantin Khlebnikov wrote:
> On 25/12/2019 12.04, Alex Shi wrote:
>> We don't have to add a freeable page to the lru and then remove it again.
>> This change saves a couple of actions and makes the page moving clearer.
>>
>> The SetPageLRU needs to be kept here for list integrity. Otherwise:
>>
>>     #0 move_pages_to_lru              #1 release_pages
>>                                       if (put_page_testzero())
>>     if (put_page_testzero())
>>                                           !PageLRU //skip lru_lock
>>                                           list_add(&page->lru,);
>>     else
>>         list_add(&page->lru,) //corrupt
>>
>> Signed-off-by: Alex Shi
>> Cc: Andrew Morton
>> Cc: Johannes Weiner
>> Cc: Hugh Dickins
>> Cc: yun.wang@linux.alibaba.com
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>  mm/vmscan.c | 16 +++++++---------
>>  1 file changed, 7 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 572fb17c6273..8719361b47a0 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1852,26 +1852,18 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> 
> Here is another cleanup: pass only pgdat as the argument.
> 
> This function re-evaluates the lruvec for each page under the lru lock.
> Probably this is redundant for now, but it could be used in the future (or your patchset already uses that).

Thanks a lot for the comments, Konstantin!

Yes, we could pass only pgdat here, but since pgdat is going to be removed from this function later in the series, maybe it is better to skip this change? :)
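
Just to make sure I understand your idea correctly, the pgdat-only variant
would look roughly like below (only a rough sketch for discussion, not
something this patch does; the lruvec is still looked up per page under
pgdat->lru_lock):

static unsigned noinline_for_stack move_pages_to_lru(struct pglist_data *pgdat,
						     struct list_head *list)
{
	struct lruvec *lruvec;
	struct page *page;
	...
	while (!list_empty(list)) {
		page = lru_to_page(list);
		...
		/* re-evaluate the lruvec for each page, under pgdat->lru_lock */
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		...
	}
	...
}
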
> 
>>  	while (!list_empty(list)) {
>>  		page = lru_to_page(list);
>>  		VM_BUG_ON_PAGE(PageLRU(page), page);
>> +		list_del(&page->lru);
>>  		if (unlikely(!page_evictable(page))) {
>> -			list_del(&page->lru);
>>  			spin_unlock_irq(&pgdat->lru_lock);
>>  			putback_lru_page(page);
>>  			spin_lock_irq(&pgdat->lru_lock);
>>  			continue;
>>  		}
>> -		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>> -
> 
> Please leave a comment that we must set PageLRU before dropping our page reference.

Right, I will try to add a comment here.

> 
>>  		SetPageLRU(page);
>> -		lru = page_lru(page);
>> -
>> -		nr_pages = hpage_nr_pages(page);
>> -		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
>> -		list_move(&page->lru, &lruvec->lists[lru]);
>>  
>>  		if (put_page_testzero(page)) {
>>  			__ClearPageLRU(page);
>>  			__ClearPageActive(page);
>> -			del_page_from_lru_list(page, lruvec, lru);
>>  
>>  			if (unlikely(PageCompound(page))) {
>>  				spin_unlock_irq(&pgdat->lru_lock);
>> @@ -1880,6 +1872,12 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>>  			} else
>>  				list_add(&page->lru, &pages_to_free);
>>  		} else {
>> +			lruvec = mem_cgroup_page_lruvec(page, pgdat);
>> +			lru = page_lru(page);
>> +			nr_pages = hpage_nr_pages(page);
>> +
>> +			update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
>> +			list_add(&page->lru, &lruvec->lists[lru]);
>> 			nr_moved += nr_pages;
>>  		}
> 
> IMHO it looks better this way:
> 
> SetPageLRU()
> 
> if (unlikely(put_page_testzero())) {
> 
>  continue;
> }
> 

Yes, this looks better!

Thanks
Alex

> 
>>  	}
>> 
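
To double check that I read the suggestion right, the loop body would then
become roughly the below (just a sketch of the idea for discussion, with
the ordering comment you asked for; the compound page handling on the free
path is elided here):

		/*
		 * We must set PageLRU before dropping our reference, so a
		 * racing release_pages() cannot see !PageLRU and bypass the
		 * lru_lock while we still use page->lru (see the race in
		 * the changelog).
		 */
		SetPageLRU(page);

		if (unlikely(put_page_testzero(page))) {
			__ClearPageLRU(page);
			__ClearPageActive(page);
			/* free the page, or batch it onto pages_to_free */
			...
			continue;
		}

		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		lru = page_lru(page);
		nr_pages = hpage_nr_pages(page);

		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
		list_add(&page->lru, &lruvec->lists[lru]);
		nr_moved += nr_pages;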