From: Alexander Duyck
To: Alex Shi
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov,
 Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot,
 linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang,
 "Kirill A. Shutemov", Thomas Gleixner, Andrey Ryabinin
Subject: Re: [PATCH v16 19/22] mm/lru: introduce the relock_page_lruvec function
Date: Fri, 17 Jul 2020 15:03:24 -0700
In-Reply-To: <1594429136-20002-20-git-send-email-alex.shi@linux.alibaba.com>
References: <1594429136-20002-1-git-send-email-alex.shi@linux.alibaba.com>
 <1594429136-20002-20-git-send-email-alex.shi@linux.alibaba.com>

On Fri, Jul 10, 2020 at 5:59 PM Alex Shi wrote:
>
> Use this new function to replace repeated same code, no func change.
>
> Signed-off-by: Alex Shi
> Cc: Johannes Weiner
> Cc: Andrew Morton
> Cc: Thomas Gleixner
> Cc: Andrey Ryabinin
> Cc: Matthew Wilcox
> Cc: Mel Gorman
> Cc: Konstantin Khlebnikov
> Cc: Hugh Dickins
> Cc: Tejun Heo
> Cc: linux-kernel@vger.kernel.org
> Cc: cgroups@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  mm/mlock.c  |  9 +--------
>  mm/swap.c   | 33 +++++++--------------------------
>  mm/vmscan.c |  8 +-------
>  3 files changed, 9 insertions(+), 41 deletions(-)
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index cb23a0c2cfbf..4f40fc091cf9 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -289,17 +289,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>          /* Phase 1: page isolation */
>          for (i = 0; i < nr; i++) {
>                  struct page *page = pvec->pages[i];
> -                struct lruvec *new_lruvec;
>                  bool clearlru;
>
>                  clearlru = TestClearPageLRU(page);
> -
> -                new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -                if (new_lruvec != lruvec) {
> -                        if (lruvec)
> -                                unlock_page_lruvec_irq(lruvec);
> -                        lruvec = lock_page_lruvec_irq(page);
> -                }
> +                lruvec = relock_page_lruvec_irq(page, lruvec);
>
>                  if (!TestClearPageMlocked(page)) {
>                          delta_munlocked++;
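As an aside for anyone reading this without the earlier patches in front of
them: I am assuming relock_page_lruvec_irq()/_irqsave() are essentially just
the open-coded compare/unlock/lock sequence they replace here, wrapped into a
single helper. A rough sketch of the _irq flavor, under that assumption (the
_irqsave variant would additionally pass the flags through):

/*
 * Sketch only -- not the actual definition from the earlier patch.
 * Reuse the already-held lruvec lock if the page maps to the same
 * lruvec, otherwise drop it and take the lock for the page's lruvec.
 */
static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
                                                    struct lruvec *locked_lruvec)
{
        struct lruvec *new_lruvec;

        new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
        if (new_lruvec == locked_lruvec)
                return locked_lruvec;

        if (locked_lruvec)
                unlock_page_lruvec_irq(locked_lruvec);

        return lock_page_lruvec_irq(page);
}

The nice property is that the lock stays held across consecutive pages that
belong to the same lruvec, which is where the batching win in these loops
comes from.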
> diff --git a/mm/swap.c b/mm/swap.c
> index 129c532357a4..9fb906fbaed5 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -209,19 +209,12 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
>
>          for (i = 0; i < pagevec_count(pvec); i++) {
>                  struct page *page = pvec->pages[i];
> -                struct lruvec *new_lruvec;
> -
> -                new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -                if (lruvec != new_lruvec) {
> -                        if (lruvec)
> -                                unlock_page_lruvec_irqrestore(lruvec, flags);
> -                        lruvec = lock_page_lruvec_irqsave(page, &flags);
> -                }
>
>                  /* block memcg migration during page moving between lru */
>                  if (!TestClearPageLRU(page))
>                          continue;
>
> +                lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
>                  (*move_fn)(page, lruvec);
>
>                  SetPageLRU(page);

So looking at this I realize that patch 18 probably should have ordered
things the same way, with the TestClearPageLRU happening before you fetch
the new_lruvec. Otherwise I think you are potentially exposed to the
original issue you were fixing in the previous patch that added the call
to TestClearPageLRU.

> @@ -866,17 +859,12 @@ void release_pages(struct page **pages, int nr)
>                  }
>
>                  if (PageLRU(page)) {
> -                        struct lruvec *new_lruvec;
> -
> -                        new_lruvec = mem_cgroup_page_lruvec(page,
> -                                                        page_pgdat(page));
> -                        if (new_lruvec != lruvec) {
> -                                if (lruvec)
> -                                        unlock_page_lruvec_irqrestore(lruvec,
> -                                                                        flags);
> +                        struct lruvec *pre_lruvec = lruvec;
> +
> +                        lruvec = relock_page_lruvec_irqsave(page, lruvec,
> +                                                                        &flags);
> +                        if (pre_lruvec != lruvec)

So this doesn't really read right. I suppose "pre_lruvec" should probably
be "prev_lruvec", since I assume you mean "previous" rather than "before".

>                                  lock_batch = 0;
> -                                lruvec = lock_page_lruvec_irqsave(page, &flags);
> -                        }
>
>                          __ClearPageLRU(page);
>                          del_page_from_lru_list(page, lruvec, page_off_lru(page));
> @@ -982,15 +970,8 @@ void __pagevec_lru_add(struct pagevec *pvec)
>
>          for (i = 0; i < pagevec_count(pvec); i++) {
>                  struct page *page = pvec->pages[i];
> -                struct lruvec *new_lruvec;
> -
> -                new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -                if (lruvec != new_lruvec) {
> -                        if (lruvec)
> -                                unlock_page_lruvec_irqrestore(lruvec, flags);
> -                        lruvec = lock_page_lruvec_irqsave(page, &flags);
> -                }
>
> +                lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
>                  __pagevec_lru_add_fn(page, lruvec);
>          }
>          if (lruvec)
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 168c1659e430..bdb53a678e7e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4292,15 +4292,9 @@ void check_move_unevictable_pages(struct pagevec *pvec)
>
>          for (i = 0; i < pvec->nr; i++) {
>                  struct page *page = pvec->pages[i];
> -                struct lruvec *new_lruvec;
>
>                  pgscanned++;
> -                new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -                if (lruvec != new_lruvec) {
> -                        if (lruvec)
> -                                unlock_page_lruvec_irq(lruvec);
> -                        lruvec = lock_page_lruvec_irq(page);
> -                }
> +                lruvec = relock_page_lruvec_irq(page, lruvec);
>
>                  if (!PageLRU(page) || !PageUnevictable(page))
>                          continue;
> --
> 1.8.3.1
>
>
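To make the ordering point above a bit more concrete, the shape I am after is
what this patch already gives pagevec_lru_move_fn(). Sketched from the hunk
above, so treat it as illustrative rather than the exact resulting code:

        for (i = 0; i < pagevec_count(pvec); i++) {
                struct page *page = pvec->pages[i];

                /* Clear PageLRU first to block memcg migration of the page... */
                if (!TestClearPageLRU(page))
                        continue;

                /* ...and only then look up and (re)lock the lruvec it belongs to. */
                lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
                (*move_fn)(page, lruvec);

                SetPageLRU(page);
        }

That is the ordering I would like to see patch 18 follow as well:
TestClearPageLRU before the lruvec lookup, so the page cannot be moved to a
different lruvec between the lookup and the lock, which I believe is the race
the previous patch was closing.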