Subject: Re: [PATCH v21 18/19] mm/lru: introduce the relock_page_lruvec function
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
 iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
 alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
 vdavydov.dev@gmail.com, shy828301@gmail.com
Cc: Alexander Duyck, Thomas Gleixner, Andrey Ryabinin
References: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com>
 <1604566549-62481-19-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1604566549-62481-19-git-send-email-alex.shi@linux.alibaba.com>
Message-ID: <66d8e79d-7ec6-bfbc-1c82-bf32db3ae5b7@linux.alibaba.com>
Date: Fri, 6 Nov 2020 15:50:22 +0800

On 2020/11/5 at 4:55 PM, Alex Shi wrote:
> From: Alexander Duyck

Updated the patch for the page_memcg() change:

From 6c142eb582e7d0dbf473572ad092eca07ab75221 Mon Sep 17 00:00:00 2001
From: Alexander Duyck
Date: Tue, 26 May 2020 17:31:15 +0800
Subject: [PATCH v21 18/19] mm/lru: introduce the relock_page_lruvec function

Use this new function to replace the repeated, identical code; no
functional change.

When testing for relock we can avoid the need for RCU locking if we simply
compare the page pgdat and memcg pointers versus those that the lruvec is
holding. By doing this we can avoid the extra pointer walks and accesses of
the memory cgroup. In addition, we can avoid the checks entirely if lruvec
is currently NULL.

Signed-off-by: Alexander Duyck
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Johannes Weiner
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Andrey Ryabinin
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Tejun Heo
Cc: linux-kernel@vger.kernel.org
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/memcontrol.h | 52 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/mlock.c                 | 11 +---------
 mm/swap.c                  | 33 +++++++----------------------
 mm/vmscan.c                | 12 ++---------
 4 files changed, 62 insertions(+), 46 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6ecb08ff4ad1..8c57d6335ee4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -660,6 +660,22 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
 
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 
+static inline bool lruvec_holds_page_lru_lock(struct page *page,
+					      struct lruvec *lruvec)
+{
+	pg_data_t *pgdat = page_pgdat(page);
+	const struct mem_cgroup *memcg;
+	struct mem_cgroup_per_node *mz;
+
+	if (mem_cgroup_disabled())
+		return lruvec == &pgdat->__lruvec;
+
+	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+	memcg = page_memcg(page) ? : root_mem_cgroup;
+
+	return lruvec->pgdat == pgdat && mz->memcg == memcg;
+}
+
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
@@ -1221,6 +1237,14 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
 	return &pgdat->__lruvec;
 }
 
+static inline bool lruvec_holds_page_lru_lock(struct page *page,
+					      struct lruvec *lruvec)
+{
+	pg_data_t *pgdat = page_pgdat(page);
+
+	return lruvec == &pgdat->__lruvec;
+}
+
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
 	return NULL;
@@ -1663,6 +1687,34 @@ static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 }
 
+/* Don't lock again iff page's lruvec locked */
+static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+		struct lruvec *locked_lruvec)
+{
+	if (locked_lruvec) {
+		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
+			return locked_lruvec;
+
+		unlock_page_lruvec_irq(locked_lruvec);
+	}
+
+	return lock_page_lruvec_irq(page);
+}
+
+/* Don't lock again iff page's lruvec locked */
+static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
+		struct lruvec *locked_lruvec, unsigned long *flags)
+{
+	if (locked_lruvec) {
+		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
+			return locked_lruvec;
+
+		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+	}
+
+	return lock_page_lruvec_irqsave(page, flags);
+}
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
diff --git a/mm/mlock.c b/mm/mlock.c
index ab164a675c25..55b3b3672977 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -277,16 +277,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 		 * so we can spare the get_page() here.
 		 */
 		if (TestClearPageLRU(page)) {
-			struct lruvec *new_lruvec;
-
-			new_lruvec = mem_cgroup_page_lruvec(page,
-					page_pgdat(page));
-			if (new_lruvec != lruvec) {
-				if (lruvec)
-					unlock_page_lruvec_irq(lruvec);
-				lruvec = lock_page_lruvec_irq(page);
-			}
-
+			lruvec = relock_page_lruvec_irq(page, lruvec);
 			del_page_from_lru_list(page, lruvec, page_lru(page));
 			continue;
diff --git a/mm/swap.c b/mm/swap.c
index ed033f7c4f2d..c593ba596dea 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -210,19 +210,12 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
-		struct lruvec *new_lruvec;
 
 		/* block memcg migration during page moving between lru */
 		if (!TestClearPageLRU(page))
 			continue;
 
-		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-		if (lruvec != new_lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = lock_page_lruvec_irqsave(page, &flags);
-		}
-
+		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
 		(*move_fn)(page, lruvec);
 
 		SetPageLRU(page);
@@ -918,17 +911,12 @@ void release_pages(struct page **pages, int nr)
 		}
 
 		if (PageLRU(page)) {
-			struct lruvec *new_lruvec;
-
-			new_lruvec = mem_cgroup_page_lruvec(page,
-							page_pgdat(page));
-			if (new_lruvec != lruvec) {
-				if (lruvec)
-					unlock_page_lruvec_irqrestore(lruvec,
-									flags);
+			struct lruvec *prev_lruvec = lruvec;
+
+			lruvec = relock_page_lruvec_irqsave(page, lruvec,
+									&flags);
+			if (prev_lruvec != lruvec)
 				lock_batch = 0;
-				lruvec = lock_page_lruvec_irqsave(page, &flags);
-			}
 
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			__ClearPageLRU(page);
@@ -1033,15 +1021,8 @@ void __pagevec_lru_add(struct pagevec *pvec)
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
-		struct lruvec *new_lruvec;
-
-		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-		if (lruvec != new_lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = lock_page_lruvec_irqsave(page, &flags);
-		}
 
+		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
 		__pagevec_lru_add_fn(page, lruvec);
 	}
 	if (lruvec)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2953ddec88a0..3b09a39de8cd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1884,8 +1884,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		 * All pages were isolated from the same lruvec (and isolation
 		 * inhibits memcg migration).
 		 */
-		VM_BUG_ON_PAGE(mem_cgroup_page_lruvec(page, page_pgdat(page))
-					!= lruvec, page);
+		VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
 		lru = page_lru(page);
 		nr_pages = thp_nr_pages(page);
 
@@ -4277,7 +4276,6 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 	for (i = 0; i < pvec->nr; i++) {
 		struct page *page = pvec->pages[i];
 		int nr_pages;
-		struct lruvec *new_lruvec;
 
 		if (PageTransTail(page))
 			continue;
@@ -4289,13 +4287,7 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		if (!TestClearPageLRU(page))
 			continue;
 
-		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-		if (lruvec != new_lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irq(lruvec);
-			lruvec = lock_page_lruvec_irq(page);
-		}
-
+		lruvec = relock_page_lruvec_irq(page, lruvec);
 		if (page_evictable(page) && PageUnevictable(page)) {
 			enum lru_list lru = page_lru_base_type(page);
 
-- 
1.8.3.1
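
For readers outside the kernel tree, below is a minimal userspace sketch of the
relock pattern the helpers above implement. It is an illustrative analogue, not
kernel code: toy_lruvec, toy_page, holds_page() and relock() are hypothetical
stand-ins for lruvec, page, lruvec_holds_page_lru_lock() and
relock_page_lruvec_irq(), with a pthread mutex standing in for the
irq-disabling spinlock.

/*
 * Minimal userspace sketch of the relock pattern, under the assumptions
 * stated above. All names here are illustrative, not kernel API.
 */
#include <pthread.h>
#include <stdio.h>

struct toy_lruvec {
	pthread_mutex_t lru_lock;
	int id;
};

struct toy_page {
	struct toy_lruvec *lruvec;	/* which lruvec "holds" this page */
};

/* Analogue of lruvec_holds_page_lru_lock(): a plain pointer compare. */
static int holds_page(struct toy_page *page, struct toy_lruvec *lruvec)
{
	return page->lruvec == lruvec;
}

/* Analogue of relock_page_lruvec_irq(): only cycle the lock on a miss. */
static struct toy_lruvec *relock(struct toy_page *page,
				 struct toy_lruvec *locked)
{
	if (locked) {
		if (holds_page(page, locked))
			return locked;	/* fast path: reuse the held lock */
		pthread_mutex_unlock(&locked->lru_lock);
	}
	pthread_mutex_lock(&page->lruvec->lru_lock);
	return page->lruvec;
}

int main(void)
{
	struct toy_lruvec a = { PTHREAD_MUTEX_INITIALIZER, 0 };
	struct toy_lruvec b = { PTHREAD_MUTEX_INITIALIZER, 1 };
	struct toy_page pages[] = { {&a}, {&a}, {&b}, {&b}, {&a} };
	struct toy_lruvec *locked = NULL;
	size_t i;

	/* Batch loop in the style of pagevec_lru_move_fn() above. */
	for (i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
		locked = relock(&pages[i], locked);
		printf("page %zu handled under lruvec %d\n", i, locked->id);
	}
	if (locked)
		pthread_mutex_unlock(&locked->lru_lock);
	return 0;
}

Built with cc -pthread, the loop takes each lru_lock once per run of
same-lruvec pages rather than once per page, which is the batching win the
patch description claims, and the holds_page() compare is what lets the fast
path skip any cgroup lookup.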