From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org,
	cgroups@vger.kernel.org
Subject: [PATCH v13 16/18] mm/memcg: Add folio_lruvec_lock() and similar functions
Date: Mon, 12 Jul 2021 20:45:49 +0100
Message-Id: <20210712194551.91920-17-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210712194551.91920-1-willy@infradead.org>
References: <20210712194551.91920-1-willy@infradead.org>
MIME-Version: 1.0

These are the folio equivalents of lock_page_lruvec() and similar
functions.  Also convert lruvec_memcg_debug() to take a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h | 32 ++++++++++++++-----------
 mm/compaction.c            |  2 +-
 mm/huge_memory.c           |  5 ++--
 mm/memcontrol.c            | 48 ++++++++++++++++----------------
 mm/rmap.c                  |  2 +-
 mm/swap.c                  |  8 ++++---
 mm/vmscan.c                |  3 ++-
 7 files changed, 50 insertions(+), 50 deletions(-)
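A quick usage sketch (illustrative only, not part of the patch, and
ignored by git-am since it sits above the first diff): a hypothetical
caller that currently locks the lruvec through a page converts by
resolving the folio once with page_folio() and handing it to the new
functions.  The caller below is invented for illustration; the
functions are the ones this patch adds or that already exist.

	/* Before: look up and lock the lruvec via the page. */
	struct lruvec *lruvec;
	unsigned long flags;

	lruvec = lock_page_lruvec_irqsave(page, &flags);
	/* ... update LRU state for the page ... */
	unlock_page_lruvec_irqrestore(lruvec, flags);

	/* After: resolve the folio once, then lock via the folio. */
	struct folio *folio = page_folio(page);

	lruvec = folio_lruvec_lock_irqsave(folio, &flags);
	/* ... update LRU state for the folio ... */
	unlock_page_lruvec_irqrestore(lruvec, flags);

Note the unlock side is unchanged; only the lock side gains folio
variants here, which is why relock_page_lruvec_irq() and
relock_page_lruvec_irqsave() below still take a page and convert
internally.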
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d4af898a1294..57b1bf457f51 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -768,15 +768,16 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
 
-struct lruvec *lock_page_lruvec(struct page *page);
-struct lruvec *lock_page_lruvec_irq(struct page *page);
-struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+struct lruvec *folio_lruvec_lock(struct folio *folio);
+struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
+struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 						unsigned long *flags);
 
 #ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page);
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
 #else
-static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
+static inline
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
 {
 }
 #endif
@@ -1231,7 +1232,8 @@ static inline struct lruvec *folio_lruvec(struct folio *folio)
 	return &pgdat->__lruvec;
 }
 
-static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
+static inline
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
 {
 }
 
@@ -1261,26 +1263,26 @@ static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 }
 
-static inline struct lruvec *lock_page_lruvec(struct page *page)
+static inline struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);
 
 	spin_lock(&pgdat->__lruvec.lru_lock);
 	return &pgdat->__lruvec;
 }
 
-static inline struct lruvec *lock_page_lruvec_irq(struct page *page)
+static inline struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);
 
 	spin_lock_irq(&pgdat->__lruvec.lru_lock);
 	return &pgdat->__lruvec;
 }
 
-static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+static inline struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 		unsigned long *flagsp)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);
 
 	spin_lock_irqsave(&pgdat->__lruvec.lru_lock, *flagsp);
 	return &pgdat->__lruvec;
@@ -1537,6 +1539,7 @@ static inline bool page_matches_lruvec(struct page *page, struct lruvec *lruvec)
 static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 		struct lruvec *locked_lruvec)
 {
+	struct folio *folio = page_folio(page);
 	if (locked_lruvec) {
 		if (page_matches_lruvec(page, locked_lruvec))
 			return locked_lruvec;
@@ -1544,13 +1547,14 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 		unlock_page_lruvec_irq(locked_lruvec);
 	}
 
-	return lock_page_lruvec_irq(page);
+	return folio_lruvec_lock_irq(folio);
 }
 
 /* Don't lock again iff page's lruvec locked */
 static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 		struct lruvec *locked_lruvec, unsigned long *flags)
 {
+	struct folio *folio = page_folio(page);
 	if (locked_lruvec) {
 		if (page_matches_lruvec(page, locked_lruvec))
 			return locked_lruvec;
@@ -1558,7 +1562,7 @@ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
 	}
 
-	return lock_page_lruvec_irqsave(page, flags);
+	return folio_lruvec_lock_irqsave(folio, flags);
 }
 
 #ifdef CONFIG_CGROUP_WRITEBACK
diff --git a/mm/compaction.c b/mm/compaction.c
index a88f7b893f80..6f77577be248 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1038,7 +1038,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page);
+			lruvec_memcg_debug(lruvec, page_folio(page));
 
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ecb1fb1f5f3e..763bf687ca92 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2431,7 +2431,8 @@ static void __split_huge_page_tail(struct page *head, int tail,
 static void __split_huge_page(struct page *page, struct list_head *list,
 		pgoff_t end)
 {
-	struct page *head = compound_head(page);
+	struct folio *folio = page_folio(page);
+	struct page *head = &folio->page;
 	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
@@ -2450,7 +2451,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = lock_page_lruvec(head);
+	lruvec = folio_lruvec_lock(folio);
 
 	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3152a0e1ba6f..08add9e110ee 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1158,67 +1158,59 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 }
 
 #ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
 {
 	struct mem_cgroup *memcg;
 
 	if (mem_cgroup_disabled())
 		return;
 
-	memcg = page_memcg(page);
+	memcg = folio_memcg(folio);
 
 	if (!memcg)
-		VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != root_mem_cgroup, page);
+		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
 	else
-		VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != memcg, page);
+		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
 }
 #endif
 
 /**
- * lock_page_lruvec - lock and return lruvec for a given page.
- * @page: the page
+ * folio_lruvec_lock - lock and return lruvec for a given folio.
+ * @folio: Pointer to the folio.
  *
  * These functions are safe to use under any of the following conditions:
- * - page locked
- * - PageLRU cleared
- * - lock_page_memcg()
- * - page->_refcount is zero
+ * - folio locked
+ * - folio_lru cleared
+ * - folio_memcg_lock()
+ * - folio frozen (refcount of 0)
  */
-struct lruvec *lock_page_lruvec(struct page *page)
+struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 
-	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);
 
 	return lruvec;
 }
 
-struct lruvec *lock_page_lruvec_irq(struct page *page)
+struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 
-	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);
 
 	return lruvec;
 }
 
-struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags)
+struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
+		unsigned long *flags)
 {
-	struct folio *folio = page_folio(page);
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 
-	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);
 
 	return lruvec;
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 795f9d5f8386..b416af486812 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -33,7 +33,7 @@
  *             mapping->private_lock (in __set_page_dirty_buffers)
  *               lock_page_memcg move_lock (in __set_page_dirty_buffers)
  *                 i_pages lock (widely used)
- *                   lruvec->lru_lock (in lock_page_lruvec_irq)
+ *                   lruvec->lru_lock (in folio_lruvec_lock_irq)
  *             inode->i_lock (in set_page_dirty's __mark_inode_dirty)
  *             bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
  *               sb_lock (within inode_lock in fs/fs-writeback.c)
diff --git a/mm/swap.c b/mm/swap.c
index d5136cac4267..a82812caf409 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -80,10 +80,11 @@ static DEFINE_PER_CPU(struct lru_pvecs, lru_pvecs) = {
 static void __page_cache_release(struct page *page)
 {
 	if (PageLRU(page)) {
+		struct folio *folio = page_folio(page);
 		struct lruvec *lruvec;
 		unsigned long flags;
 
-		lruvec = lock_page_lruvec_irqsave(page, &flags);
+		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
 		del_page_from_lru_list(page, lruvec);
 		__clear_page_lru_flags(page);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
@@ -372,11 +373,12 @@ static inline void activate_page_drain(int cpu)
 
 static void activate_page(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	struct lruvec *lruvec;
 
-	page = compound_head(page);
+	page = &folio->page;
 	if (TestClearPageLRU(page)) {
-		lruvec = lock_page_lruvec_irq(page);
+		lruvec = folio_lruvec_lock_irq(folio);
 		__activate_page(page, lruvec);
 		unlock_page_lruvec_irq(lruvec);
 		SetPageLRU(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4620df62f0ff..0d48306d37dc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1965,6 +1965,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
  */
 int isolate_lru_page(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	int ret = -EBUSY;
 
 	VM_BUG_ON_PAGE(!page_count(page), page);
@@ -1974,7 +1975,7 @@ int isolate_lru_page(struct page *page)
 		struct lruvec *lruvec;
 
 		get_page(page);
-		lruvec = lock_page_lruvec_irq(page);
+		lruvec = folio_lruvec_lock_irq(folio);
 		del_page_from_lru_list(page, lruvec);
 		unlock_page_lruvec_irq(lruvec);
 		ret = 0;
-- 
2.30.2