From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, Muchun Song
Subject: [RFC PATCH 14/15] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
Date: Tue, 30 Mar 2021 18:15:30 +0800
Message-Id: <20210330101531.82752-15-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210330101531.82752-1-songmuchun@bytedance.com>
References: <20210330101531.82752-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Now lock_page_memcg() does not lock a page and memcg binding; it
actually locks a page and objcg binding. So rename lock_page_memcg()
to lock_page_objcg(). This is just a code cleanup without any
functional changes.

Signed-off-by: Muchun Song
---
 Documentation/admin-guide/cgroup-v1/memory.rst |  2 +-
 fs/buffer.c                                    | 10 ++--
 fs/iomap/buffered-io.c                         |  4 +-
 include/linux/memcontrol.h                     | 22 +++++----
 mm/filemap.c                                   |  2 +-
 mm/huge_memory.c                               |  4 +-
 mm/memcontrol.c                                | 65 ++++++++++++++++----------
 mm/page-writeback.c                            | 26 +++++------
 mm/rmap.c                                      | 14 +++---
 9 files changed, 85 insertions(+), 64 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 0936412e044e..578823f2c764 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -291,7 +291,7 @@ Lock order is as follows:
 
   Page lock (PG_locked bit of page->flags)
     mm->page_table_lock or split pte_lock
-      lock_page_memcg (memcg->move_lock)
+      lock_page_objcg (memcg->move_lock)
        mapping->i_pages lock
         lruvec->lru_lock.
 
diff --git a/fs/buffer.c b/fs/buffer.c
index 790ba6660d10..8b6d66511690 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -595,7 +595,7 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
  * If warn is true, then emit a warning if the page is not uptodate and has
  * not been truncated.
  *
- * The caller must hold lock_page_memcg().
+ * The caller must hold lock_page_objcg().
  */
 void __set_page_dirty(struct page *page, struct address_space *mapping,
			     int warn)
@@ -660,14 +660,14 @@ int __set_page_dirty_buffers(struct page *page)
	 * Lock out page's memcg migration to keep PageDirty
	 * synchronized with per-memcg dirty page counters.
	 */
-	lock_page_memcg(page);
+	lock_page_objcg(page);
	newly_dirty = !TestSetPageDirty(page);
	spin_unlock(&mapping->private_lock);
 
	if (newly_dirty)
		__set_page_dirty(page, mapping, 1);
 
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 
	if (newly_dirty)
		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -1168,13 +1168,13 @@ void mark_buffer_dirty(struct buffer_head *bh)
		struct page *page = bh->b_page;
		struct address_space *mapping = NULL;
 
-		lock_page_memcg(page);
+		lock_page_objcg(page);
		if (!TestSetPageDirty(page)) {
			mapping = page_mapping(page);
			if (mapping)
				__set_page_dirty(page, mapping, 0);
		}
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
		if (mapping)
			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
	}
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 16a1e82e3aeb..8a3ffd38d9e0 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -653,11 +653,11 @@ iomap_set_page_dirty(struct page *page)
	 * Lock out page's memcg migration to keep PageDirty
	 * synchronized with per-memcg dirty page counters.
	 */
-	lock_page_memcg(page);
+	lock_page_objcg(page);
	newly_dirty = !TestSetPageDirty(page);
	if (newly_dirty)
		__set_page_dirty(page, mapping, 0);
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 
	if (newly_dirty)
		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index cd9e9ff6c2bf..688a8e1fa9b6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -410,11 +410,12 @@ static inline struct obj_cgroup *page_objcg(struct page *page)
  * proper memory cgroup pointer. It's not safe to call this function
  * against some type of pages, e.g. slab pages or ex-slab pages.
  *
- * For a page any of the following ensures page and objcg binding stability:
+ * For a page any of the following ensures page and objcg binding stability
+ * (but the page can be reparented to its parent memcg):
  *
  * - the page lock
  * - LRU isolation
- * - lock_page_memcg()
+ * - lock_page_objcg()
  * - exclusive reference
  *
  * Based on the stable binding of page and objcg, for a page any of the
@@ -947,9 +948,9 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
 extern bool cgroup_memory_noswap;
 #endif
 
-struct mem_cgroup *lock_page_memcg(struct page *page);
-void __unlock_page_memcg(struct mem_cgroup *memcg);
-void unlock_page_memcg(struct page *page);
+struct obj_cgroup *lock_page_objcg(struct page *page);
+void __unlock_page_objcg(struct obj_cgroup *objcg);
+void unlock_page_objcg(struct page *page);
 
 /*
  * idx can be of type enum memcg_stat_item or node_stat_item.
@@ -1155,6 +1156,11 @@ void mem_cgroup_split_huge_fixup(struct page *head);
 
 struct mem_cgroup;
 
+static inline struct obj_cgroup *page_objcg(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *page_memcg(struct page *page)
 {
	return NULL;
@@ -1375,16 +1381,16 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
 {
 }
 
-static inline struct mem_cgroup *lock_page_memcg(struct page *page)
+static inline struct obj_cgroup *lock_page_objcg(struct page *page)
 {
	return NULL;
 }
 
-static inline void __unlock_page_memcg(struct mem_cgroup *memcg)
+static inline void __unlock_page_objcg(struct obj_cgroup *objcg)
 {
 }
 
-static inline void unlock_page_memcg(struct page *page)
+static inline void unlock_page_objcg(struct page *page)
 {
 }
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 925964b67583..c427de610860 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -110,7 +110,7 @@
  *      ->i_pages lock            (page_remove_rmap->set_page_dirty)
  *      bdi.wb->list_lock         (page_remove_rmap->set_page_dirty)
  *      ->inode->i_lock           (page_remove_rmap->set_page_dirty)
- *      ->memcg->move_lock        (page_remove_rmap->lock_page_memcg)
+ *      ->memcg->move_lock        (page_remove_rmap->lock_page_objcg)
  *      bdi.wb->list_lock         (zap_pte_range->set_page_dirty)
  *      ->inode->i_lock           (zap_pte_range->set_page_dirty)
  *      ->private_lock            (zap_pte_range->__set_page_dirty_buffers)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a47c97a48951..088511eaa326 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2303,7 +2303,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
				atomic_inc(&page[i]._mapcount);
		}
 
-		lock_page_memcg(page);
+		lock_page_objcg(page);
		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
			/* Last compound_mapcount is gone. */
			__mod_lruvec_page_state(page, NR_ANON_THPS,
@@ -2314,7 +2314,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
				atomic_dec(&page[i]._mapcount);
			}
		}
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
	}
 
	smp_wmb(); /* make pte visible before pmd */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 71689243242f..442b846dc7bc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1439,7 +1439,7 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
  * These functions are safe to use under any of the following conditions:
  * - page locked
  * - PageLRU cleared
- * - lock_page_memcg()
+ * - lock_page_objcg()
  * - page->_refcount is zero
  */
 struct lruvec *lock_page_lruvec(struct page *page)
@@ -2255,20 +2255,22 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg)
 }
 
 /**
- * lock_page_memcg - lock a page and memcg binding
+ * lock_page_objcg - lock a page and objcg binding
  * @page: the page
  *
  * This function protects unlocked LRU pages from being moved to
- * another cgroup.
+ * another object cgroup. But the page can be reparented to its
+ * parent memcg.
  *
- * It ensures lifetime of the returned memcg. Caller is responsible
- * for the lifetime of the page; __unlock_page_memcg() is available
+ * It ensures lifetime of the returned objcg. Caller is responsible
+ * for the lifetime of the page; __unlock_page_objcg() is available
  * when @page might get freed inside the locked section.
  */
-struct mem_cgroup *lock_page_memcg(struct page *page)
+struct obj_cgroup *lock_page_objcg(struct page *page)
 {
	struct page *head = compound_head(page); /* rmap on tail pages */
	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
	unsigned long flags;
 
	/*
@@ -2287,10 +2289,12 @@ struct mem_cgroup *lock_page_memcg(struct page *page)
	if (mem_cgroup_disabled())
		return NULL;
 again:
-	memcg = page_memcg(head);
-	if (unlikely(!memcg))
+	objcg = page_objcg(head);
+	if (unlikely(!objcg))
		return NULL;
 
+	memcg = obj_cgroup_memcg(objcg);
+
 #ifdef CONFIG_PROVE_LOCKING
	local_irq_save(flags);
	might_lock(&memcg->move_lock);
@@ -2298,7 +2302,7 @@ struct mem_cgroup *lock_page_memcg(struct page *page)
 #endif
 
	if (atomic_read(&memcg->moving_account) <= 0)
-		return memcg;
+		return objcg;
 
	spin_lock_irqsave(&memcg->move_lock, flags);
	if (memcg != page_memcg(head)) {
@@ -2309,23 +2313,34 @@ struct mem_cgroup *lock_page_memcg(struct page *page)
	/*
	 * When charge migration first begins, we can have locked and
	 * unlocked page stat updates happening concurrently. Track
-	 * the task who has the lock for unlock_page_memcg().
+	 * the task who has the lock for unlock_page_objcg().
	 */
	memcg->move_lock_task = current;
	memcg->move_lock_flags = flags;
 
-	return memcg;
+	/*
+	 * The cgroup migration and memory cgroup offlining are serialized by
+	 * cgroup_mutex. If we reach here, it means that we are racing with a
+	 * cgroup migration (or are the one doing the migration) and the @page
+	 * cannot be reparented to its parent memory cgroup. So during the
+	 * whole process from lock_page_objcg(page) to unlock_page_objcg(page),
+	 * page_memcg(page) and obj_cgroup_memcg(objcg) are stable.
+	 */
+
+	return objcg;
 }
-EXPORT_SYMBOL(lock_page_memcg);
+EXPORT_SYMBOL(lock_page_objcg);
 
 /**
- * __unlock_page_memcg - unlock and unpin a memcg
- * @memcg: the memcg
+ * __unlock_page_objcg - unlock and unpin an objcg
+ * @objcg: the objcg
  *
- * Unlock and unpin a memcg returned by lock_page_memcg().
+ * Unlock and unpin an objcg returned by lock_page_objcg().
  */
-void __unlock_page_memcg(struct mem_cgroup *memcg)
+void __unlock_page_objcg(struct obj_cgroup *objcg)
 {
+	struct mem_cgroup *memcg = objcg ? obj_cgroup_memcg(objcg) : NULL;
+
	if (memcg && memcg->move_lock_task == current) {
		unsigned long flags = memcg->move_lock_flags;
 
@@ -2339,16 +2354,16 @@ void __unlock_page_memcg(struct mem_cgroup *memcg)
 }
 
 /**
- * unlock_page_memcg - unlock a page and memcg binding
+ * unlock_page_objcg - unlock a page and objcg binding
  * @page: the page
  */
-void unlock_page_memcg(struct page *page)
+void unlock_page_objcg(struct page *page)
 {
	struct page *head = compound_head(page);
 
-	__unlock_page_memcg(page_memcg(head));
+	__unlock_page_objcg(page_objcg(head));
 }
-EXPORT_SYMBOL(unlock_page_memcg);
+EXPORT_SYMBOL(unlock_page_objcg);
 
 struct memcg_stock_pcp {
	struct mem_cgroup *cached; /* this never be root cgroup */
@@ -3042,7 +3057,7 @@ static void commit_charge(struct page *page, struct obj_cgroup *objcg)
	 *
	 * - the page lock
	 * - LRU isolation
-	 * - lock_page_memcg()
+	 * - lock_page_objcg()
	 * - exclusive reference
	 */
	page->memcg_data = (unsigned long)objcg;
@@ -5785,7 +5800,7 @@ static int mem_cgroup_move_account(struct page *page,
	from_vec = mem_cgroup_lruvec(from, pgdat);
	to_vec = mem_cgroup_lruvec(to, pgdat);
 
-	lock_page_memcg(page);
+	lock_page_objcg(page);
 
	if (PageAnon(page)) {
		if (page_mapped(page)) {
@@ -5837,7 +5852,7 @@ static int mem_cgroup_move_account(struct page *page,
	 * with (un)charging, migration, LRU putback, or anything else
	 * that would rely on a stable page's memory cgroup.
	 *
-	 * Note that lock_page_memcg is a memcg lock, not a page lock,
+	 * Note that lock_page_objcg is a memcg lock, not a page lock,
	 * to save space. As soon as we switch page's memory cgroup to a
	 * new memcg that isn't locked, the above state can change
	 * concurrently again. Make sure we're truly done with it.
@@ -5849,7 +5864,7 @@ static int mem_cgroup_move_account(struct page *page,
 
	page->memcg_data = (unsigned long)to->objcg;
 
-	__unlock_page_memcg(from);
+	__unlock_page_objcg(from->objcg);
 
	ret = 0;
 
@@ -6291,7 +6306,7 @@ static void mem_cgroup_move_charge(void)
 {
	lru_add_drain_all();
	/*
-	 * Signal lock_page_memcg() to take the memcg's move_lock
+	 * Signal lock_page_objcg() to take the memcg's move_lock
	 * while we're moving its pages to another memcg. Then wait
	 * for already started RCU-only updates to finish.
	 */
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index f517e0669924..2a119afbf7fa 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2413,7 +2413,7 @@ int __set_page_dirty_no_writeback(struct page *page)
 /*
  * Helper function for set_page_dirty family.
  *
- * Caller must hold lock_page_memcg().
+ * Caller must hold lock_page_objcg().
  *
  * NOTE: This relies on being atomic wrt interrupts.
  */
@@ -2445,7 +2445,7 @@ void account_page_dirtied(struct page *page, struct address_space *mapping)
 /*
  * Helper function for deaccounting dirty page without writeback.
  *
- * Caller must hold lock_page_memcg().
+ * Caller must hold lock_page_objcg().
  */
 void account_page_cleaned(struct page *page, struct address_space *mapping,
			  struct bdi_writeback *wb)
@@ -2472,13 +2472,13 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,
  */
 int __set_page_dirty_nobuffers(struct page *page)
 {
-	lock_page_memcg(page);
+	lock_page_objcg(page);
	if (!TestSetPageDirty(page)) {
		struct address_space *mapping = page_mapping(page);
		unsigned long flags;
 
		if (!mapping) {
-			unlock_page_memcg(page);
+			unlock_page_objcg(page);
			return 1;
		}
 
@@ -2489,7 +2489,7 @@ int __set_page_dirty_nobuffers(struct page *page)
		__xa_set_mark(&mapping->i_pages, page_index(page),
				   PAGECACHE_TAG_DIRTY);
		xa_unlock_irqrestore(&mapping->i_pages, flags);
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
 
		if (mapping->host) {
			/* !PageAnon && !swapper_space */
@@ -2497,7 +2497,7 @@ int __set_page_dirty_nobuffers(struct page *page)
		}
		return 1;
	}
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
	return 0;
 }
 EXPORT_SYMBOL(__set_page_dirty_nobuffers);
@@ -2630,14 +2630,14 @@ void __cancel_dirty_page(struct page *page)
		struct bdi_writeback *wb;
		struct wb_lock_cookie cookie = {};
 
-		lock_page_memcg(page);
+		lock_page_objcg(page);
		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 
		if (TestClearPageDirty(page))
			account_page_cleaned(page, mapping, wb);
 
		unlocked_inode_to_wb_end(inode, &cookie);
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
	} else {
		ClearPageDirty(page);
	}
@@ -2722,11 +2722,11 @@ EXPORT_SYMBOL(clear_page_dirty_for_io);
 int test_clear_page_writeback(struct page *page)
 {
	struct address_space *mapping = page_mapping(page);
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
	struct lruvec *lruvec;
	int ret;
 
-	memcg = lock_page_memcg(page);
+	objcg = lock_page_objcg(page);
	lruvec = mem_cgroup_page_lruvec(page);
	if (mapping && mapping_use_writeback_tags(mapping)) {
		struct inode *inode = mapping->host;
@@ -2759,7 +2759,7 @@ int test_clear_page_writeback(struct page *page)
		dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
		inc_node_page_state(page, NR_WRITTEN);
	}
-	__unlock_page_memcg(memcg);
+	__unlock_page_objcg(objcg);
	return ret;
 }
 
@@ -2768,7 +2768,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
	struct address_space *mapping = page_mapping(page);
	int ret, access_ret;
 
-	lock_page_memcg(page);
+	lock_page_objcg(page);
	if (mapping && mapping_use_writeback_tags(mapping)) {
		XA_STATE(xas, &mapping->i_pages, page_index(page));
		struct inode *inode = mapping->host;
@@ -2808,7 +2808,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
		inc_lruvec_page_state(page, NR_WRITEBACK);
		inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
	}
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
	access_ret = arch_make_page_accessible(page);
	/*
	 * If writeback has been triggered on a page that cannot be made
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fc27e77d6d..3c2488e1081c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -31,7 +31,7 @@
  *             swap_lock (in swap_duplicate, swap_info_get)
  *               mmlist_lock (in mmput, drain_mmlist and others)
  *               mapping->private_lock (in __set_page_dirty_buffers)
- *                 lock_page_memcg move_lock (in __set_page_dirty_buffers)
+ *                 lock_page_objcg move_lock (in __set_page_dirty_buffers)
  *                   i_pages lock (widely used)
  *                     lruvec->lru_lock (in lock_page_lruvec_irq)
  *                 inode->i_lock (in set_page_dirty's __mark_inode_dirty)
@@ -1127,7 +1127,7 @@ void do_page_add_anon_rmap(struct page *page,
	bool first;
 
	if (unlikely(PageKsm(page)))
-		lock_page_memcg(page);
+		lock_page_objcg(page);
	else
		VM_BUG_ON_PAGE(!PageLocked(page), page);
 
@@ -1155,7 +1155,7 @@ void do_page_add_anon_rmap(struct page *page,
	}
 
	if (unlikely(PageKsm(page))) {
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
		return;
	}
 
@@ -1215,7 +1215,7 @@ void page_add_file_rmap(struct page *page, bool compound)
	int i, nr = 1;
 
	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
-	lock_page_memcg(page);
+	lock_page_objcg(page);
	if (compound && PageTransHuge(page)) {
		int nr_pages = thp_nr_pages(page);
 
@@ -1244,7 +1244,7 @@ void page_add_file_rmap(struct page *page, bool compound)
	}
	__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
 out:
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 }
 
 static void page_remove_file_rmap(struct page *page, bool compound)
@@ -1345,7 +1345,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
  */
 void page_remove_rmap(struct page *page, bool compound)
 {
-	lock_page_memcg(page);
+	lock_page_objcg(page);
 
	if (!PageAnon(page)) {
		page_remove_file_rmap(page, compound);
@@ -1384,7 +1384,7 @@ void page_remove_rmap(struct page *page, bool compound)
	 * faster for those pages still in swapcache.
	 */
 out:
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 }
 
 /*
-- 
2.11.0
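
A minimal sketch of how a call site changes once this rename lands
(illustrative only, not part of the patch: example_set_page_dirty() is a
hypothetical caller, while lock_page_objcg()/unlock_page_objcg() are the
helpers declared in include/linux/memcontrol.h above, mirroring the pattern
used in __set_page_dirty_buffers() and iomap_set_page_dirty()):

	/* Hypothetical caller, for illustration only. */
	static int example_set_page_dirty(struct page *page)
	{
		int newly_dirty;

		/* was: lock_page_memcg(page) -- pins the page and objcg binding */
		lock_page_objcg(page);
		newly_dirty = !TestSetPageDirty(page);
		/* was: unlock_page_memcg(page) */
		unlock_page_objcg(page);

		return newly_dirty;
	}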