From: Muchun Song
To: guro@fb.com, hannes@cmpxchg.org,
mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, Muchun Song
Subject: [PATCH v2 3/5] mm: memcontrol: charge kmem pages by using obj_cgroup APIs
Date: Wed, 3 Mar 2021 13:59:15 +0800
Message-Id: <20210303055917.66054-4-songmuchun@bytedance.com>
In-Reply-To: <20210303055917.66054-1-songmuchun@bytedance.com>
References: <20210303055917.66054-1-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
MIME-Version: 1.0

Since Roman's series "The new cgroup slab memory controller" was applied, all slab objects are charged via the new obj_cgroup APIs. The new APIs introduce a struct obj_cgroup to charge slab objects; this prevents long-living objects from pinning the original memory cgroup in memory. But there are still some corner objects (e.g. allocations larger than an order-1 page on SLUB) which are not charged via the new APIs. Those objects (including pages allocated directly from the buddy allocator) are charged as kmem pages, which still hold a reference to the memory cgroup.

This patch charges kmem pages by using the new obj_cgroup APIs. Afterwards, page->memcg_data of a kmem page points to an object cgroup. We can use page_objcg() to get the object cgroup associated with a kmem page.
Or we can use page_memcg_check() to get the memory cgroup associated with a kmem page, but the caller must ensure that the returned memcg won't be released (e.g. acquire the rcu_read_lock or css_set_lock).

Signed-off-by: Muchun Song
---
 include/linux/memcontrol.h |  63 +++++++++++++++++------
 mm/memcontrol.c            | 123 +++++++++++++++++++++++++++++++---------------
 2 files changed, 133 insertions(+), 53 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 049b80246cbf..5911b9d107b0 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -370,6 +370,18 @@ static inline bool page_memcg_charged(struct page *page)
 }
 
 /*
+ * After the initialization objcg->memcg is always pointing at
+ * a valid memcg, but can be atomically swapped to the parent memcg.
+ *
+ * The caller must ensure that the returned memcg won't be released:
+ * e.g. acquire the rcu_read_lock or css_set_lock.
+ */
+static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
+{
+	return READ_ONCE(objcg->memcg);
+}
+
+/*
  * page_memcg - get the memory cgroup associated with a non-kmem page
  * @page: a pointer to the page struct
  *
@@ -421,9 +433,10 @@ static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
  * @page: a pointer to the page struct
  *
  * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function unlike page_memcg() can take any page
+ * or NULL. This function unlike page_memcg() can take any non-kmem page
  * as an argument. It has to be used in cases when it's not known if a page
- * has an associated memory cgroup pointer or an object cgroups vector.
+ * has an associated memory cgroup pointer or an object cgroups vector or
+ * an object cgroup.
  *
  * Any of the following ensures page and memcg binding stability:
  * - the page lock
@@ -442,6 +455,17 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	if (memcg_data & MEMCG_DATA_OBJCGS)
 		return NULL;
 
+	if (memcg_data & MEMCG_DATA_KMEM) {
+		struct obj_cgroup *objcg;
+
+		/*
+		 * The caller must ensure that the returned memcg won't be
+		 * released: e.g. acquire the rcu_read_lock or css_set_lock.
+		 */
+		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+		return obj_cgroup_memcg(objcg);
+	}
+
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
@@ -500,6 +524,24 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page)
 	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
+/*
+ * page_objcg - get the object cgroup associated with a kmem page
+ * @page: a pointer to the page struct
+ *
+ * Returns a pointer to the object cgroup associated with the kmem page,
+ * or NULL. This function assumes that the page is known to have an
+ * associated object cgroup. It's only safe to call this function
+ * against kmem pages (PageMemcgKmem() returns true).
+ */
+static inline struct obj_cgroup *page_objcg(struct page *page)
+{
+	unsigned long memcg_data = page->memcg_data;
+
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
+	VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page);
+
+	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+}
 #else
 static inline struct obj_cgroup **page_objcgs(struct page *page)
 {
@@ -510,6 +552,11 @@ static inline struct obj_cgroup **page_objcgs_check(struct page *page)
 {
 	return NULL;
 }
+
+static inline struct obj_cgroup *page_objcg(struct page *page)
+{
+	return NULL;
+}
 #endif
 
 static __always_inline bool memcg_stat_item_in_bytes(int idx)
@@ -728,18 +775,6 @@ static inline void obj_cgroup_put(struct obj_cgroup *objcg)
 	percpu_ref_put(&objcg->refcnt);
 }
 
-/*
- * After the initialization objcg->memcg is always pointing at
- * a valid memcg, but can be atomically swapped to the parent memcg.
- *
- * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
- */
-static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
-{
-	return READ_ONCE(objcg->memcg);
-}
-
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 	if (memcg)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 86a8db937ec6..0cf342d22547 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -856,10 +856,16 @@ void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
 {
 	struct page *head = compound_head(page); /* rmap on tail pages */
 	struct mem_cgroup *memcg;
-	pg_data_t *pgdat = page_pgdat(page);
+	pg_data_t *pgdat;
 	struct lruvec *lruvec;
 
-	memcg = page_memcg_check(head);
+	if (PageMemcgKmem(head)) {
+		__mod_lruvec_kmem_state(page_to_virt(head), idx, val);
+		return;
+	}
+
+	pgdat = page_pgdat(head);
+	memcg = page_memcg(head);
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg) {
 		__mod_node_page_state(pgdat, idx, val);
@@ -3144,18 +3150,18 @@ static void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_page
  */
 int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 {
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	int ret = 0;
 
-	memcg = get_mem_cgroup_from_current();
-	if (memcg && !mem_cgroup_is_root(memcg)) {
-		ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
+	objcg = get_obj_cgroup_from_current();
+	if (objcg) {
+		ret = obj_cgroup_charge_page(objcg, gfp, 1 << order);
 		if (!ret) {
-			page->memcg_data = (unsigned long)memcg |
+			page->memcg_data = (unsigned long)objcg |
 				MEMCG_DATA_KMEM;
 			return 0;
 		}
-		css_put(&memcg->css);
+		obj_cgroup_put(objcg);
 	}
 	return ret;
 }
@@ -3167,17 +3173,18 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
  */
 void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	unsigned int nr_pages = 1 << order;
 
 	if (!page_memcg_charged(page))
 		return;
 
-	memcg = page_memcg_check(page);
-	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
-	__memcg_kmem_uncharge(memcg, nr_pages);
+	VM_BUG_ON_PAGE(!PageMemcgKmem(page), page);
+
+	objcg = page_objcg(page);
+	obj_cgroup_uncharge_page(objcg, nr_pages);
 	page->memcg_data = 0;
-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
@@ -6794,8 +6801,12 @@ struct uncharge_gather {
 	struct mem_cgroup *memcg;
 	unsigned long nr_pages;
 	unsigned long pgpgout;
-	unsigned long nr_kmem;
 	struct page *dummy_page;
+
+#ifdef CONFIG_MEMCG_KMEM
+	struct obj_cgroup *objcg;
+	unsigned long nr_kmem;
+#endif
 };
 
 static inline void uncharge_gather_clear(struct uncharge_gather *ug)
@@ -6807,12 +6818,21 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 {
 	unsigned long flags;
 
+#ifdef CONFIG_MEMCG_KMEM
+	if (ug->objcg) {
+		obj_cgroup_uncharge_page(ug->objcg, ug->nr_kmem);
+		/* drop reference from uncharge_kmem_page */
+		obj_cgroup_put(ug->objcg);
+	}
+#endif
+
+	if (!ug->memcg)
+		return;
+
 	if (!mem_cgroup_is_root(ug->memcg)) {
 		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
 		if (do_memsw_account())
 			page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages);
-		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
-			page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
 		memcg_oom_recover(ug->memcg);
 	}
 
@@ -6822,26 +6842,40 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 	memcg_check_events(ug->memcg, ug->dummy_page);
 	local_irq_restore(flags);
 
-	/* drop reference from uncharge_page */
+	/* drop reference from uncharge_user_page */
 	css_put(&ug->memcg->css);
 }
 
-static void uncharge_page(struct page *page, struct uncharge_gather *ug)
+#ifdef CONFIG_MEMCG_KMEM
+static void uncharge_kmem_page(struct page *page, struct uncharge_gather *ug)
 {
-	unsigned long nr_pages;
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg = page_objcg(page);
 
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	if (ug->objcg != objcg) {
+		if (ug->objcg) {
+			uncharge_batch(ug);
+			uncharge_gather_clear(ug);
+		}
+		ug->objcg = objcg;
 
-	if (!page_memcg_charged(page))
-		return;
+		/* pairs with obj_cgroup_put in uncharge_batch */
+		obj_cgroup_get(ug->objcg);
+	}
+
+	ug->nr_kmem += compound_nr(page);
+	page->memcg_data = 0;
+	obj_cgroup_put(ug->objcg);
+}
+#else
+static void uncharge_kmem_page(struct page *page, struct uncharge_gather *ug)
+{
+}
+#endif
+
+static void uncharge_user_page(struct page *page, struct uncharge_gather *ug)
+{
+	struct mem_cgroup *memcg = page_memcg(page);
 
-	/*
-	 * Nobody should be changing or seriously looking at
-	 * page memcg at this point, we have fully exclusive
-	 * access to the page.
-	 */
-	memcg = page_memcg_check(page);
 	if (ug->memcg != memcg) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
@@ -6852,18 +6886,30 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		/* pairs with css_put in uncharge_batch */
 		css_get(&ug->memcg->css);
 	}
+	ug->pgpgout++;
+	ug->dummy_page = page;
+
+	ug->nr_pages += compound_nr(page);
+	page->memcg_data = 0;
+	css_put(&ug->memcg->css);
+}
 
-	nr_pages = compound_nr(page);
-	ug->nr_pages += nr_pages;
+static void uncharge_page(struct page *page, struct uncharge_gather *ug)
+{
+	VM_BUG_ON_PAGE(PageLRU(page), page);
 
+	if (!page_memcg_charged(page))
+		return;
+
+	/*
+	 * Nobody should be changing or seriously looking at
+	 * page memcg at this point, we have fully exclusive
+	 * access to the page.
+	 */
 	if (PageMemcgKmem(page))
-		ug->nr_kmem += nr_pages;
+		uncharge_kmem_page(page, ug);
 	else
-		ug->pgpgout++;
-
-	ug->dummy_page = page;
-	page->memcg_data = 0;
-	css_put(&ug->memcg->css);
+		uncharge_user_page(page, ug);
 }
 
 /**
@@ -6906,8 +6952,7 @@ void mem_cgroup_uncharge_list(struct list_head *page_list)
 	uncharge_gather_clear(&ug);
 	list_for_each_entry(page, page_list, lru)
 		uncharge_page(page, &ug);
-	if (ug.memcg)
-		uncharge_batch(&ug);
+	uncharge_batch(&ug);
 }
 
 /**
-- 
2.11.0