From: Muchun Song <songmuchun@bytedance.com>
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
	akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	duanxiongchun@bytedance.com, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 4/5] mm: memcontrol: use obj_cgroup APIs to charge kmem pages
Date: Thu, 18 Mar 2021 19:06:57 +0800
Message-Id: <20210318110658.60892-5-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210318110658.60892-1-songmuchun@bytedance.com>
References: <20210318110658.60892-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Since Roman's series "The new cgroup slab memory controller" was applied,
all slab objects have been charged via the new obj_cgroup APIs. The new
APIs introduce a struct obj_cgroup to charge slab objects, which prevents
long-lived objects from pinning the original memory cgroup in memory.

But there are still some corner-case objects (e.g. allocations larger than
an order-1 page on SLUB) which are not charged via the new APIs. Those
objects (including pages allocated directly from the buddy allocator) are
charged as kmem pages, which still hold a reference to the memory cgroup.

We want to reuse the obj_cgroup APIs to charge the kmem pages. If we do
that, we should store an object cgroup pointer in page->memcg_data for the
kmem pages. Finally, page->memcg_data will have 3 different meanings:

1) For slab pages, page->memcg_data points to an object cgroups vector.

2) For kmem pages (excluding slab pages), page->memcg_data points to an
   object cgroup.

3) For user pages (e.g. LRU pages), page->memcg_data points to a memory
   cgroup.

We do not change the behavior of page_memcg() and page_memcg_rcu(); both
remain suitable for LRU pages and kmem pages. Why? Because memory
allocations that pin memcgs for a long time exist at a larger scale and
cause recurring problems in the real world: page cache doesn't get
reclaimed for a long time, or is used by the second, third, fourth, ...
instance of the same job that was restarted into a new cgroup every time.
Unreclaimable dying cgroups pile up, waste memory, and make page reclaim
very inefficient.

We can convert LRU pages and most other raw memcg pins to the objcg
direction to fix this problem, after which page->memcg_data will always
point to an object cgroup. At that point, LRU pages and kmem pages will be
treated the same, and the kmem page check can be removed from the
implementation of page_memcg().

This patch charges kmem pages via the new obj_cgroup APIs, so the
page->memcg_data of a kmem page now points to an object cgroup. We can use
__page_objcg() to get the object cgroup associated with a kmem page, or
page_memcg() to get the memory cgroup associated with it; in the latter
case the caller must ensure that the returned memcg won't be released
(e.g. by acquiring the rcu_read_lock or css_set_lock).
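For illustration only (not part of the kernel change), here is a minimal
userspace sketch of the three-way page->memcg_data encoding described
above. The flag values mirror enum page_memcg_data_flags; struct mock_page
and the pointer values are stand-ins invented for this sketch:

#include <stdio.h>

#define MEMCG_DATA_OBJCGS	(1UL << 0)	/* slab page: objcg vector */
#define MEMCG_DATA_KMEM		(1UL << 1)	/* kmem page: objcg pointer */
#define MEMCG_DATA_FLAGS_MASK	((1UL << 2) - 1)

struct mock_page {
	unsigned long memcg_data;
};

/* Decode which of the three meanings memcg_data currently carries. */
static const char *memcg_data_kind(const struct mock_page *page)
{
	if (page->memcg_data & MEMCG_DATA_OBJCGS)
		return "slab page -> object cgroups vector";
	if (page->memcg_data & MEMCG_DATA_KMEM)
		return "kmem page -> object cgroup";
	return "user (e.g. LRU) page -> memory cgroup";
}

int main(void)
{
	/* Fake pointer values; only the low flag bits matter here. */
	struct mock_page lru  = { .memcg_data = 0x1000 };
	struct mock_page kmem = { .memcg_data = 0x2000 | MEMCG_DATA_KMEM };

	printf("%s\n", memcg_data_kind(&lru));
	printf("%s (pointer %#lx)\n", memcg_data_kind(&kmem),
	       kmem.memcg_data & ~MEMCG_DATA_FLAGS_MASK);
	return 0;
}

Masking off the low MEMCG_DATA_FLAGS_MASK bits recovers the raw pointer,
which is exactly what __page_memcg() and __page_objcg() do in the diff
below.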
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h | 116 +++++++++++++++++++++++++++++++++++----------
 mm/memcontrol.c            | 101 ++++++++++++++++++++++++---------------
 2 files changed, 156 insertions(+), 61 deletions(-)
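Note below the "---" cut (so git am drops it): a hedged sketch of the
calling convention this patch documents for kmem pages. page_memcg() now
returns objcg->memcg, which can be reparented at any time, so a caller
pins the memcg under rcu_read_lock() (or holds a css reference) before
using it. kmem_page_memcg_id() is a hypothetical caller, not part of the
patch:

/* Hypothetical caller, for illustration only. */
static unsigned short kmem_page_memcg_id(struct page *page)
{
	struct mem_cgroup *memcg;
	unsigned short id = 0;

	rcu_read_lock();
	memcg = page_memcg(page);	/* objcg->memcg for a kmem page */
	if (memcg)
		id = mem_cgroup_id(memcg);	/* memcg can't be released here */
	rcu_read_unlock();

	return id;
}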
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e6dc793d587d..395a113e4a3b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -358,6 +358,62 @@ enum page_memcg_data_flags {
 
 #define MEMCG_DATA_FLAGS_MASK	(__NR_MEMCG_DATA_FLAGS - 1)
 
+static inline bool PageMemcgKmem(struct page *page);
+
+/*
+ * After the initialization objcg->memcg is always pointing at
+ * a valid memcg, but can be atomically swapped to the parent memcg.
+ *
+ * The caller must ensure that the returned memcg won't be released:
+ * e.g. acquire the rcu_read_lock or css_set_lock.
+ */
+static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
+{
+	return READ_ONCE(objcg->memcg);
+}
+
+/*
+ * __page_memcg - get the memory cgroup associated with a non-kmem page
+ * @page: a pointer to the page struct
+ *
+ * Returns a pointer to the memory cgroup associated with the page,
+ * or NULL. This function assumes that the page is known to have a
+ * proper memory cgroup pointer. It's not safe to call this function
+ * against some type of pages, e.g. slab pages or ex-slab pages or
+ * kmem pages.
+ */
+static inline struct mem_cgroup *__page_memcg(struct page *page)
+{
+	unsigned long memcg_data = page->memcg_data;
+
+	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
+
+	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+}
+
+/*
+ * __page_objcg - get the object cgroup associated with a kmem page
+ * @page: a pointer to the page struct
+ *
+ * Returns a pointer to the object cgroup associated with the page,
+ * or NULL. This function assumes that the page is known to have a
+ * proper object cgroup pointer. It's not safe to call this function
+ * against some type of pages, e.g. slab pages or ex-slab pages or
+ * LRU pages.
+ */
+static inline struct obj_cgroup *__page_objcg(struct page *page)
+{
+	unsigned long memcg_data = page->memcg_data;
+
+	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
+	VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page);
+
+	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+}
+
 /*
  * page_memcg - get the memory cgroup associated with a page
  * @page: a pointer to the page struct
@@ -367,20 +423,23 @@ enum page_memcg_data_flags {
  * proper memory cgroup pointer. It's not safe to call this function
  * against some type of pages, e.g. slab pages or ex-slab pages.
  *
- * Any of the following ensures page and memcg binding stability:
+ * For a non-kmem page any of the following ensures page and memcg binding
+ * stability:
+ *
  * - the page lock
  * - LRU isolation
  * - lock_page_memcg()
  * - exclusive reference
+ *
+ * For a kmem page a caller should hold an rcu read lock to protect memcg
+ * associated with a kmem page from being released.
  */
 static inline struct mem_cgroup *page_memcg(struct page *page)
 {
-	unsigned long memcg_data = page->memcg_data;
-
-	VM_BUG_ON_PAGE(PageSlab(page), page);
-	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
-
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+	if (PageMemcgKmem(page))
+		return obj_cgroup_memcg(__page_objcg(page));
+	else
+		return __page_memcg(page);
 }
 
 /*
@@ -394,11 +453,19 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
  */
 static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
 {
+	unsigned long memcg_data = READ_ONCE(page->memcg_data);
+
 	VM_BUG_ON_PAGE(PageSlab(page), page);
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
-	return (struct mem_cgroup *)(READ_ONCE(page->memcg_data) &
-				     ~MEMCG_DATA_FLAGS_MASK);
+	if (memcg_data & MEMCG_DATA_KMEM) {
+		struct obj_cgroup *objcg;
+
+		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+		return obj_cgroup_memcg(objcg);
+	}
+
+	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
 /*
@@ -406,15 +473,21 @@ static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
  * @page: a pointer to the page struct
  *
  * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function unlike page_memcg() can take any  page
+ * or NULL. This function unlike page_memcg() can take any page
 * as an argument. It has to be used in cases when it's not known if a page
- * has an associated memory cgroup pointer or an object cgroups vector.
+ * has an associated memory cgroup pointer or an object cgroups vector or
+ * an object cgroup.
+ *
+ * For a non-kmem page any of the following ensures page and memcg binding
+ * stability:
  *
- * Any of the following ensures page and memcg binding stability:
  * - the page lock
  * - LRU isolation
  * - lock_page_memcg()
  * - exclusive reference
+ *
+ * For a kmem page a caller should hold an rcu read lock to protect memcg
+ * associated with a kmem page from being released.
  */
 static inline struct mem_cgroup *page_memcg_check(struct page *page)
 {
@@ -427,6 +500,13 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	if (memcg_data & MEMCG_DATA_OBJCGS)
 		return NULL;
 
+	if (memcg_data & MEMCG_DATA_KMEM) {
+		struct obj_cgroup *objcg;
+
+		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+		return obj_cgroup_memcg(objcg);
+	}
+
 	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
 
@@ -713,18 +793,6 @@ static inline void obj_cgroup_put(struct obj_cgroup *objcg)
 	percpu_ref_put(&objcg->refcnt);
 }
 
-/*
- * After the initialization objcg->memcg is always pointing at
- * a valid memcg, but can be atomically swapped to the parent memcg.
- *
- * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
- */
-static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
-{
-	return READ_ONCE(objcg->memcg);
-}
-
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 	if (memcg)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 104bddf21314..1cef20a2f116 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -855,18 +855,22 @@ void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
 			     int val)
 {
 	struct page *head = compound_head(page); /* rmap on tail pages */
-	struct mem_cgroup *memcg = page_memcg(head);
+	struct mem_cgroup *memcg;
 	pg_data_t *pgdat = page_pgdat(page);
 	struct lruvec *lruvec;
 
+	rcu_read_lock();
+	memcg = page_memcg(head);
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg) {
+		rcu_read_unlock();
 		__mod_node_page_state(pgdat, idx, val);
 		return;
 	}
 
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	__mod_lruvec_state(lruvec, idx, val);
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(__mod_lruvec_page_state);
 
@@ -2905,6 +2909,20 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 	page->memcg_data = (unsigned long)memcg;
 }
 
+static inline struct mem_cgroup *get_obj_cgroup_memcg(struct obj_cgroup *objcg)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+retry:
+	memcg = obj_cgroup_memcg(objcg);
+	if (unlikely(!css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	return memcg;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
 				 gfp_t gfp, bool new_page)
@@ -3070,15 +3088,8 @@ static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
 	struct mem_cgroup *memcg;
 	int ret;
 
-	rcu_read_lock();
-retry:
-	memcg = obj_cgroup_memcg(objcg);
-	if (unlikely(!css_tryget(&memcg->css)))
-		goto retry;
-	rcu_read_unlock();
-
+	memcg = get_obj_cgroup_memcg(objcg);
 	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
-
 	css_put(&memcg->css);
 
 	return ret;
@@ -3143,18 +3154,18 @@ static void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_page
  */
 int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 {
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	int ret = 0;
 
-	memcg = get_mem_cgroup_from_current();
-	if (memcg && !mem_cgroup_is_root(memcg)) {
-		ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
+	objcg = get_obj_cgroup_from_current();
+	if (objcg) {
+		ret = obj_cgroup_charge_pages(objcg, gfp, 1 << order);
 		if (!ret) {
-			page->memcg_data = (unsigned long)memcg |
+			page->memcg_data = (unsigned long)objcg |
 				MEMCG_DATA_KMEM;
 			return 0;
 		}
-		css_put(&memcg->css);
+		obj_cgroup_put(objcg);
 	}
 	return ret;
 }
@@ -3166,16 +3177,16 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
  */
 void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct obj_cgroup *objcg;
 	unsigned int nr_pages = 1 << order;
 
-	if (!memcg)
+	if (!PageMemcgKmem(page))
 		return;
 
-	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
-	__memcg_kmem_uncharge(memcg, nr_pages);
+	objcg = __page_objcg(page);
+	obj_cgroup_uncharge_pages(objcg, nr_pages);
 	page->memcg_data = 0;
-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
@@ -6790,7 +6801,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 
 struct uncharge_gather {
 	struct mem_cgroup *memcg;
-	unsigned long nr_pages;
+	unsigned long nr_memory;
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
 	struct page *dummy_page;
@@ -6805,10 +6816,10 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 {
 	unsigned long flags;
 
-	if (!mem_cgroup_is_root(ug->memcg)) {
-		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
+	if (ug->nr_memory) {
+		page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
 		if (do_memsw_account())
-			page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages);
+			page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
 		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
 			page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
 		memcg_oom_recover(ug->memcg);
@@ -6816,7 +6827,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 
 	local_irq_save(flags);
 	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
-	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
+	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
 	memcg_check_events(ug->memcg, ug->dummy_page);
 	local_irq_restore(flags);
 
@@ -6827,40 +6838,56 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
 	unsigned long nr_pages;
+	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
-	if (!page_memcg(page))
-		return;
-
 	/*
 	 * Nobody should be changing or seriously looking at
-	 * page_memcg(page) at this point, we have fully
+	 * page memcg or objcg at this point, we have fully
 	 * exclusive access to the page.
 	 */
+	if (PageMemcgKmem(page)) {
+		objcg = __page_objcg(page);
+		memcg = get_obj_cgroup_memcg(objcg);
+	} else {
+		memcg = __page_memcg(page);
+	}
+
+	if (!memcg)
+		return;
 
-	if (ug->memcg != page_memcg(page)) {
+	if (ug->memcg != memcg) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
			uncharge_gather_clear(ug);
 		}
-		ug->memcg = page_memcg(page);
+		ug->memcg = memcg;
 		ug->dummy_page = page;
 
 		/* pairs with css_put in uncharge_batch */
-		css_get(&ug->memcg->css);
+		css_get(&memcg->css);
 	}
 
 	nr_pages = compound_nr(page);
-	ug->nr_pages += nr_pages;
 
-	if (PageMemcgKmem(page))
+	if (PageMemcgKmem(page)) {
+		ug->nr_memory += nr_pages;
 		ug->nr_kmem += nr_pages;
-	else
+
+		page->memcg_data = 0;
+		obj_cgroup_put(objcg);
+	} else {
+		/* LRU pages aren't accounted at the root level */
+		if (!mem_cgroup_is_root(memcg))
+			ug->nr_memory += nr_pages;
 		ug->pgpgout++;
 
-	page->memcg_data = 0;
-	css_put(&ug->memcg->css);
+		page->memcg_data = 0;
+	}
+
+	css_put(&memcg->css);
 }
 
 /**
-- 
2.11.0