From: Joshua Hahn <joshua.hahnjy@gmail.com>
To: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@suse.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeel.butt@linux.dev>,
Muchun Song <muchun.song@linux.dev>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: [RFC PATCH 4/6] mm/memcontrol: Charge and uncharge from toptier
Date: Mon, 23 Feb 2026 14:38:27 -0800
Message-ID: <20260223223830.586018-5-joshua.hahnjy@gmail.com>
In-Reply-To: <20260223223830.586018-1-joshua.hahnjy@gmail.com>
Modify memcg charging and uncharging sites to also update toptier
statistics.
Unfortunately, try_charge_memcg is unaware of the physical folio being
charged; it only deals with nr_pages. Rather than modifying
try_charge_memcg, adjust the toptier fields once try_charge_memcg
succeeds, inside charge_memcg.
Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
---
mm/memcontrol.c | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f3e4a6ce7181..07464f02c529 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1948,6 +1948,24 @@ static void memcg_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
page_counter_uncharge(&memcg->memsw, nr_pages);
}
+static void memcg_charge_toptier(struct mem_cgroup *memcg,
+ unsigned long nr_pages)
+{
+ struct page_counter *c;
+
+ for (c = &memcg->memory; c; c = c->parent)
+ atomic_long_add(nr_pages, &c->toptier_usage);
+}
+
+static void memcg_uncharge_toptier(struct mem_cgroup *memcg,
+ unsigned long nr_pages)
+{
+ struct page_counter *c;
+
+ for (c = &memcg->memory; c; c = c->parent)
+ atomic_long_sub(nr_pages, &c->toptier_usage);
+}
+
/*
* Returns stocks cached in percpu and reset cached information.
*/
@@ -4830,6 +4848,9 @@ static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
if (ret)
goto out;
+ if (node_is_toptier(folio_nid(folio)))
+ memcg_charge_toptier(memcg, folio_nr_pages(folio));
+
css_get(&memcg->css);
commit_charge(folio, memcg);
memcg1_commit_charge(folio, memcg);
@@ -4921,6 +4942,7 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
struct uncharge_gather {
struct mem_cgroup *memcg;
unsigned long nr_memory;
+ unsigned long nr_toptier;
unsigned long pgpgout;
unsigned long nr_kmem;
int nid;
@@ -4941,6 +4963,8 @@ static void uncharge_batch(const struct uncharge_gather *ug)
}
memcg1_oom_recover(ug->memcg);
}
+ if (ug->nr_toptier)
+ memcg_uncharge_toptier(ug->memcg, ug->nr_toptier);
memcg1_uncharge_batch(ug->memcg, ug->pgpgout, ug->nr_memory, ug->nid);
@@ -4989,6 +5013,9 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
nr_pages = folio_nr_pages(folio);
+ if (node_is_toptier(folio_nid(folio)))
+ ug->nr_toptier += nr_pages;
+
if (folio_memcg_kmem(folio)) {
ug->nr_memory += nr_pages;
ug->nr_kmem += nr_pages;
@@ -5072,6 +5099,10 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
page_counter_charge(&memcg->memsw, nr_pages);
}
+ /* The old folio's toptier_usage will be decremented when it is freed */
+ if (node_is_toptier(folio_nid(new)))
+ memcg_charge_toptier(memcg, nr_pages);
+
css_get(&memcg->css);
commit_charge(new, memcg);
memcg1_commit_charge(new, memcg);
@@ -5091,6 +5122,7 @@ void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
void mem_cgroup_migrate(struct folio *old, struct folio *new)
{
struct mem_cgroup *memcg;
+ int old_toptier, new_toptier;
VM_BUG_ON_FOLIO(!folio_test_locked(old), old);
VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
@@ -5111,6 +5143,13 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
if (!memcg)
return;
+ old_toptier = node_is_toptier(folio_nid(old));
+ new_toptier = node_is_toptier(folio_nid(new));
+ if (old_toptier && !new_toptier)
+ memcg_uncharge_toptier(memcg, folio_nr_pages(old));
+ else if (!old_toptier && new_toptier)
+ memcg_charge_toptier(memcg, folio_nr_pages(old));
+
/* Transfer the charge and the css ref */
commit_charge(new, memcg);
--
2.47.3
Thread overview: 7+ messages
2026-02-23 22:38 [RFC PATCH 0/6] mm/memcontrol: Make memcg limits tier-aware Joshua Hahn
2026-02-23 22:38 ` [RFC PATCH 1/6] mm/memory-tiers: Introduce tier-aware memcg limit sysfs Joshua Hahn
2026-02-23 22:38 ` [RFC PATCH 2/6] mm/page_counter: Introduce tiered memory awareness to page_counter Joshua Hahn
2026-02-23 22:38 ` [RFC PATCH 3/6] mm/memory-tiers, memcontrol: Introduce toptier capacity updates Joshua Hahn
2026-02-23 22:38 ` Joshua Hahn [this message]
2026-02-23 22:38 ` [RFC PATCH 5/6] mm/memcontrol, page_counter: Make memory.low tier-aware Joshua Hahn
2026-02-23 22:38 ` [RFC PATCH 6/6] mm/memcontrol: Make memory.high tier-aware Joshua Hahn