From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Muchun Song <muchun.song@linux.dev>,
Vlastimil Babka <vbabka@suse.cz>,
Jakub Kicinski <kuba@kernel.org>,
Eric Dumazet <edumazet@google.com>,
Soheil Hassas Yeganeh <soheil@google.com>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
Meta kernel team <kernel-team@meta.com>
Subject: Re: [PATCH] memcg: multi-memcg percpu charge cache
Date: Wed, 30 Apr 2025 08:32:42 -0700 [thread overview]
Message-ID: <dieeei3squ2gcnqxdjayvxbvzldr266rhnvtl3vjzsqevxkevf@ckui5vjzl2qg> (raw)
In-Reply-To: <20250416180229.2902751-1-shakeel.butt@linux.dev>
Andrew, please find another fix/improvement for this patch below.
From: Shakeel Butt <shakeel.butt@linux.dev>
Date: Wed, 30 Apr 2025 08:28:23 -0700
Subject: [PATCH] memcg: multi-memcg percpu charge cache - fix 4
Add a comment suggested by Michal and, as suggested by Vlastimil, use
DEFINE_PER_CPU_ALIGNED instead of DEFINE_PER_CPU.
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/memcontrol.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5a07e0375254..b877287aeb11 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1775,6 +1775,10 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg)
pr_cont(" are going to be killed due to memory.oom.group set\n");
}
+/*
+ * The value of NR_MEMCG_STOCK is selected to keep the cached memcgs and their
+ * nr_pages in a single cacheline. This may change in future.
+ */
#define NR_MEMCG_STOCK 7
struct memcg_stock_pcp {
local_trylock_t stock_lock;
@@ -1791,7 +1795,7 @@ struct memcg_stock_pcp {
unsigned long flags;
#define FLUSHING_CACHED_CHARGE 0
};
-static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock) = {
+static DEFINE_PER_CPU_ALIGNED(struct memcg_stock_pcp, memcg_stock) = {
.stock_lock = INIT_LOCAL_TRYLOCK(stock_lock),
};
static DEFINE_MUTEX(percpu_charge_mutex);
--
2.47.1
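For readers skimming the archive, here is a minimal sketch of how
NR_MEMCG_STOCK == 7 can keep the cached memcgs and their nr_pages within
one cacheline, assuming 64-bit pointers, one-byte page counts and a
64-byte cacheline; the struct and member names below are illustrative
assumptions, not copied from mm/memcontrol.c.

/*
 * Illustrative sizing sketch only -- not part of the patch above.
 * Assumes 64-bit pointers and a 64-byte cacheline.
 */
#define NR_MEMCG_STOCK 7

struct stock_cache_example {
	struct mem_cgroup *cached[NR_MEMCG_STOCK];	/* 7 * 8 = 56 bytes */
	uint8_t nr_pages[NR_MEMCG_STOCK];		/* 7 * 1 =  7 bytes */
};							/* 63 bytes <= 64  */

/*
 * DEFINE_PER_CPU_ALIGNED places the variable in the cacheline-aligned
 * per-CPU section and starts it on a cacheline boundary, so the arrays
 * above do not end up straddling two cachelines, which plain
 * DEFINE_PER_CPU does not guarantee.
 */
static DEFINE_PER_CPU_ALIGNED(struct stock_cache_example, stock_cache_example);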