* [PATCH] mm: memcg: add cacheline padding after lruvec in mem_cgroup_per_node
From: Roman Gushchin @ 2024-07-23 17:12 UTC (permalink / raw)
  To: Andrew Morton, Shakeel Butt
  Cc: linux-mm, linux-kernel, Johannes Weiner, Michal Hocko,
	Muchun Song, Roman Gushchin, kernel test robot

Oliver Sand reported a performance regression caused by
commit 98c9daf5ae6b ("mm: memcg: guard memcg1-specific members of struct
mem_cgroup_per_node"), which puts some fields of the
mem_cgroup_per_node structure under the CONFIG_MEMCG_V1 config option.
Apparently it causes false cache line sharing between the lruvec and
lru_zone_size members of the structure. Fix it by adding explicit
padding after the lruvec member.
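
For illustration, here is a minimal user-space sketch of the effect
(hypothetical code, not the kernel structure; the field names, the
64-byte line size and the iteration count are made up for the example).
With the pad commented out, both counters live in the same cache line
and the updates from the two threads keep bouncing it between CPUs:

	/* build with: cc -O2 -pthread false-sharing.c */
	#include <pthread.h>
	#include <stdio.h>

	struct counters {
		unsigned long a;	/* hammered by thread 1 */
		/* char pad[64 - sizeof(unsigned long)];  -- the "fix" */
		unsigned long b;	/* hammered by thread 2 */
	} c;

	static void *bump(void *p)
	{
		unsigned long *ctr = p;

		/* relaxed atomic increments: no ordering, just line ownership traffic */
		for (long i = 0; i < 100000000; i++)
			__atomic_fetch_add(ctr, 1, __ATOMIC_RELAXED);
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, bump, &c.a);
		pthread_create(&t2, NULL, bump, &c.b);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		printf("a=%lu b=%lu\n", c.a, c.b);
		return 0;
	}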

Even though the padding is not required with CONFIG_MEMCG_V1 set,
it seems like the introduced memory overhead is not significant
enough to warrant another divergence in the mem_cgroup_per_node
layout, so the padding is added unconditionally.
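
For reference, CACHELINE_PADDING() is (approximately, from
include/linux/cache.h) a zero-size member aligned to the internode
cache line size, so the only cost is the alignment hole that makes the
following member start on a fresh cache line, and it compiles away
entirely on !CONFIG_SMP:

	#if defined(CONFIG_SMP)
	struct cacheline_padding {
		char x[0];
	} ____cacheline_internodealigned_in_smp;
	#define CACHELINE_PADDING(name)	struct cacheline_padding name
	#else
	#define CACHELINE_PADDING(name)
	#endif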

Fixes: 98c9daf5ae6b ("mm: memcg: guard memcg1-specific members of struct mem_cgroup_per_node")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202407121335.31a10cb6-oliver.sang@intel.com
Tested-by: Oliver Sang <oliver.sang@intel.com>
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/memcontrol.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 7e2eb091049a..0e5bf25d324f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -109,6 +109,7 @@ struct mem_cgroup_per_node {
 
 	/* Fields which get updated often at the end. */
 	struct lruvec		lruvec;
+	CACHELINE_PADDING(_pad2_);
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
 	struct mem_cgroup_reclaim_iter	iter;
 };
-- 
2.45.2.1089.g2a221341d9-goog




* Re: [PATCH] mm: memcg: add cacheline padding after lruvec in mem_cgroup_per_node
From: Shakeel Butt @ 2024-07-23 17:30 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, linux-mm, linux-kernel, Johannes Weiner,
	Michal Hocko, Muchun Song, kernel test robot

On Tue, Jul 23, 2024 at 05:12:44PM GMT, Roman Gushchin wrote:
> Oliver Sand 

Oliver Sang

> reported a performance regression caused by
> commit 98c9daf5ae6b ("mm: memcg: guard memcg1-specific members of struct
> mem_cgroup_per_node"), which puts some fields of the
> mem_cgroup_per_node structure under the CONFIG_MEMCG_V1 config option.
> Apparently it causes false cache line sharing between the lruvec and
> lru_zone_size members of the structure. Fix it by adding explicit
> padding after the lruvec member.
> 
> Even though the padding is not required with CONFIG_MEMCG_V1 set,
> it seems like the introduced memory overhead is not significant
> enough to warrant another divergence in the mem_cgroup_per_node
> layout, so the padding is added unconditionally.
> 
> Fixes: 98c9daf5ae6b ("mm: memcg: guard memcg1-specific members of struct mem_cgroup_per_node")
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202407121335.31a10cb6-oliver.sang@intel.com
> Tested-by: Oliver Sang <oliver.sang@intel.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>


