From: Shakeel Butt <shakeel.butt@linux.dev>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org,  linux-kernel@vger.kernel.org,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Michal Hocko <mhocko@kernel.org>,
	Muchun Song <muchun.song@linux.dev>
Subject: Re: [PATCH v1 6/9] mm: memcg: put memcg1-specific struct mem_cgroup's members under CONFIG_MEMCG_V1
Date: Fri, 28 Jun 2024 17:48:54 -0700
Message-ID: <ug2qpeiq6jrtr4qtnblquiod7rgqdqsy6nfu5idnpxqwrzdq6o@mmbsul2g6t52>
In-Reply-To: <20240628210317.272856-7-roman.gushchin@linux.dev>

On Fri, Jun 28, 2024 at 09:03:14PM GMT, Roman Gushchin wrote:
> Put memcg1-specific members of struct mem_cgroup under the
> CONFIG_MEMCG_V1 config option. Also group them close to the end
> of struct mem_cgroup just before the dynamic per-node part.
> 
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> ---
>  include/linux/memcontrol.h | 103 +++++++++++++++++++------------------
>  1 file changed, 53 insertions(+), 50 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 44ab6394c9ed..107b0c5d6eab 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -188,10 +188,6 @@ struct mem_cgroup {
>  		struct page_counter memsw;	/* v1 only */
>  	};
>  
> -	/* Legacy consumer-oriented counters */
> -	struct page_counter kmem;		/* v1 only */
> -	struct page_counter tcpmem;		/* v1 only */
> -
>  	/* Range enforcement for interrupt charges */
>  	struct work_struct high_work;
>  
> @@ -205,8 +201,6 @@ struct mem_cgroup {
>  	bool zswap_writeback;
>  #endif
>  
> -	unsigned long soft_limit;
> -
>  	/* vmpressure notifications */
>  	struct vmpressure vmpressure;
>  
> @@ -215,13 +209,7 @@ struct mem_cgroup {
>  	 */
>  	bool oom_group;
>  
> -	/* protected by memcg_oom_lock */
> -	bool		oom_lock;
> -	int		under_oom;
> -
> -	int	swappiness;
> -	/* OOM-Killer disable */
> -	int		oom_kill_disable;
> +	int swappiness;
>  
>  	/* memory.events and memory.events.local */
>  	struct cgroup_file events_file;
> @@ -230,27 +218,6 @@ struct mem_cgroup {
>  	/* handle for "memory.swap.events" */
>  	struct cgroup_file swap_events_file;
>  
> -	/* protect arrays of thresholds */
> -	struct mutex thresholds_lock;
> -
> -	/* thresholds for memory usage. RCU-protected */
> -	struct mem_cgroup_thresholds thresholds;
> -
> -	/* thresholds for mem+swap usage. RCU-protected */
> -	struct mem_cgroup_thresholds memsw_thresholds;
> -
> -	/* For oom notifier event fd */
> -	struct list_head oom_notify;
> -
> -	/*
> -	 * Should we move charges of a task when a task is moved into this
> -	 * mem_cgroup ? And what type of charges should we move ?
> -	 */
> -	unsigned long move_charge_at_immigrate;
> -	/* taken only while moving_account > 0 */
> -	spinlock_t		move_lock;
> -	unsigned long		move_lock_flags;
> -
>  	CACHELINE_PADDING(_pad1_);
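
For context, the hunks above show only the removals; the patch
presumably re-adds these members as one #ifdef CONFIG_MEMCG_V1 block
near the end of struct mem_cgroup, just before the dynamic per-node
part, roughly like this sketch (member names are taken from the hunks
above; the exact ordering and placement are assumed):

	struct mem_cgroup {
		...
	#ifdef CONFIG_MEMCG_V1
		/* Legacy consumer-oriented counters */
		struct page_counter kmem;		/* v1 only */
		struct page_counter tcpmem;		/* v1 only */

		unsigned long soft_limit;

		/* protected by memcg_oom_lock */
		bool oom_lock;
		int under_oom;

		/* OOM-Killer disable */
		int oom_kill_disable;

		/* protect arrays of thresholds */
		struct mutex thresholds_lock;

		/* thresholds for memory usage. RCU-protected */
		struct mem_cgroup_thresholds thresholds;

		/* thresholds for mem+swap usage. RCU-protected */
		struct mem_cgroup_thresholds memsw_thresholds;

		/* For oom notifier event fd */
		struct list_head oom_notify;

		/* should we move charges of a task migrating in? */
		unsigned long move_charge_at_immigrate;
		/* taken only while moving_account > 0 */
		spinlock_t move_lock;
		unsigned long move_lock_flags;
	#endif

		struct mem_cgroup_per_node *nodeinfo[];
	};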

Let's also remove _pad1_ and _pad2_, since this rearrangement
nullifies the reasons behind these paddings. We need to run some perf
benchmarks to identify the new false cache sharing fields.
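
As a reminder of what those paddings buy: CACHELINE_PADDING() inserts
a zero-size member aligned to a cache line boundary, so the fields on
either side of it land on different cache lines. A toy illustration
(not kernel code) of the false sharing that removing them can
reintroduce:

	/*
	 * Two counters hammered by different CPUs share one cache
	 * line and bounce it between cores:
	 */
	struct counters {
		unsigned long a;	/* updated hotly by CPU 0 */
		unsigned long b;	/* updated hotly by CPU 1 */
	};

	/* Forcing each counter onto its own cache line avoids that: */
	struct counters_padded {
		unsigned long a ____cacheline_aligned_in_smp;
		unsigned long b ____cacheline_aligned_in_smp;
	};

Something like pahole -C mem_cgroup on vmlinux, plus perf c2c
record/report while the benchmarks run, should point at the members
that end up sharing a line after this reshuffle.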


Thread overview: 23+ messages
2024-06-28 21:03 [PATCH v1 0/9] mm: memcg: put cgroup v1-specific memcg data " Roman Gushchin
2024-06-28 21:03 ` [PATCH v1 1/9] mm: memcg: move memcg_account_kmem() to memcontrol-v1.c Roman Gushchin
2024-06-29  0:30   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 2/9] mm: memcg: factor out legacy socket memory accounting code Roman Gushchin
2024-06-29  0:39   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 3/9] mm: memcg: guard cgroup v1-specific code in mem_cgroup_print_oom_meminfo() Roman Gushchin
2024-06-29  0:40   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 4/9] mm: memcg: gather memcg1-specific fields initialization in memcg1_memcg_init() Roman Gushchin
2024-06-29  0:43   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 5/9] mm: memcg: guard memcg1-specific fields accesses in mm/memcontrol.c Roman Gushchin
2024-06-29  0:49   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 6/9] mm: memcg: put memcg1-specific struct mem_cgroup's members under CONFIG_MEMCG_V1 Roman Gushchin
2024-06-29  0:48   ` Shakeel Butt [this message]
2024-07-04 23:35     ` Andrew Morton
2024-07-05  3:43       ` Shakeel Butt
2024-07-03 15:12   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 7/9] mm: memcg: guard memcg1-specific members of struct mem_cgroup_per_node Roman Gushchin
2024-06-29  0:52   ` Shakeel Butt
2024-07-03 15:13   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 8/9] mm: memcg: put struct task_struct::memcg_in_oom under CONFIG_MEMCG_V1 Roman Gushchin
2024-06-29  0:53   ` Shakeel Butt
2024-06-28 21:03 ` [PATCH v1 9/9] mm: memcg: put struct task_struct::in_user_fault " Roman Gushchin
2024-06-29  0:55   ` Shakeel Butt
