From: Yue Zhao <findns94@gmail.com>
To: akpm@linux-foundation.org
Cc: roman.gushchin@linux.dev, hannes@cmpxchg.org, mhocko@kernel.org,
shakeelb@google.com, muchun.song@linux.dev, willy@infradead.org,
linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org, tangyeechou@gmail.com,
Yue Zhao <findns94@gmail.com>
Subject: [PATCH v2, 3/4] mm, memcg: Prevent memory.oom_control load/store tearing
Date: Mon, 6 Mar 2023 23:41:37 +0800
Message-ID: <20230306154138.3775-4-findns94@gmail.com>
In-Reply-To: <20230306154138.3775-1-findns94@gmail.com>

The knob for the cgroup v1 memory controller, memory.oom_control, is not
protected by any locking, so it can be modified while it is being used.
This is not an actual problem because such races are unlikely, but it is
better to use READ_ONCE()/WRITE_ONCE() to prevent the compiler from doing
anything funky.

The access to memcg->oom_kill_disable is lockless, so it can be
concurrently set at the same time as we are trying to read it.
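
For illustration only (not part of the patch), a minimal sketch of the
annotation pattern in kernel C; the struct and helpers below are
hypothetical and merely mirror the memcg field name:

/*
 * Assumes kernel context (READ_ONCE()/WRITE_ONCE() from <linux/compiler.h>).
 * A plain access to a field that is read and written without a lock may be
 * torn, refetched, or fused by the compiler; the *_ONCE() helpers force a
 * single, untorn access.
 */
struct example_ctrl {
	int oom_kill_disable;	/* read and written locklessly */
};

static int example_read(struct example_ctrl *c)
{
	return READ_ONCE(c->oom_kill_disable);
}

static void example_write(struct example_ctrl *c, int val)
{
	WRITE_ONCE(c->oom_kill_disable, val);
}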
Signed-off-by: Yue Zhao <findns94@gmail.com>
---
mm/memcontrol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index dca895c66a9b..26605b2f51b1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4515,7 +4515,7 @@ static int mem_cgroup_oom_control_read(struct seq_file *sf, void *v)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_seq(sf);
 
-	seq_printf(sf, "oom_kill_disable %d\n", memcg->oom_kill_disable);
+	seq_printf(sf, "oom_kill_disable %d\n", READ_ONCE(memcg->oom_kill_disable));
 	seq_printf(sf, "under_oom %d\n", (bool)memcg->under_oom);
 	seq_printf(sf, "oom_kill %lu\n",
 		   atomic_long_read(&memcg->memory_events[MEMCG_OOM_KILL]));
@@ -4531,7 +4531,7 @@ static int mem_cgroup_oom_control_write(struct cgroup_subsys_state *css,
 	if (mem_cgroup_is_root(memcg) || !((val == 0) || (val == 1)))
 		return -EINVAL;
 
-	memcg->oom_kill_disable = val;
+	WRITE_ONCE(memcg->oom_kill_disable, val);
 	if (!val)
 		memcg_oom_recover(memcg);
--
2.17.1
Thread overview: 12+ messages
2023-03-06 15:41 [PATCH v2, 0/4] mm, memcg: cgroup v1 and v2 tunable load/store tearing fixes Yue Zhao
2023-03-06 15:41 ` [PATCH v2, 1/4] mm, memcg: Prevent memory.oom.group load/store tearing Yue Zhao
2023-03-06 17:47 ` Michal Hocko
2023-03-06 15:41 ` [PATCH v2, 2/4] mm, memcg: Prevent memory.swappiness " Yue Zhao
2023-03-06 17:50 ` Michal Hocko
2023-03-06 15:41 ` Yue Zhao [this message]
2023-03-06 17:53 ` [PATCH v2, 3/4] mm, memcg: Prevent memory.oom_control " Michal Hocko
2023-03-08 14:51 ` Martin Zhao
2023-03-06 15:41 ` [PATCH v2, 4/4] mm, memcg: Prevent memory.soft_limit_in_bytes " Yue Zhao
2023-03-06 17:55 ` Michal Hocko
2023-03-06 16:25 ` [PATCH v2, 0/4] mm, memcg: cgroup v1 and v2 tunable load/store tearing fixes Shakeel Butt
2023-03-06 17:51 ` Roman Gushchin