From: Michal Hocko <mhocko@suse.com>
To: Yue Zhao <findns94@gmail.com>
Cc: akpm@linux-foundation.org, roman.gushchin@linux.dev,
hannes@cmpxchg.org, shakeelb@google.com, muchun.song@linux.dev,
willy@infradead.org, linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org, tangyeechou@gmail.com
Subject: Re: [PATCH v2, 4/4] mm, memcg: Prevent memory.soft_limit_in_bytes load/store tearing
Date: Mon, 6 Mar 2023 18:55:11 +0100 [thread overview]
Message-ID: <ZAYo/ztYBgY0Tx9I@dhcp22.suse.cz> (raw)
In-Reply-To: <20230306154138.3775-5-findns94@gmail.com>
On Mon 06-03-23 23:41:38, Yue Zhao wrote:
> The cgroup v1 memory controller knob memory.soft_limit_in_bytes is not
> protected by any locking, so it can be modified while it is being read.
> This is not an actual problem because such races are unlikely, but it is
> better to use READ_ONCE/WRITE_ONCE to prevent the compiler from doing
> anything funky.
>
> Accesses to memcg->soft_limit are lockless, so it can be written
> concurrently while we are trying to read it.
Similar to my comments on the previous patches: mem_cgroup_css_reset and
mem_cgroup_css_alloc also write memcg->soft_limit and are not covered.
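Something along these lines on top of your patch would cover those
writers as well (an untested sketch; it assumes both functions still
simply initialize the soft limit to PAGE_COUNTER_MAX, so please check
the actual assignments before sending a new version):

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ in mem_cgroup_css_alloc():
-	memcg->soft_limit = PAGE_COUNTER_MAX;
+	WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
@@ in mem_cgroup_css_reset():
-	memcg->soft_limit = PAGE_COUNTER_MAX;
+	WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);

With that, all readers and writers of memcg->soft_limit would be
annotated consistently.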
>
> Signed-off-by: Yue Zhao <findns94@gmail.com>
> ---
> mm/memcontrol.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 26605b2f51b1..20566f59bbcb 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3728,7 +3728,7 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
> case RES_FAILCNT:
> return counter->failcnt;
> case RES_SOFT_LIMIT:
> - return (u64)memcg->soft_limit * PAGE_SIZE;
> + return (u64)READ_ONCE(memcg->soft_limit) * PAGE_SIZE;
> default:
> BUG();
> }
> @@ -3870,7 +3870,7 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
> if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
> ret = -EOPNOTSUPP;
> } else {
> - memcg->soft_limit = nr_pages;
> + WRITE_ONCE(memcg->soft_limit, nr_pages);
> ret = 0;
> }
> break;
> --
> 2.17.1
--
Michal Hocko
SUSE Labs
Thread overview: 12+ messages
2023-03-06 15:41 [PATCH v2, 0/4] mm, memcg: cgroup v1 and v2 tunable load/store tearing fixes Yue Zhao
2023-03-06 15:41 ` [PATCH v2, 1/4] mm, memcg: Prevent memory.oom.group load/store tearing Yue Zhao
2023-03-06 17:47 ` Michal Hocko
2023-03-06 15:41 ` [PATCH v2, 2/4] mm, memcg: Prevent memory.swappiness " Yue Zhao
2023-03-06 17:50 ` Michal Hocko
2023-03-06 15:41 ` [PATCH v2, 3/4] mm, memcg: Prevent memory.oom_control " Yue Zhao
2023-03-06 17:53 ` Michal Hocko
2023-03-08 14:51 ` Martin Zhao
2023-03-06 15:41 ` [PATCH v2, 4/4] mm, memcg: Prevent memory.soft_limit_in_bytes " Yue Zhao
2023-03-06 17:55 ` Michal Hocko [this message]
2023-03-06 16:25 ` [PATCH v2, 0/4] mm, memcg: cgroup v1 and v2 tunable load/store tearing fixes Shakeel Butt
2023-03-06 17:51 ` Roman Gushchin