linux-mm.kvack.org archive mirror
From: Jan Kara <jack@suse.cz>
To: Chen Ridong <chenridong@huaweicloud.com>
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	 hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
	 shakeel.butt@linux.dev, muchun.song@linux.dev, tj@kernel.org,
	mkoutny@suse.com,  akpm@linux-foundation.org, vbabka@suse.cz,
	surenb@google.com, jackmanb@google.com,  ziy@nvidia.com,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	 cgroups@vger.kernel.org, linux-mm@kvack.org,
	lujialin4@huawei.com, chenridong@huawei.com
Subject: Re: [PATCH -next] cgroup: switch to css_is_online() helper
Date: Tue, 2 Dec 2025 12:01:01 +0100	[thread overview]
Message-ID: <dzp6jxmf5ggidkhmqabuttaotyrkxzf6ohiuzgcdn6oppkcmfc@vrjeeypoppwe> (raw)
In-Reply-To: <20251202025747.1658159-1-chenridong@huaweicloud.com>

On Tue 02-12-25 02:57:47, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
> 
> Use the newly introduced css_is_online() helper to check the css online
> state instead of testing the CSS_ONLINE flag directly. This improves
> readability and centralizes the state-check logic.
> 
> No functional changes intended.
> 
> Signed-off-by: Chen Ridong <chenridong@huawei.com>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza
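
[For readers following along outside the kernel tree: the helper being
adopted is essentially a named wrapper around the flag test the patch
removes. A minimal, self-contained sketch -- the struct layout and flag
value below are simplified stand-ins for illustration, not the kernel's
exact definitions:]

```c
#include <stdbool.h>

/* Simplified stand-ins for the kernel's cgroup types and flags,
 * so the helper's shape can be shown outside the kernel tree. */
#define CSS_ONLINE (1 << 1)

struct cgroup_subsys_state {
	unsigned int flags;
};

/* The helper the patch switches to: callers ask a named question
 * instead of open-coding "css->flags & CSS_ONLINE" at every site. */
static inline bool css_is_online(struct cgroup_subsys_state *css)
{
	return css->flags & CSS_ONLINE;
}
```

[Each hunk in the diff below is then a mechanical substitution of this
call for the direct flag test, with no change in behavior.]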

> ---
>  fs/fs-writeback.c          | 2 +-
>  include/linux/memcontrol.h | 2 +-
>  kernel/cgroup/cgroup.c     | 4 ++--
>  mm/memcontrol.c            | 2 +-
>  mm/page_owner.c            | 2 +-
>  5 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 6800886c4d10..5dd6e89a6d29 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -981,7 +981,7 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct folio *folio
>  
>  	css = mem_cgroup_css_from_folio(folio);
>  	/* dead cgroups shouldn't contribute to inode ownership arbitration */
> -	if (!(css->flags & CSS_ONLINE))
> +	if (!css_is_online(css))
>  		return;
>  
>  	id = css->id;
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 0651865a4564..6a48398a1f4e 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -893,7 +893,7 @@ static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
>  {
>  	if (mem_cgroup_disabled())
>  		return true;
> -	return !!(memcg->css.flags & CSS_ONLINE);
> +	return css_is_online(&memcg->css);
>  }
>  
>  void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 1e4033d05c29..ad0a35721dff 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -4939,7 +4939,7 @@ bool css_has_online_children(struct cgroup_subsys_state *css)
>  
>  	rcu_read_lock();
>  	css_for_each_child(child, css) {
> -		if (child->flags & CSS_ONLINE) {
> +		if (css_is_online(child)) {
>  			ret = true;
>  			break;
>  		}
> @@ -5744,7 +5744,7 @@ static void offline_css(struct cgroup_subsys_state *css)
>  
>  	lockdep_assert_held(&cgroup_mutex);
>  
> -	if (!(css->flags & CSS_ONLINE))
> +	if (!css_is_online(css))
>  		return;
>  
>  	if (ss->css_offline)
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index be810c1fbfc3..e2e49f4ec9e0 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -281,7 +281,7 @@ ino_t page_cgroup_ino(struct page *page)
>  	/* page_folio() is racy here, but the entire function is racy anyway */
>  	memcg = folio_memcg_check(page_folio(page));
>  
> -	while (memcg && !(memcg->css.flags & CSS_ONLINE))
> +	while (memcg && !css_is_online(&memcg->css))
>  		memcg = parent_mem_cgroup(memcg);
>  	if (memcg)
>  		ino = cgroup_ino(memcg->css.cgroup);
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index a70245684206..27d19f01009c 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -530,7 +530,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
>  	if (!memcg)
>  		goto out_unlock;
>  
> -	online = (memcg->css.flags & CSS_ONLINE);
> +	online = css_is_online(&memcg->css);
>  	cgroup_name(memcg->css.cgroup, name, sizeof(name));
>  	ret += scnprintf(kbuf + ret, count - ret,
>  			"Charged %sto %smemcg %s\n",
> -- 
> 2.34.1
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


Thread overview: 5+ messages
2025-12-02  2:57 Chen Ridong
2025-12-02 11:01 ` Jan Kara [this message]
2025-12-02 11:54   ` Chen Ridong
2025-12-02 15:27 ` Zi Yan
2025-12-02 16:08 ` Shakeel Butt
