From: Chengming Zhou <chengming.zhou@linux.dev>
To: linke li <lilinke99@qq.com>
Cc: Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/slub: mark racy accesses on slab->slabs
Date: Sat, 9 Mar 2024 19:24:11 +0800
Message-ID: <8bc1aebf-8395-416f-8c23-53cbd25d0eef@linux.dev>
In-Reply-To: <tencent_60DA4643DAB0341D8320190AC0A403636807@qq.com>

On 2024/3/9 15:48, linke li wrote:
> The reads of slab->slabs are racy because it may be changed concurrently
> by put_cpu_partial(). In slabs_cpu_partial_show(), ->slabs is only used
> for output. Data-racy reads from shared variables that are used only for
> diagnostic purposes should typically use data_race(), since it is
> normally not a problem if the values are off by a little.
> 
> This patch is aimed at reducing the number of benign races reported by
> KCSAN, in order to focus future debugging effort on harmful races.
> 
> Signed-off-by: linke li <lilinke99@qq.com>
> ---
>  mm/slub.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..7b20591e7f8a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -6257,7 +6257,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
>  		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
>  
>  		if (slab)
> -			slabs += slab->slabs;
> +			slabs += data_race(slab->slabs);
>  	}
>  #endif
>  
> @@ -6271,7 +6271,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
>  
>  		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
>  		if (slab) {
> -			slabs = READ_ONCE(slab->slabs);
> +			slabs = data_race(slab->slabs);
>  			objects = (slabs * oo_objects(s->oo)) / 2;
>  			len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
>  					     cpu, objects, slabs);

There is another unmarked access of "slab->slabs" in show_slab_objects(),
which you can change too.
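
For reference, that change would presumably look something like this (an
untested sketch; the exact surrounding code in show_slab_objects() may
differ):

		slab = slub_percpu_partial_read_once(c);
		if (slab) {
-			x = slab->slabs;	/* unmarked racy read */
+			x = data_race(slab->slabs);
		}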

That said, I'm not sure it's really safe to access "slab->slabs" here without
any protection. Although it should be no problem in practice, an alternative
would be to keep the partial slabs count in the kmem_cache_cpu struct instead.
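
A rough sketch of that alternative (the "partial_slabs" field name is made
up here, and it would have to be updated wherever ->partial changes, e.g.
in put_cpu_partial() and when the percpu partial list is flushed):

	struct kmem_cache_cpu {
		...
	#ifdef CONFIG_SLUB_CPU_PARTIAL
		struct slab *partial;		/* Partially allocated slabs */
		unsigned int partial_slabs;	/* Count of slabs on ->partial */
	#endif
		...
	};

Then slabs_cpu_partial_show() and show_slab_objects() could read that
counter directly, instead of dereferencing the head slab of the percpu
partial list.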

Thanks.

