linux-mm.kvack.org archive mirror
From: Vlastimil Babka <vbabka@suse.cz>
To: Chengming Zhou <chengming.zhou@linux.dev>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com
Subject: Re: [PATCH 4/4] mm/slub: free KFENCE objects in slab_free_hook()
Date: Wed, 6 Dec 2023 10:58:05 +0100	[thread overview]
Message-ID: <79e29576-12a2-a423-92f3-d8a7bcd2f0ce@suse.cz> (raw)
In-Reply-To: <44421a37-4343-46d0-9e5c-17c2cd038cf2@linux.dev>

On 12/5/23 14:27, Chengming Zhou wrote:
> On 2023/12/5 03:34, Vlastimil Babka wrote:
>> When freeing an object that was allocated from KFENCE, we do that in the
>> slowpath __slab_free(), relying on the fact that a KFENCE "slab" cannot be
>> the cpu slab, so the fastpath has to fall back to the slowpath.
>> 
>> This optimization doesn't help much though, because is_kfence_address()
>> is checked earlier anyway during the free hook processing or detached
>> freelist building. Thus we can simplify the code by making
>> slab_free_hook() free the KFENCE object immediately, similarly to KASAN
>> quarantine.
>> 
>> In slab_free_hook() we can place kfence_free() above init processing, as
>> callers have been making sure to set init to false for KFENCE objects.
>> This simplifies slab_free(). It also places kfence_free() above
>> kasan_slab_free(), which is OK as that skips KFENCE objects anyway.
>> 
>> While at it, also determine the init value in slab_free_freelist_hook()
>> outside of the loop.
>> 
>> This change will also make introducing per cpu array caches easier.
>> 
>> Tested-by: Marco Elver <elver@google.com>
>> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>> ---
>>  mm/slub.c | 22 ++++++++++------------
>>  1 file changed, 10 insertions(+), 12 deletions(-)
>> 
>> diff --git a/mm/slub.c b/mm/slub.c
>> index ed2fa92e914c..e38c2b712f6c 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2039,7 +2039,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
>>   * production configuration these hooks all should produce no code at all.
>>   *
>>   * Returns true if freeing of the object can proceed, false if its reuse
>> - * was delayed by KASAN quarantine.
>> + * was delayed by KASAN quarantine, or it was returned to KFENCE.
>>   */
>>  static __always_inline
>>  bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>> @@ -2057,6 +2057,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>>  		__kcsan_check_access(x, s->object_size,
>>  				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
>>  
>> +	if (kfence_free(kasan_reset_tag(x)))
> 
> I'm wondering if "kasan_reset_tag()" is needed here?

I think so, because AFAICS the is_kfence_address() check in kfence_free()
could otherwise be a false negative. In fact, now I even question some of the
other is_kfence_address() checks in mm/slub.c, mainly in
build_detached_freelist(), which starts from pointers coming directly from
slab users. Insight from KASAN/KFENCE folks would be appreciated :)
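
To illustrate the concern, here is a rough userspace sketch (hypothetical
pool address, size and helper names, not the kernel implementation) of how a
KASAN tag in the pointer's top byte can defeat an unsigned pool-range check
of the kind is_kfence_address() does, and how resetting the tag restores it:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define POOL_SIZE	(2UL * 1024 * 1024)		/* hypothetical pool size */
static uintptr_t pool_base = 0xffff000010000000UL;	/* hypothetical base, top byte 0xff */

/* same shape as is_kfence_address(): unsigned distance from the pool base */
static bool in_pool(uintptr_t addr)
{
	return (addr - pool_base) < POOL_SIZE;
}

/* analogous to kasan_reset_tag(): force the top byte to the match-all 0xff */
static uintptr_t reset_tag(uintptr_t addr)
{
	return addr | 0xff00000000000000UL;
}

int main(void)
{
	uintptr_t obj = pool_base + 0x1000;	/* an object inside the pool */
	/* same address, but carrying a (made up) KASAN tag 0x2a in the top byte */
	uintptr_t tagged = (obj & 0x00ffffffffffffffUL) | (0x2aUL << 56);

	printf("untagged pointer in pool: %d\n", in_pool(obj));			/* 1 */
	printf("tagged pointer in pool:   %d\n", in_pool(tagged));		/* 0: false negative */
	printf("after tag reset:          %d\n", in_pool(reset_tag(tagged)));	/* 1 again */
	return 0;
}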

> The patch looks good to me!
> 
> Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>

Thanks!

> Thanks.
> 
>> +		return false;
>> +
>>  	/*
>>  	 * As memory initialization might be integrated into KASAN,
>>  	 * kasan_slab_free and initialization memset's must be
>> @@ -2086,23 +2089,25 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>>  	void *object;
>>  	void *next = *head;
>>  	void *old_tail = *tail;
>> +	bool init;
>>  
>>  	if (is_kfence_address(next)) {
>>  		slab_free_hook(s, next, false);
>> -		return true;
>> +		return false;
>>  	}
>>  
>>  	/* Head and tail of the reconstructed freelist */
>>  	*head = NULL;
>>  	*tail = NULL;
>>  
>> +	init = slab_want_init_on_free(s);
>> +
>>  	do {
>>  		object = next;
>>  		next = get_freepointer(s, object);
>>  
>>  		/* If object's reuse doesn't have to be delayed */
>> -		if (likely(slab_free_hook(s, object,
>> -					  slab_want_init_on_free(s)))) {
>> +		if (likely(slab_free_hook(s, object, init))) {
>>  			/* Move object to the new freelist */
>>  			set_freepointer(s, object, *head);
>>  			*head = object;
>> @@ -4103,9 +4108,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>  
>>  	stat(s, FREE_SLOWPATH);
>>  
>> -	if (kfence_free(head))
>> -		return;
>> -
>>  	if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
>>  		free_to_partial_list(s, slab, head, tail, cnt, addr);
>>  		return;
>> @@ -4290,13 +4292,9 @@ static __fastpath_inline
>>  void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
>>  	       unsigned long addr)
>>  {
>> -	bool init;
>> -
>>  	memcg_slab_free_hook(s, slab, &object, 1);
>>  
>> -	init = !is_kfence_address(object) && slab_want_init_on_free(s);
>> -
>> -	if (likely(slab_free_hook(s, object, init)))
>> +	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
>>  		do_slab_free(s, slab, object, object, 1, addr);
>>  }
>>  
>> 
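For completeness, a small userspace mock-up (hypothetical mock_* names, not
kernel code) of the contract this patch relies on: slab_free_hook() returning
false means the object was diverted, either to the KASAN quarantine or, now,
back to KFENCE, so the caller neither init-memsets it nor puts it on a
freelist:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct object { char payload[32]; bool kfence_owned; };

/* stand-in for kfence_free(): claims the object iff KFENCE allocated it */
static bool mock_kfence_free(struct object *obj)
{
	return obj->kfence_owned;
}

/* mirrors the new ordering: divert to KFENCE before any init-on-free work */
static bool mock_slab_free_hook(struct object *obj, bool init)
{
	if (mock_kfence_free(obj))
		return false;			/* freeing must not proceed */
	if (init)
		memset(obj->payload, 0, sizeof(obj->payload));	/* init-on-free */
	return true;				/* caller may put obj on the freelist */
}

static void mock_slab_free(struct object *obj, bool want_init_on_free)
{
	/* no is_kfence_address() special case needed for init anymore */
	if (mock_slab_free_hook(obj, want_init_on_free))
		printf("object %p goes to the freelist\n", (void *)obj);
	else
		printf("object %p was diverted, caller does nothing\n", (void *)obj);
}

int main(void)
{
	struct object a = { .kfence_owned = false };
	struct object b = { .kfence_owned = true };

	mock_slab_free(&a, true);	/* regular object: zeroed, freelisted */
	mock_slab_free(&b, true);	/* KFENCE object: diverted, no memset */
	return 0;
}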



Thread overview: 17+ messages
2023-12-04 19:34 [PATCH 0/4] SLUB: cleanup hook processing Vlastimil Babka
2023-12-04 19:34 ` [PATCH 1/4] mm/slub: fix bulk alloc and free stats Vlastimil Babka
2023-12-05  8:11   ` Chengming Zhou
2023-12-04 19:34 ` [PATCH 2/4] mm/slub: introduce __kmem_cache_free_bulk() without free hooks Vlastimil Babka
2023-12-05  8:19   ` Chengming Zhou
2023-12-05 19:57     ` Vlastimil Babka
2023-12-06  0:31       ` Chengming Zhou
2023-12-04 19:34 ` [PATCH 3/4] mm/slub: handle bulk and single object freeing separately Vlastimil Babka
2023-12-05 13:23   ` Chengming Zhou
2023-12-04 19:34 ` [PATCH 4/4] mm/slub: free KFENCE objects in slab_free_hook() Vlastimil Babka
2023-12-05 13:27   ` Chengming Zhou
2023-12-06  9:58     ` Vlastimil Babka [this message]
2023-12-06 13:01       ` Chengming Zhou
2023-12-06 14:44         ` Marco Elver
2023-12-11 22:11           ` Andrey Konovalov
2023-12-12 11:42             ` Vlastimil Babka
2023-12-20 23:44               ` Andrey Konovalov
