From: Andrey Konovalov <andreyknvl@gmail.com>
Date: Thu, 1 Aug 2024 02:22:54 +0200
Subject: Re: [PATCH v5 1/2] kasan: catch invalid free before SLUB reinitializes the object
To: Jann Horn
Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino,
 Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Marco Elver, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
In-Reply-To: <20240730-kasan-tsbrcu-v5-1-48d3cbdfccc5@google.com>
References: <20240730-kasan-tsbrcu-v5-0-48d3cbdfccc5@google.com>
 <20240730-kasan-tsbrcu-v5-1-48d3cbdfccc5@google.com>

On Tue, Jul 30, 2024 at 1:06 PM Jann Horn wrote:
>
> Currently, when KASAN is combined with init-on-free behavior, the
> initialization happens before KASAN's "invalid free" checks.
>
> More importantly, a subsequent commit will want to RCU-delay the actual
> SLUB freeing of an object, and we'd like KASAN to still validate
> synchronously that freeing the object is permitted. (Otherwise this
> change will make the existing testcase kmem_cache_invalid_free fail.)
>
> So add a new KASAN hook that allows KASAN to pre-validate a
> kmem_cache_free() operation before SLUB actually starts modifying the
> object or its metadata.

A few more minor comments below. With that:

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>

Thank you!

> Inside KASAN, this:
>
> - moves checks from poison_slab_object() into check_slab_free()
> - moves kasan_arch_is_ready() up into callers of poison_slab_object()
> - removes "ip" argument of poison_slab_object() and __kasan_slab_free()
>   (since those functions no longer do any reporting)
> - renames check_slab_free() to check_slab_allocation()

check_slab_allocation() is introduced in this patch, so technically
you don't rename anything.

> Acked-by: Vlastimil Babka #slub
> Signed-off-by: Jann Horn
> ---
>  include/linux/kasan.h | 43 ++++++++++++++++++++++++++++++++++---
>  mm/kasan/common.c     | 59 +++++++++++++++++++++++++++++++--------------------
>  mm/slub.c             |  7 ++++++
>  3 files changed, 83 insertions(+), 26 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 70d6a8f6e25d..34cb7a25aacb 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -172,19 +172,50 @@ static __always_inline void * __must_check kasan_init_slab_obj(
>  {
>         if (kasan_enabled())
>                 return __kasan_init_slab_obj(cache, object);
>         return (void *)object;
>  }
>
> -bool __kasan_slab_free(struct kmem_cache *s, void *object,
> -                       unsigned long ip, bool init);
> +bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
> +                               unsigned long ip);
> +/**
> + * kasan_slab_pre_free - Validate a slab object freeing request.
> + * @object: Object to free.
> + *
> + * This function checks whether freeing the given object might be permitted; it
> + * checks things like whether the given object is properly aligned and not
> + * already freed.
> + *
> + * This function is only intended for use by the slab allocator.
> + *
> + * @Return true if freeing the object is known to be invalid; false otherwise.
> + */

Let's reword this to:

kasan_slab_pre_free - Check whether freeing a slab object is safe.
@object: Object to be freed.

This function checks whether freeing the given object is safe. It
performs checks to detect double-free and invalid-free bugs and
reports them.

This function is intended only for use by the slab allocator.

@Return true if freeing the object is not safe; false otherwise.
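
To make the intended calling convention concrete, here is a simplified
sketch of how a slab free path is expected to consult the two hooks
(illustrative only; the function name is made up, and the real
slab_free_hook() changed in the patch below does more):

        static bool example_free_hook(struct kmem_cache *s, void *object, bool init)
        {
                /*
                 * Synchronous validation: double-free and invalid-free are
                 * detected and reported here, before the allocator touches
                 * the object or its metadata.
                 */
                if (kasan_slab_pre_free(s, object))
                        return false;   /* do not put the object on the freelist */

                /*
                 * Poisoning and, for the Generic mode, possible quarantining;
                 * returns true if KASAN defers the actual freeing, in which
                 * case the allocator must not reuse the object yet.
                 */
                return !kasan_slab_free(s, object, init);
        }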
> +static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
> +                                               void *object)
> +{
> +       if (kasan_enabled())
> +               return __kasan_slab_pre_free(s, object, _RET_IP_);
> +       return false;
> +}
> +
> +bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init);
> +/**
> + * kasan_slab_free - Possibly handle slab object freeing.
> + * @object: Object to free.
> + *
> + * This hook is called from the slab allocator to give KASAN a chance to take
> + * ownership of the object and handle its freeing.
> + * kasan_slab_pre_free() must have already been called on the same object.
> + *
> + * @Return true if KASAN took ownership of the object; false otherwise.
> + */

Let's reword this to:

kasan_slab_free - Poison, initialize, and quarantine a slab object.
@object: Object to be freed.
@init: Whether to initialize the object.

This function poisons a slab object and saves a free stack trace for it,
except for SLAB_TYPESAFE_BY_RCU caches.

For KASAN modes that have integrated memory initialization
(kasan_has_integrated_init() == true), this function also initializes
the object's memory. For other modes, the @init argument is ignored.

For the Generic mode, this function might also quarantine the object. When
this happens, KASAN will defer freeing the object to a later stage and
handle it internally then. The return value indicates whether the object
was quarantined.

This function is intended only for use by the slab allocator.

@Return true if KASAN quarantined the object; false otherwise.

>  static __always_inline bool kasan_slab_free(struct kmem_cache *s,
>                                                 void *object, bool init)
>  {
>         if (kasan_enabled())
> -               return __kasan_slab_free(s, object, _RET_IP_, init);
> +               return __kasan_slab_free(s, object, init);
>         return false;
>  }
>
>  void __kasan_kfree_large(void *ptr, unsigned long ip);
>  static __always_inline void kasan_kfree_large(void *ptr)
>  {
> @@ -368,12 +399,18 @@ static inline void kasan_poison_new_object(struct kmem_cache *cache,
>                                         void *object) {}
>  static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
>                                 const void *object)
>  {
>         return (void *)object;
>  }
> +
> +static inline bool kasan_slab_pre_free(struct kmem_cache *s, void *object)
> +{
> +       return false;
> +}
> +
>  static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
>  {
>         return false;
>  }
>  static inline void kasan_kfree_large(void *ptr) {}
>  static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 85e7c6b4575c..8cede1ce00e1 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -205,59 +205,65 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
>         /* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
>         object = set_tag(object, assign_tag(cache, object, true));
>
>         return (void *)object;
>  }
>
> -static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
> -                                     unsigned long ip, bool init)
> +/* returns true for invalid request */

"Returns true when freeing the object is not safe."
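
For reference, the kind of calls that count as "not safe" here, in the
spirit of the kmem_cache_invalid_free test mentioned in the commit
message (illustrative snippet, not actual test code; error handling
omitted):

        struct kmem_cache *cache = kmem_cache_create("example", 128, 0, 0, NULL);
        char *p = kmem_cache_alloc(cache, GFP_KERNEL);

        kmem_cache_free(cache, p + 1);  /* not the start of an object: invalid-free */
        kmem_cache_free(cache, p);      /* legitimate free */
        kmem_cache_free(cache, p);      /* object already freed: double-free */

With this patch, both bad calls are caught by kasan_slab_pre_free()
before SLUB touches the object.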
> +static bool check_slab_allocation(struct kmem_cache *cache, void *object,
> +                                 unsigned long ip)
>  {
> -       void *tagged_object;
> -
> -       if (!kasan_arch_is_ready())
> -               return false;
> +       void *tagged_object = object;
>
> -       tagged_object = object;
>         object = kasan_reset_tag(object);
>
>         if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) {
>                 kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_INVALID_FREE);
>                 return true;
>         }
>
> -       /* RCU slabs could be legally used after free within the RCU period. */
> -       if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
> -               return false;
> -
>         if (!kasan_byte_accessible(tagged_object)) {
>                 kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_DOUBLE_FREE);
>                 return true;
>         }
>
> +       return false;
> +}
> +
> +static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> +                                     bool init)
> +{
> +       void *tagged_object = object;
> +
> +       object = kasan_reset_tag(object);
> +
> +       /* RCU slabs could be legally used after free within the RCU period. */
> +       if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
> +               return;
> +
>         kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
>                         KASAN_SLAB_FREE, init);
>
>         if (kasan_stack_collection_enabled())
>                 kasan_save_free_info(cache, tagged_object);
> +}
>
> -       return false;
> +bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> +                               unsigned long ip)
> +{
> +       if (!kasan_arch_is_ready() || is_kfence_address(object))
> +               return false;
> +       return check_slab_allocation(cache, object, ip);
>  }
>
> -bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> -                       unsigned long ip, bool init)
> +bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init)
>  {
> -       if (is_kfence_address(object))
> +       if (!kasan_arch_is_ready() || is_kfence_address(object))
>                 return false;
>
> -       /*
> -        * If the object is buggy, do not let slab put the object onto the
> -        * freelist. The object will thus never be allocated again and its
> -        * metadata will never get released.
> -        */
> -       if (poison_slab_object(cache, object, ip, init))
> -               return true;
> +       poison_slab_object(cache, object, init);
>
>         /*
>          * If the object is put into quarantine, do not let slab put the object
>          * onto the freelist for now. The object's metadata is kept until the
>          * object gets evicted from quarantine.
>          */
> @@ -503,15 +509,22 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
>                 kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
>                 return true;
>         }
>
>         if (is_kfence_address(ptr))
>                 return false;
> +       if (!kasan_arch_is_ready())
> +               return true;

Hm, I think we had a bug here: the function should return true in both
cases. This seems reasonable: if KASAN is not checking the object, the
caller can do whatever they want with it.

>         slab = folio_slab(folio);
> -       return !poison_slab_object(slab->slab_cache, ptr, ip, false);
> +
> +       if (check_slab_allocation(slab->slab_cache, ptr, ip))
> +               return false;
> +
> +       poison_slab_object(slab->slab_cache, ptr, false);
> +       return true;
>  }
>
>  void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
>  {
>         struct slab *slab;
>         gfp_t flags = 0; /* Might be executing under a lock. */
> diff --git a/mm/slub.c b/mm/slub.c
> index 3520acaf9afa..0c98b6a2124f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2223,12 +2223,19 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>                 __kcsan_check_access(x, s->object_size,
>                                 KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
>
>         if (kfence_free(x))
>                 return false;
>
> +       /*
> +        * Give KASAN a chance to notice an invalid free operation before we
> +        * modify the object.
> +        */
> +       if (kasan_slab_pre_free(s, x))
> +               return false;
> +
>         /*
>          * As memory initialization might be integrated into KASAN,
>          * kasan_slab_free and initialization memset's must be
>          * kept together to avoid discrepancies in behavior.
>          *
>          * The initialization memset's clear the object and the metadata,
>
> --
> 2.46.0.rc1.232.g9752f9e123-goog
>