From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <45d91310-1c0c-4e14-b705-eb35260be04a@suse.cz>
Date: Thu, 25 Jul 2024 18:06:15 +0200
From: Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH v3 2/2] slub: Introduce CONFIG_SLUB_RCU_DEBUG
To: Jann Horn, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
 Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Christoph Lameter,
 Pekka Enberg, David Rientjes, Joonsoo Kim, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Marco Elver, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
References: <20240725-kasan-tsbrcu-v3-0-51c92f8f1101@google.com>
 <20240725-kasan-tsbrcu-v3-2-51c92f8f1101@google.com>
In-Reply-To: <20240725-kasan-tsbrcu-v3-2-51c92f8f1101@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 7/25/24 5:31 PM, Jann Horn wrote:
> Currently, KASAN is unable to catch use-after-free in SLAB_TYPESAFE_BY_RCU
> slabs because use-after-free is allowed within the RCU grace period by
> design.
> 
> Add a SLUB debugging feature which RCU-delays every individual
> kmem_cache_free() before either actually freeing the object or handing it
> off to KASAN, and change KASAN to poison freed objects as normal when this
> option is enabled.
> 
> For now I've configured Kconfig.debug to default-enable this feature in the
> KASAN GENERIC and SW_TAGS modes; I'm not enabling it by default in HW_TAGS
> mode because I'm not sure if it might have unwanted performance degradation
> effects there.
> 
> Note that this is mostly useful with KASAN in the quarantine-based GENERIC
> mode; SLAB_TYPESAFE_BY_RCU slabs are basically always also slabs with a
> ->ctor, and KASAN's assign_tag() currently has to assign fixed tags for
> those, reducing the effectiveness of SW_TAGS/HW_TAGS mode.
> (A possible future extension of this work would be to also let SLUB call
> the ->ctor() on every allocation instead of only when the slab page is
> allocated; then tag-based modes would be able to assign new tags on every
> reallocation.)
> 
> Signed-off-by: Jann Horn

Yeah, this "we try but might fail" approach looks to be a suitable tradeoff
for this debugging feature in that it keeps the complexity lower. Thanks.
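
To spell out the "try but might fail" part for anyone skimming the thread:
as I read the slab_free_hook() hunk quoted below, the free path for a
SLAB_TYPESAFE_BY_RCU cache with this option enabled boils down to roughly
the following. This is a simplified sketch, not the literal code; the helper
name is made up for illustration (in the patch this logic sits inline in
slab_free_hook()), while rcu_delayed_free and slab_free_after_rcu_debug are
the names introduced by the patch:

  /* As introduced in the patch: out-of-object rcu_head wrapper. */
  struct rcu_delayed_free {
          struct rcu_head head;
          void *object;
  };

  /* Sketch only (made-up helper name): try to RCU-delay the free of x. */
  static bool slub_rcu_debug_delay_free(void *x)
  {
          struct rcu_delayed_free *delayed_free;

          /* The rcu_head can't live inside the object, so allocate a wrapper. */
          delayed_free = kmalloc(sizeof(*delayed_free), GFP_NOWAIT);
          if (!delayed_free)
                  return false;   /* no memory: free immediately, no RCU delay */

          delayed_free->object = x;
          call_rcu(&delayed_free->head, slab_free_after_rcu_debug);
          return true;            /* real free resumes in the RCU callback */
  }

So under memory pressure the GFP_NOWAIT allocation can fail and the object
gets recycled without the RCU delay, which is exactly why the Kconfig help
text calls this a debugging feature rather than a security feature.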
Acked-by: Vlastimil Babka #slab

> ---
>  include/linux/kasan.h | 14 ++++++----
>  mm/Kconfig.debug      | 29 ++++++++++++++++++++
>  mm/kasan/common.c     | 13 +++++----
>  mm/kasan/kasan_test.c | 44 +++++++++++++++++++++++++++++
>  mm/slab_common.c      | 12 ++++++++
>  mm/slub.c             | 76 +++++++++++++++++++++++++++++++++++++++++++++------
>  6 files changed, 170 insertions(+), 18 deletions(-)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index ebd93c843e78..c64483d3e2bd 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -186,12 +186,15 @@ static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
>  }
> 
>  bool __kasan_slab_free(struct kmem_cache *s, void *object,
> -                       unsigned long ip, bool init);
> +                       unsigned long ip, bool init, bool after_rcu_delay);
>  static __always_inline bool kasan_slab_free(struct kmem_cache *s,
> -                                            void *object, bool init)
> +                                            void *object, bool init,
> +                                            bool after_rcu_delay)
>  {
> -        if (kasan_enabled())
> -                return __kasan_slab_free(s, object, _RET_IP_, init);
> +        if (kasan_enabled()) {
> +                return __kasan_slab_free(s, object, _RET_IP_, init,
> +                                         after_rcu_delay);
> +        }
>          return false;
>  }
> 
> @@ -387,7 +390,8 @@ static inline bool kasan_slab_pre_free(struct kmem_cache *s, void *object)
>          return false;
>  }
> 
> -static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
> +static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> +                                   bool init, bool after_rcu_delay)
>  {
>          return false;
>  }
> diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
> index afc72fde0f03..0c088532f5a7 100644
> --- a/mm/Kconfig.debug
> +++ b/mm/Kconfig.debug
> @@ -70,6 +70,35 @@ config SLUB_DEBUG_ON
>           off in a kernel built with CONFIG_SLUB_DEBUG_ON by specifying
>           "slab_debug=-".
> 
> +config SLUB_RCU_DEBUG
> +        bool "Make use-after-free detection possible in TYPESAFE_BY_RCU caches"
> +        depends on SLUB_DEBUG
> +        default KASAN_GENERIC || KASAN_SW_TAGS
> +        help
> +          Make SLAB_TYPESAFE_BY_RCU caches behave approximately as if the cache
> +          was not marked as SLAB_TYPESAFE_BY_RCU and every caller used
> +          kfree_rcu() instead.
> +
> +          This is intended for use in combination with KASAN, to enable KASAN to
> +          detect use-after-free accesses in such caches.
> +          (KFENCE is able to do that independent of this flag.)
> +
> +          This might degrade performance.
> +          Unfortunately this also prevents a very specific bug pattern from
> +          triggering (insufficient checks against an object being recycled
> +          within the RCU grace period); so this option can be turned off even on
> +          KASAN builds, in case you want to test for such a bug.
> +
> +          If you're using this for testing bugs / fuzzing and care about
> +          catching all the bugs WAY more than performance, you might want to
> +          also turn on CONFIG_RCU_STRICT_GRACE_PERIOD.
> +
> +          WARNING:
> +          This is designed as a debugging feature, not a security feature.
> +          Objects are sometimes recycled without RCU delay under memory pressure.
> +
> +          If unsure, say N.
> +
>  config PAGE_OWNER
>          bool "Track page owner"
>          depends on DEBUG_KERNEL && STACKTRACE_SUPPORT
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 7c7fc6ce7eb7..d92cb2e9189d 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -238,7 +238,8 @@ static enum free_validation_result check_slab_free(struct kmem_cache *cache,
>  }
> 
>  static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
> -                                      unsigned long ip, bool init)
> +                                      unsigned long ip, bool init,
> +                                      bool after_rcu_delay)
>  {
>          void *tagged_object = object;
>          enum free_validation_result valid = check_slab_free(cache, object, ip);
> @@ -251,7 +252,8 @@ static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
>          object = kasan_reset_tag(object);
> 
>          /* RCU slabs could be legally used after free within the RCU period. */
> -        if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
> +        if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU) &&
> +            !after_rcu_delay)
>                  return false;
> 
>          kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
> @@ -270,7 +272,8 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
>  }
> 
>  bool __kasan_slab_free(struct kmem_cache *cache, void *object,
> -                       unsigned long ip, bool init)
> +                       unsigned long ip, bool init,
> +                       bool after_rcu_delay)
>  {
>          if (is_kfence_address(object))
>                  return false;
> @@ -280,7 +283,7 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object,
>           * freelist. The object will thus never be allocated again and its
>           * metadata will never get released.
>           */
> -        if (poison_slab_object(cache, object, ip, init))
> +        if (poison_slab_object(cache, object, ip, init, after_rcu_delay))
>                  return true;
> 
>          /*
> @@ -535,7 +538,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
>                  return false;
> 
>          slab = folio_slab(folio);
> -        return !poison_slab_object(slab->slab_cache, ptr, ip, false);
> +        return !poison_slab_object(slab->slab_cache, ptr, ip, false, false);
>  }
> 
>  void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 7b32be2a3cf0..cba782a4b072 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -996,6 +996,49 @@ static void kmem_cache_invalid_free(struct kunit *test)
>          kmem_cache_destroy(cache);
>  }
> 
> +static void kmem_cache_rcu_uaf(struct kunit *test)
> +{
> +        char *p;
> +        size_t size = 200;
> +        struct kmem_cache *cache;
> +
> +        KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB_RCU_DEBUG);
> +
> +        cache = kmem_cache_create("test_cache", size, 0, SLAB_TYPESAFE_BY_RCU,
> +                                  NULL);
> +        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
> +
> +        p = kmem_cache_alloc(cache, GFP_KERNEL);
> +        if (!p) {
> +                kunit_err(test, "Allocation failed: %s\n", __func__);
> +                kmem_cache_destroy(cache);
> +                return;
> +        }
> +        *p = 1;
> +
> +        rcu_read_lock();
> +
> +        /* Free the object - this will internally schedule an RCU callback. */
> +        kmem_cache_free(cache, p);
> +
> +        /* We should still be allowed to access the object at this point because
> +         * the cache is SLAB_TYPESAFE_BY_RCU and we've been in an RCU read-side
> +         * critical section since before the kmem_cache_free().
> +         */
> +        READ_ONCE(*p);
> +
> +        rcu_read_unlock();
> +
> +        /* Wait for the RCU callback to execute; after this, the object should
> +         * have actually been freed from KASAN's perspective.
> +         */
> +        rcu_barrier();
> +
> +        KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
> +
> +        kmem_cache_destroy(cache);
> +}
> +
>  static void empty_cache_ctor(void *object) { }
> 
>  static void kmem_cache_double_destroy(struct kunit *test)
> @@ -1937,6 +1980,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
>          KUNIT_CASE(kmem_cache_oob),
>          KUNIT_CASE(kmem_cache_double_free),
>          KUNIT_CASE(kmem_cache_invalid_free),
> +        KUNIT_CASE(kmem_cache_rcu_uaf),
>          KUNIT_CASE(kmem_cache_double_destroy),
>          KUNIT_CASE(kmem_cache_accounted),
>          KUNIT_CASE(kmem_cache_bulk),
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 1560a1546bb1..19511e34017b 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -450,6 +450,18 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
> 
>  static int shutdown_cache(struct kmem_cache *s)
>  {
> +        if (IS_ENABLED(CONFIG_SLUB_RCU_DEBUG) &&
> +            (s->flags & SLAB_TYPESAFE_BY_RCU)) {
> +                /*
> +                 * Under CONFIG_SLUB_RCU_DEBUG, when objects in a
> +                 * SLAB_TYPESAFE_BY_RCU slab are freed, SLUB will internally
> +                 * defer their freeing with call_rcu().
> +                 * Wait for such call_rcu() invocations here before actually
> +                 * destroying the cache.
> +                 */
> +                rcu_barrier();
> +        }
> +
>          /* free asan quarantined objects */
>          kasan_cache_shutdown(s);
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 34724704c52d..f44eec209e3e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2144,15 +2144,26 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
>  }
>  #endif /* CONFIG_MEMCG_KMEM */
> 
> +#ifdef CONFIG_SLUB_RCU_DEBUG
> +static void slab_free_after_rcu_debug(struct rcu_head *rcu_head);
> +
> +struct rcu_delayed_free {
> +        struct rcu_head head;
> +        void *object;
> +};
> +#endif
> +
>  /*
>   * Hooks for other subsystems that check memory allocations. In a typical
>   * production configuration these hooks all should produce no code at all.
>   *
>   * Returns true if freeing of the object can proceed, false if its reuse
> - * was delayed by KASAN quarantine, or it was returned to KFENCE.
> + * was delayed by CONFIG_SLUB_RCU_DEBUG or KASAN quarantine, or it was returned
> + * to KFENCE.
>   */
>  static __always_inline
> -bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
> +bool slab_free_hook(struct kmem_cache *s, void *x, bool init,
> +                    bool after_rcu_delay)
>  {
>          kmemleak_free_recursive(x, s->flags);
>          kmsan_slab_free(s, x);
> @@ -2163,7 +2174,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>                  debug_check_no_obj_freed(x, s->object_size);
> 
>          /* Use KCSAN to help debug racy use-after-free. */
> -        if (!(s->flags & SLAB_TYPESAFE_BY_RCU))
> +        if (!(s->flags & SLAB_TYPESAFE_BY_RCU) || after_rcu_delay)
>                  __kcsan_check_access(x, s->object_size,
>                                       KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
> 
> @@ -2177,6 +2188,28 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>          if (kasan_slab_pre_free(s, x))
>                  return false;
> 
> +#ifdef CONFIG_SLUB_RCU_DEBUG
> +        if ((s->flags & SLAB_TYPESAFE_BY_RCU) && !after_rcu_delay) {
> +                struct rcu_delayed_free *delayed_free;
> +
> +                delayed_free = kmalloc(sizeof(*delayed_free), GFP_NOWAIT);
> +                if (delayed_free) {
> +                        /*
> +                         * Let KASAN track our call stack as a "related work
> +                         * creation", just like if the object had been freed
> +                         * normally via kfree_rcu().
> +                         * We have to do this manually because the rcu_head is
> +                         * not located inside the object.
> +                         */
> +                        kasan_record_aux_stack_noalloc(x);
> +
> +                        delayed_free->object = x;
> +                        call_rcu(&delayed_free->head, slab_free_after_rcu_debug);
> +                        return false;
> +                }
> +        }
> +#endif /* CONFIG_SLUB_RCU_DEBUG */
> +
>          /*
>           * As memory initialization might be integrated into KASAN,
>           * kasan_slab_free and initialization memset's must be
> @@ -2200,7 +2233,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>                                 s->size - inuse - rsize);
>          }
>          /* KASAN might put x into memory quarantine, delaying its reuse. */
> -        return !kasan_slab_free(s, x, init);
> +        return !kasan_slab_free(s, x, init, after_rcu_delay);
>  }
> 
>  static __fastpath_inline
> @@ -2214,7 +2247,7 @@ bool slab_free_freelist_hook(struct kmem_cache *s, void **head, void **tail,
>          bool init;
> 
>          if (is_kfence_address(next)) {
> -                slab_free_hook(s, next, false);
> +                slab_free_hook(s, next, false, false);
>                  return false;
>          }
> 
> @@ -2229,7 +2262,7 @@ bool slab_free_freelist_hook(struct kmem_cache *s, void **head, void **tail,
>                  next = get_freepointer(s, object);
> 
>                  /* If object's reuse doesn't have to be delayed */
> -                if (likely(slab_free_hook(s, object, init))) {
> +                if (likely(slab_free_hook(s, object, init, false))) {
>                          /* Move object to the new freelist */
>                          set_freepointer(s, object, *head);
>                          *head = object;
> @@ -4442,7 +4475,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
>          memcg_slab_free_hook(s, slab, &object, 1);
>          alloc_tagging_slab_free_hook(s, slab, &object, 1);
> 
> -        if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
> +        if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
>                  do_slab_free(s, slab, object, object, 1, addr);
>  }
> 
> @@ -4451,7 +4484,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
>  static noinline
>  void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
>  {
> -        if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
> +        if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
>                  do_slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
>  }
>  #endif
> @@ -4470,6 +4503,33 @@ void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
>          do_slab_free(s, slab, head, tail, cnt, addr);
>  }
> 
> +#ifdef CONFIG_SLUB_RCU_DEBUG
> +static void slab_free_after_rcu_debug(struct rcu_head *rcu_head)
> +{
> +        struct rcu_delayed_free *delayed_free =
> +                container_of(rcu_head, struct rcu_delayed_free, head);
> +        void *object = delayed_free->object;
> +        struct slab *slab = virt_to_slab(object);
> +        struct kmem_cache *s;
> +
> +        if (WARN_ON(is_kfence_address(rcu_head)))
> +                return;
> +
> +        /* find the object and the cache again */
> +        if (WARN_ON(!slab))
> +                return;
> +        s = slab->slab_cache;
> +        if (WARN_ON(!(s->flags & SLAB_TYPESAFE_BY_RCU)))
> +                return;
> +
> +        /* resume freeing */
> +        if (!slab_free_hook(s, object, slab_want_init_on_free(s), true))
> +                return;
> +        do_slab_free(s, slab, object, NULL, 1, _THIS_IP_);
> +        kfree(delayed_free);
> +}
> +#endif /* CONFIG_SLUB_RCU_DEBUG */
> +
>  #ifdef CONFIG_KASAN_GENERIC
>  void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
>  {
> 
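
P.S. For anyone who wants to give the new kmem_cache_rcu_uaf KUnit test a
spin: an untested guess at the relevant config fragment, based on the
options referenced in the patch plus the usual KASAN KUnit test option (the
last entry is optional, per the Kconfig help text above, for when catching
bugs matters far more than speed):

  CONFIG_SLUB_DEBUG=y
  CONFIG_SLUB_RCU_DEBUG=y
  CONFIG_KASAN=y
  CONFIG_KASAN_GENERIC=y
  CONFIG_KUNIT=y
  CONFIG_KASAN_KUNIT_TEST=y
  # optional, as suggested in the SLUB_RCU_DEBUG help text:
  CONFIG_RCU_STRICT_GRACE_PERIOD=y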