From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 26 Sep 2022 10:50:36 -0700
From: Kees Cook
To: Vlastimil Babka
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-mm@kvack.org, "Ruhl, Michael J",
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, "David S. Miller",
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Greg Kroah-Hartman,
	Nick Desaulniers, Alex Elder, Josef Bacik, David Sterba,
	Sumit Semwal, Christian König, Jesse Brandeburg, Daniel Micay,
	Yonghong Song, Marco Elver, Miguel Ojeda,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
	linux-fsdevel@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
	dev@openvswitch.org, x86@kernel.org, llvm@lists.linux.dev,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 02/16] slab: Introduce kmalloc_size_roundup()
Message-ID: <202209261050.560459B@keescook>
References: <20220923202822.2667581-1-keescook@chromium.org>
 <20220923202822.2667581-3-keescook@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Mon, Sep 26, 2022 at 03:15:22PM +0200, Vlastimil Babka wrote:
> On 9/23/22 22:28, Kees Cook wrote:
> > In the effort to help the compiler reason about buffer sizes, the
> > __alloc_size attribute was added to allocators. This improves the scope
> > of the compiler's ability to apply CONFIG_UBSAN_BOUNDS and (in the near
> > future) CONFIG_FORTIFY_SOURCE. For most allocations, this works well,
> > as the vast majority of callers are not expecting to use more memory
> > than what they asked for.
> > 
> > There is, however, one common exception to this: anticipatory resizing
> > of kmalloc allocations. These cases all use ksize() to determine the
> > actual bucket size of a given allocation (e.g. 128 when 126 was asked
> > for). This comes in two styles in the kernel:
> > 
> > 1) An allocation has been determined to be too small, and needs to be
> >    resized. Instead of the caller choosing its own next best size, it
> >    wants to minimize the number of calls to krealloc(), so it just uses
> >    ksize() plus some additional bytes, forcing the realloc into the next
> >    bucket size, from which it can learn how large it is now. For example:
> > 
> > 	data = krealloc(data, ksize(data) + 1, gfp);
> > 	data_len = ksize(data);
> > 
> > 2) The minimum size of an allocation is calculated, but since it may
> >    grow in the future, just use all the space available in the chosen
> >    bucket immediately, to avoid needing to reallocate later. A good
> >    example of this is skbuff's allocators:
> > 
> > 	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
> > 	...
> > 	/* kmalloc(size) might give us more room than requested.
> > 	 * Put skb_shared_info exactly at the end of allocated zone,
> > 	 * to allow max possible filling before reallocation.
> > 	 */
> > 	osize = ksize(data);
> > 	size = SKB_WITH_OVERHEAD(osize);
> > 
> > In both cases, the "how much was actually allocated?" question is answered
> > _after_ the allocation, where the compiler hinting is not in an easy place
> > to make the association any more. This mismatch between the compiler's
> > view of the buffer length and the code's intention about how much it is
> > going to actually use has already caused problems[1]. It is possible to
> > fix this by reordering the use of the "actual size" information.
> > 
> > We can serve the needs of users of ksize() and still have accurate buffer
> > length hinting for the compiler by doing the bucket size calculation
> > _before_ the allocation. Code can instead ask "how large an allocation
> > would I get for a given size?".
> > 
> > Introduce kmalloc_size_roundup(), to serve this function so we can start
> > replacing the "anticipatory resizing" uses of ksize().
> > 
> > [1] https://github.com/ClangBuiltLinux/linux/issues/1599
> >     https://github.com/KSPP/linux/issues/183
> > 
> > Cc: Vlastimil Babka
> > Cc: Christoph Lameter
> > Cc: Pekka Enberg
> > Cc: David Rientjes
> > Cc: Joonsoo Kim
> > Cc: Andrew Morton
> > Cc: linux-mm@kvack.org
> > Signed-off-by: Kees Cook
> 
> OK, added patch 1+2 to slab.git for-next branch.
> Had to adjust this one a bit, see below.
> > ---
> >  include/linux/slab.h | 31 +++++++++++++++++++++++++++++++
> >  mm/slab.c            |  9 ++++++---
> >  mm/slab_common.c     | 20 ++++++++++++++++++++
> >  3 files changed, 57 insertions(+), 3 deletions(-)
> > 
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 41bd036e7551..727640173568 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -188,7 +188,21 @@ void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __r
> >  void kfree(const void *objp);
> >  void kfree_sensitive(const void *objp);
> >  size_t __ksize(const void *objp);
> > +
> > +/**
> > + * ksize - Report actual allocation size of associated object
> > + *
> > + * @objp: Pointer returned from a prior kmalloc()-family allocation.
> > + *
> > + * This should not be used for writing beyond the originally requested
> > + * allocation size. Either use krealloc() or round up the allocation size
> > + * with kmalloc_size_roundup() prior to allocation. If this is used to
> > + * access beyond the originally requested allocation size, UBSAN_BOUNDS
> > + * and/or FORTIFY_SOURCE may trip, since they only know about the
> > + * originally allocated size via the __alloc_size attribute.
> > + */
> >  size_t ksize(const void *objp);
> > +
> >  #ifdef CONFIG_PRINTK
> >  bool kmem_valid_obj(void *object);
> >  void kmem_dump_obj(void *object);
> > @@ -779,6 +793,23 @@ extern void kvfree(const void *addr);
> >  extern void kvfree_sensitive(const void *addr, size_t len);
> >  unsigned int kmem_cache_size(struct kmem_cache *s);
> > +
> > +/**
> > + * kmalloc_size_roundup - Report allocation bucket size for the given size
> > + *
> > + * @size: Number of bytes to round up from.
> > + *
> > + * This returns the number of bytes that would be available in a kmalloc()
> > + * allocation of @size bytes. For example, a 126 byte request would be
> > + * rounded up to the next sized kmalloc bucket, 128 bytes. (This is strictly
> > + * for the general-purpose kmalloc()-based allocations, and is not for the
> > + * pre-sized kmem_cache_alloc()-based allocations.)
> > + *
> > + * Use this to kmalloc() the full bucket size ahead of time instead of using
> > + * ksize() to query the size after an allocation.
> > + */
> > +size_t kmalloc_size_roundup(size_t size);
> > +
> >  void __init kmem_cache_init_late(void);
> > 
> >  #if defined(CONFIG_SMP) && defined(CONFIG_SLAB)
> > 
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 10e96137b44f..2da862bf6226 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -4192,11 +4192,14 @@ void __check_heap_object(const void *ptr, unsigned long n,
> >  #endif /* CONFIG_HARDENED_USERCOPY */
> > 
> >  /**
> > - * __ksize -- Uninstrumented ksize.
> > + * __ksize -- Report full size of underlying allocation
> >   * @objp: pointer to the object
> >   *
> > - * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
> > - * safety checks as ksize() with KASAN instrumentation enabled.
> > + * This should only be used internally to query the true size of allocations.
> > + * It is not meant to be a way to discover the usable size of an allocation
> > + * after the fact. Instead, use kmalloc_size_roundup(). Using memory beyond
> > + * the originally requested allocation size may trigger KASAN, UBSAN_BOUNDS,
> > + * and/or FORTIFY_SOURCE.
> >   *
> >   * Return: size of the actual memory used by @objp in bytes
> >   */
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index 457671ace7eb..d7420cf649f8 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -721,6 +721,26 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
> >  	return kmalloc_caches[kmalloc_type(flags)][index];
> >  }
> > 
> > +size_t kmalloc_size_roundup(size_t size)
> > +{
> > +	struct kmem_cache *c;
> > +
> > +	/* Short-circuit the 0 size case. */
> > +	if (unlikely(size == 0))
> > +		return 0;
> > +	/* Short-circuit saturated "too-large" case. */
> > +	if (unlikely(size == SIZE_MAX))
> > +		return SIZE_MAX;
> > +	/* Above the smaller buckets, size is a multiple of page size. */
> > +	if (size > KMALLOC_MAX_CACHE_SIZE)
> > +		return PAGE_SIZE << get_order(size);
> > +
> > +	/* The flags don't matter since size_index is common to all. */
> > +	c = kmalloc_slab(size, GFP_KERNEL);
> > +	return c ? c->object_size : 0;
> > +}
> > +EXPORT_SYMBOL(kmalloc_size_roundup);
> 
> We need a SLOB version too as it's not yet removed... I added this:
> 
> diff --git a/mm/slob.c b/mm/slob.c
> index 2bd4f476c340..5dbdf6ad8bcc 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -574,6 +574,20 @@ void kfree(const void *block)
>  }
>  EXPORT_SYMBOL(kfree);
> 
> +size_t kmalloc_size_roundup(size_t size)
> +{
> +	/* Short-circuit the 0 size case. */
> +	if (unlikely(size == 0))
> +		return 0;
> +	/* Short-circuit saturated "too-large" case. */
> +	if (unlikely(size == SIZE_MAX))
> +		return SIZE_MAX;
> +
> +	return ALIGN(size, ARCH_KMALLOC_MINALIGN);
> +}
> +
> +EXPORT_SYMBOL(kmalloc_size_roundup);

Ah, perfect! Thanks for catching that. :) FWIW:

Reviewed-by: Kees Cook

-- 
Kees Cook