Date: Wed, 22 Sep 2021 15:41:19 -0700
From: Kees Cook
To: Nick Desaulniers
Cc: linux-kernel@vger.kernel.org, Daniel Micay, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
	linux-mm@kvack.org, Joe Perches, Miguel Ojeda, Nathan Chancellor,
	Andy Whitcroft, Dwaipayan Ray, Lukas Bulwahn, Dennis Zhou, Tejun Heo,
	Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com,
	linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 4/7] slab: Add __alloc_size attributes for better bounds checking
Message-ID: <202109211608.B9B6DEE@keescook>
References: <20210818214021.2476230-1-keescook@chromium.org>
	<20210818214021.2476230-5-keescook@chromium.org>

On Wed, Aug 25, 2021 at 02:31:34PM -0700, Nick Desaulniers wrote:
> On Wed, Aug 18, 2021 at 2:40 PM Kees Cook wrote:
> >
> > As already done in GrapheneOS, add the __alloc_size attribute for
> > regular kmalloc interfaces, to provide additional hinting for better
> > bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
> > optimizations.
> >
> > Co-developed-by: Daniel Micay
> > Signed-off-by: Daniel Micay
> > Cc: Christoph Lameter
> > Cc: Pekka Enberg
> > Cc: David Rientjes
> > Cc: Joonsoo Kim
> > Cc: Andrew Morton
> > Cc: Vlastimil Babka
> > Cc: linux-mm@kvack.org
> > Signed-off-by: Kees Cook
>
> This is a good start, so
> Reviewed-by: Nick Desaulniers

Thanks!

> Do we also want to attribute:
> * __kmalloc_index

This is just the bucketizer (it returns "int" for the kmalloc bucket).

> * kmem_cache_free_bulk

Not an allocator.

> * kmem_cache_alloc_bulk

This allocates a list of pointers, where "size" is the length of the list.

> * kmalloc_order
> * kmalloc_order_trace
> * kmalloc_large

Yes, these should be marked, good point.

> * kmalloc_node

This was already marked.

> * kmem_cache_alloc_trace
> * __kmalloc_track_caller
> * __kmalloc_node_track_caller

Yeah, these might get passed through in LTO situations. I'll add them.

> * kmalloc_array_node

I'll add this -- I thought it was already here but it got missed.

Thanks!

-Kees

>
> > ---
> >  include/linux/slab.h | 20 ++++++++++++++++++--
> >  1 file changed, 18 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 10fd0a8c816a..6ce826d8194d 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -181,7 +181,7 @@ int kmem_cache_shrink(struct kmem_cache *s);
> >  /*
> >   * Common kmalloc functions provided by all allocators
> >   */
> > -__must_check
> > +__must_check __alloc_size(2)
> >  void *krealloc(const void *objp, size_t new_size, gfp_t flags);
> >  void kfree(const void *objp);
> >  void kfree_sensitive(const void *objp);
> > @@ -426,6 +426,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
> >  #define kmalloc_index(s) __kmalloc_index(s, true)
> >  #endif /* !CONFIG_SLOB */
> >
> > +__alloc_size(1)
> >  void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc;
> >  void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_kmalloc_alignment __malloc;
> >  void kmem_cache_free(struct kmem_cache *s, void *objp);
> > @@ -450,6 +451,7 @@ static __always_inline void kfree_bulk(size_t size, void **p)
> >  }
> >
> >  #ifdef CONFIG_NUMA
> > +__alloc_size(1)
> >  void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_slab_alignment __malloc;
> >  void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
> >  	__assume_slab_alignment __malloc;
> > @@ -574,6 +576,7 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
> >   * Try really hard to succeed the allocation but fail
> >   * eventually.
> >   */
> > +__alloc_size(1)
> >  static __always_inline void *kmalloc(size_t size, gfp_t flags)
> >  {
> >  	if (__builtin_constant_p(size)) {
> > @@ -596,6 +599,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
> >  	return __kmalloc(size, flags);
> >  }
> >
> > +__alloc_size(1)
> >  static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
> >  {
> >  #ifndef CONFIG_SLOB
> > @@ -620,6 +624,7 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
> >   * @size: element size.
> >   * @flags: the type of memory to allocate (see kmalloc).
> >   */
> > +__alloc_size(1, 2)
> >  static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
> >  {
> >  	size_t bytes;
> > @@ -638,7 +643,7 @@ static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
> >   * @new_size: new size of a single member of the array
> >   * @flags: the type of memory to allocate (see kmalloc)
> >   */
> > -__must_check
> > +__must_check __alloc_size(2, 3)
> >  static inline void *krealloc_array(void *p, size_t new_n, size_t new_size,
> >  				   gfp_t flags)
> >  {
> > @@ -656,6 +661,7 @@ static inline void *krealloc_array(void *p, size_t new_n, size_t new_size,
> >   * @size: element size.
> >   * @flags: the type of memory to allocate (see kmalloc).
> >   */
> > +__alloc_size(1, 2)
> >  static inline void *kcalloc(size_t n, size_t size, gfp_t flags)
> >  {
> >  	return kmalloc_array(n, size, flags | __GFP_ZERO);
> > @@ -685,6 +691,7 @@ static inline void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
> >  	return __kmalloc_node(bytes, flags, node);
> >  }
> >
> > +__alloc_size(1, 2)
> >  static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
> >  {
> >  	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
> > @@ -718,6 +725,7 @@ static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags)
> >   * @size: how many bytes of memory are required.
> >   * @flags: the type of memory to allocate (see kmalloc).
> >   */
> > +__alloc_size(1)
> >  static inline void *kzalloc(size_t size, gfp_t flags)
> >  {
> >  	return kmalloc(size, flags | __GFP_ZERO);
> > @@ -729,25 +737,31 @@ static inline void *kzalloc(size_t size, gfp_t flags)
> >   * @flags: the type of memory to allocate (see kmalloc).
> >   * @node: memory node from which to allocate
> >   */
> > +__alloc_size(1)
> >  static inline void *kzalloc_node(size_t size, gfp_t flags, int node)
> >  {
> >  	return kmalloc_node(size, flags | __GFP_ZERO, node);
> >  }
> >
> > +__alloc_size(1)
> >  extern void *kvmalloc_node(size_t size, gfp_t flags, int node);
> > +__alloc_size(1)
> >  static inline void *kvmalloc(size_t size, gfp_t flags)
> >  {
> >  	return kvmalloc_node(size, flags, NUMA_NO_NODE);
> >  }
> > +__alloc_size(1)
> >  static inline void *kvzalloc_node(size_t size, gfp_t flags, int node)
> >  {
> >  	return kvmalloc_node(size, flags | __GFP_ZERO, node);
> >  }
> > +__alloc_size(1)
> >  static inline void *kvzalloc(size_t size, gfp_t flags)
> >  {
> >  	return kvmalloc(size, flags | __GFP_ZERO);
> >  }
> >
> > +__alloc_size(1, 2)
> >  static inline void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
> >  {
> >  	size_t bytes;
> > @@ -758,11 +772,13 @@ static inline void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
> >  	return kvmalloc(bytes, flags);
> >  }
> >
> > +__alloc_size(1, 2)
> >  static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
> >  {
> >  	return kvmalloc_array(n, size, flags | __GFP_ZERO);
> >  }
> >
> > +__alloc_size(3)
> >  extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize,
> >  		       gfp_t flags);
> >  extern void kvfree(const void *addr);
> > --
>
> --
> Thanks,
> ~Nick Desaulniers

-- 
Kees Cook