From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Feng Tang <feng.tang@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Andrey Konovalov <andreyknvl@gmail.com>,
Marco Elver <elver@google.com>,
Alexander Potapenko <glider@google.com>,
Dmitry Vyukov <dvyukov@google.com>,
Danilo Krummrich <dakr@kernel.org>,
Narasimhan.V@amd.com, linux-mm@kvack.org,
kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for krealloc()
Date: Thu, 14 Nov 2024 22:34:10 +0900
Message-ID: <CAB=+i9QUC+zscxC6AuK9qUaD-Y9VmAv2-Ovqt8JRJJARWxZ-EQ@mail.gmail.com>
In-Reply-To: <20241016154152.1376492-3-feng.tang@intel.com>
On Thu, Oct 17, 2024 at 12:42 AM Feng Tang <feng.tang@intel.com> wrote:
>
> One problem with the current krealloc() is that the caller doesn't pass
> the original request size: say the object comes from a 64-byte kmalloc
> cache, but the caller only requested 48 bytes. Then when krealloc()
> shrinks or grows within the same object, or allocates a new bigger
> object, it lacks this 'original size' information to do accurate data
> preserving or zeroing (when __GFP_ZERO is set).
>
> Thus with slub debug redzone and object tracking enabled, parts of the
> object after krealloc() might contain redzone data instead of zeroes,
> which violates the __GFP_ZERO guarantee. The good news is that in this
> case the kmalloc caches do track this 'orig_size', so solve the problem
> by utilizing 'orig_size' to do accurate data zeroing and preserving.
>
> [Thanks to syzbot and V, Narasimhan for discovering kfence and big
> kmalloc related issues in earlier versions of this patch]
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
> mm/slub.c | 84 +++++++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 60 insertions(+), 24 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 1d348899f7a3..958f7af79fad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4718,34 +4718,66 @@ static __always_inline __realloc_size(2) void *
> __do_krealloc(const void *p, size_t new_size, gfp_t flags)
> {
> void *ret;
> - size_t ks;
> -
> - /* Check for double-free before calling ksize. */
> - if (likely(!ZERO_OR_NULL_PTR(p))) {
> - if (!kasan_check_byte(p))
> - return NULL;
> - ks = ksize(p);
> - } else
> - ks = 0;
> -
> - /* If the object still fits, repoison it precisely. */
> - if (ks >= new_size) {
> - /* Zero out spare memory. */
> - if (want_init_on_alloc(flags)) {
> - kasan_disable_current();
> + size_t ks = 0;
> + int orig_size = 0;
> + struct kmem_cache *s = NULL;
> +
> + /* Check for double-free. */
> + if (unlikely(ZERO_OR_NULL_PTR(p)))
> + goto alloc_new;
nit: I think kasan_check_byte() is the function that checks for double-free?
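Maybe the comment could move next to the call that actually does the
check, something roughly like this (just a sketch, untested):

        if (unlikely(ZERO_OR_NULL_PTR(p)))
                goto alloc_new;

        /* kasan_check_byte() reports the double-free, if any. */
        if (!kasan_check_byte(p))
                return NULL;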
Otherwise looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> + if (!kasan_check_byte(p))
> + return NULL;
> +
> + if (is_kfence_address(p)) {
> + ks = orig_size = kfence_ksize(p);
> + } else {
> + struct folio *folio;
> +
> + folio = virt_to_folio(p);
> + if (unlikely(!folio_test_slab(folio))) {
> + /* Big kmalloc object */
> + WARN_ON(folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE);
> + WARN_ON(p != folio_address(folio));
> + ks = folio_size(folio);
> + } else {
> + s = folio_slab(folio)->slab_cache;
> + orig_size = get_orig_size(s, (void *)p);
> + ks = s->object_size;
> + }
> + }
> +
> + /* If the old object doesn't fit, allocate a bigger one */
> + if (new_size > ks)
> + goto alloc_new;
> +
> + /* Zero out spare memory. */
> + if (want_init_on_alloc(flags)) {
> + kasan_disable_current();
> + if (orig_size && orig_size < new_size)
> + memset((void *)p + orig_size, 0, new_size - orig_size);
> + else
> memset((void *)p + new_size, 0, ks - new_size);
> - kasan_enable_current();
> - }
> + kasan_enable_current();
> + }
>
> - p = kasan_krealloc((void *)p, new_size, flags);
> - return (void *)p;
> + /* Setup kmalloc redzone when needed */
> + if (s && slub_debug_orig_size(s)) {
> + set_orig_size(s, (void *)p, new_size);
> + if (s->flags & SLAB_RED_ZONE && new_size < ks)
> + memset_no_sanitize_memory((void *)p + new_size,
> + SLUB_RED_ACTIVE, ks - new_size);
> }
> + p = kasan_krealloc((void *)p, new_size, flags);
> + return (void *)p;
> +
> +alloc_new:
> ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
> if (ret && p) {
> /* Disable KASAN checks as the object's redzone is accessed. */
> kasan_disable_current();
> - memcpy(ret, kasan_reset_tag(p), ks);
> + memcpy(ret, kasan_reset_tag(p), orig_size ?: ks);
> kasan_enable_current();
> }
>
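If I read the memcpy() change right, when a new object has to be
allocated only the bytes the caller originally asked for are copied, so
the old object's redzone bytes are no longer carried over. A rough
sketch of the intent, assuming slub debug redzone and orig_size tracking
are on and reusing the 48/64 sizes from the changelog:

        void *p, *q;

        p = kmalloc(48, GFP_KERNEL);    /* kmalloc-64 cache, orig_size = 48 */
        q = krealloc(p, 128, GFP_KERNEL | __GFP_ZERO);
        /*
         * q is a fresh zeroed object: bytes 0..47 are copied from p and
         * bytes 48..127 stay zero. Previously the full 64 bytes were
         * copied, so the redzone at 48..63 overwrote the zeroes.
         */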
> @@ -4766,16 +4798,20 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
> * memory allocation is flagged with __GFP_ZERO. Otherwise, it is possible that
> * __GFP_ZERO is not fully honored by this API.
> *
> - * This is the case, since krealloc() only knows about the bucket size of an
> - * allocation (but not the exact size it was allocated with) and hence
> - * implements the following semantics for shrinking and growing buffers with
> - * __GFP_ZERO.
> + * When slub_debug_orig_size() is off, krealloc() only knows about the bucket
> + * size of an allocation (but not the exact size it was allocated with) and
> + * hence implements the following semantics for shrinking and growing buffers
> + * with __GFP_ZERO.
> *
> * new bucket
> * 0 size size
> * |--------|----------------|
> * | keep | zero |
> *
> + * Otherwise, the original allocation size 'orig_size' could be used to
> + * precisely clear the requested size, and the new size will also be stored
> + * as the new 'orig_size'.
> + *
> * In any case, the contents of the object pointed to are preserved up to the
> * lesser of the new and old sizes.
> *
> --
> 2.27.0
>
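And for the in-place case the updated kerneldoc matches what the code
does now, as far as I can tell. Assuming orig_size tracking plus
redzoning are enabled, and again borrowing the 48/64 sizes from the
changelog, the behavior would be roughly:

        p = kmalloc(48, GFP_KERNEL);    /* 64-byte bucket, orig_size = 48 */
        p = krealloc(p, 60, GFP_KERNEL | __GFP_ZERO);
        /*
         * The same object is reused: bytes 0..47 are preserved, bytes
         * 48..59 are zeroed starting from orig_size (not from new_size
         * as before), orig_size is updated to 60, and the spare bytes
         * 60..63 are re-poisoned with SLUB_RED_ACTIVE.
         */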