From: Laura Abbott <labbott@redhat.com>
To: Kees Cook <keescook@chromium.org>, Christoph Lameter <cl@linux.com>
Cc: Daniel Micay <danielmicay@gmail.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Andrew Morton <akpm@linux-foundation.org>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Ingo Molnar <mingo@kernel.org>, Andy Lutomirski <luto@kernel.org>,
Nicolas Pitre <nicolas.pitre@linaro.org>,
Tejun Heo <tj@kernel.org>, Daniel Mack <daniel@zonque.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
Helge Deller <deller@gmx.de>, Rik van Riel <riel@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kernel-hardening@lists.openwall.com
Subject: Re: [PATCH] mm: Add SLUB free list pointer obfuscation
Date: Tue, 20 Jun 2017 11:05:17 -0700
Message-ID: <505961f9-b266-191a-f4b7-931410a55149@redhat.com>
In-Reply-To: <20170620030112.GA140256@beast>

On 06/19/2017 08:01 PM, Kees Cook wrote:
> This SLUB free list pointer obfuscation code is modified from Brad
> Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
> on my understanding of the code. Changes or omissions from the original
> code are mine and don't reflect the original grsecurity/PaX code.
>
> This adds a per-cache random value to SLUB caches that is XORed with
> their freelist pointers. This adds nearly zero overhead and frustrates the
> very common heap overflow exploitation method of overwriting freelist
> pointers. A recent example of the attack is written up here:
> http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit
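
To make the mechanism concrete: every free pointer stored on a slab's
freelist is XORed with a per-cache secret and with the address of the slot
holding it, so an attacker who overwrites a freelist entry without knowing
the secret hands the allocator a garbage pointer. A minimal userspace sketch
of the transform (names here are illustrative, not the kernel's):

/* Illustration only; "secret" stands in for the per-cache s->random. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uintptr_t secret;

/* XOR is its own inverse, so one helper both obfuscates and recovers. */
static uintptr_t xor_fp(uintptr_t ptr, uintptr_t slot_addr)
{
	return ptr ^ secret ^ slot_addr;
}

int main(void)
{
	uintptr_t slot;                  /* pretend freelist slot */
	void *next = malloc(64);         /* pretend "next free object" */

	/* The kernel uses get_random_long(); rand() is just a stand-in. */
	secret = ((uintptr_t)rand() << 16) ^ (uintptr_t)rand();

	slot = xor_fp((uintptr_t)next, (uintptr_t)&slot);
	printf("stored %#lx, recovered %p, real %p\n",
	       (unsigned long)slot, (void *)xor_fp(slot, (uintptr_t)&slot), next);
	return 0;
}

Since XOR is self-inverse, the read path undoes the transform for the cost of
a couple of extra XORs, which is where the "nearly zero overhead" claim comes
from.
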
>
> This is based on patches by Daniel Micay, and refactored to avoid lots
> of #ifdef code.
>
> Suggested-by: Daniel Micay <danielmicay@gmail.com>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> include/linux/slub_def.h | 4 ++++
> init/Kconfig | 10 ++++++++++
> mm/slub.c | 32 +++++++++++++++++++++++++++-----
> 3 files changed, 41 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 07ef550c6627..0258d6d74e9c 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -93,6 +93,10 @@ struct kmem_cache {
> #endif
> #endif
>
> +#ifdef CONFIG_SLAB_HARDENED
> + unsigned long random;
> +#endif
> +
> #ifdef CONFIG_NUMA
> /*
> * Defragmentation by allocating from a remote node.
> diff --git a/init/Kconfig b/init/Kconfig
> index 1d3475fc9496..eb91082546bf 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1900,6 +1900,16 @@ config SLAB_FREELIST_RANDOM
> security feature reduces the predictability of the kernel slab
> allocator against heap overflows.
>
> +config SLAB_HARDENED
> + bool "Harden slab cache infrastructure"
> + default y
> + depends on SLAB_FREELIST_RANDOM && SLUB
> + help
> + Many kernel heap attacks try to target slab cache metadata and
> + other infrastructure. This option makes minor performance
> + sacrifices to harden the kernel slab allocator against common
> + exploit methods.
> +
Going to bikeshed on SLAB_HARDENED unless this is intended to cover more than
the freelist. Perhaps SLAB_FREELIST_HARDENED?

What's the reason for the dependency on SLAB_FREELIST_RANDOM?
> config SLUB_CPU_PARTIAL
> default y
> depends on SLUB && SMP
> diff --git a/mm/slub.c b/mm/slub.c
> index 57e5156f02be..ffede2e0c5c1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -34,6 +34,7 @@
> #include <linux/stacktrace.h>
> #include <linux/prefetch.h>
> #include <linux/memcontrol.h>
> +#include <linux/random.h>
>
> #include <trace/events/kmem.h>
>
> @@ -238,30 +239,50 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
> * Core slab cache functions
> *******************************************************************/
>
> +#ifdef CONFIG_SLAB_HARDENED
> +# define initialize_random(s) \
> + do { \
> + s->random = get_random_long(); \
> + } while (0)
> +# define FREEPTR_VAL(ptr, ptr_addr, s) \
> + (void *)((unsigned long)(ptr) ^ s->random ^ (ptr_addr))
> +#else
> +# define initialize_random(s) do { } while (0)
> +# define FREEPTR_VAL(ptr, addr, s) ((void *)(ptr))
> +#endif
> +#define FREELIST_ENTRY(ptr_addr, s) \
> + FREEPTR_VAL(*(unsigned long *)(ptr_addr), \
> + (unsigned long)ptr_addr, s)
> +
> static inline void *get_freepointer(struct kmem_cache *s, void *object)
> {
> - return *(void **)(object + s->offset);
> + return FREELIST_ENTRY(object + s->offset, s);
> }
>
> static void prefetch_freepointer(const struct kmem_cache *s, void *object)
> {
> - prefetch(object + s->offset);
> + if (object)
> + prefetch(FREELIST_ENTRY(object + s->offset, s));
> }
>
> static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
> {
> + unsigned long freepointer_addr;
> void *p;
>
> if (!debug_pagealloc_enabled())
> return get_freepointer(s, object);
>
> - probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
> - return p;
> + freepointer_addr = (unsigned long)object + s->offset;
> + probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
> + return FREEPTR_VAL(p, freepointer_addr, s);
> }
>
> static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
> {
> - *(void **)(object + s->offset) = fp;
> + unsigned long freeptr_addr = (unsigned long)object + s->offset;
> +
> + *(void **)freeptr_addr = FREEPTR_VAL(fp, freeptr_addr, s);
> }
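
To spell out what the pair above buys: set_freepointer() stores the XORed
value and get_freepointer() undoes the same XOR, so legitimate alloc/free
paths see no difference, while a raw pointer written straight over the slot
(the classic freelist overwrite) decodes to junk. A rough userspace model,
with a toy struct standing in for kmem_cache (not the kernel API):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct toy_cache {
	size_t offset;       /* where the free pointer lives in the object */
	uintptr_t random;    /* per-cache secret, cf. s->random */
};

static void set_fp(struct toy_cache *s, void *object, void *fp)
{
	uintptr_t addr = (uintptr_t)object + s->offset;

	*(uintptr_t *)addr = (uintptr_t)fp ^ s->random ^ addr;
}

static void *get_fp(struct toy_cache *s, void *object)
{
	uintptr_t addr = (uintptr_t)object + s->offset;

	return (void *)(*(uintptr_t *)addr ^ s->random ^ addr);
}

int main(void)
{
	struct toy_cache s = { .offset = 0, .random = 0xcafef00d };
	uintptr_t obj[8], target[8];     /* stand-ins for two slab objects */

	set_fp(&s, obj, target);
	assert(get_fp(&s, obj) == (void *)target);  /* transparent round-trip */

	/* An attacker writing a raw pointer (no secret) gets garbage back. */
	obj[0] = (uintptr_t)target;
	printf("forged entry decodes to %p, attacker wanted %p\n",
	       get_fp(&s, obj), (void *)target);
	return 0;
}

Mixing in the slot address on top of s->random also means the same next
pointer stored in two different slots leaves two different values in memory,
which is what the ptr_addr term in FREEPTR_VAL adds over a plain secret XOR.
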
>
> /* Loop over all objects in a slab */
> @@ -3536,6 +3557,7 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
> {
> s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
> s->reserved = 0;
> + initialize_random(s);
>
> if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
> s->reserved = sizeof(struct rcu_head);
>