linux-mm.kvack.org archive mirror
* [PATCH v4 0/6] slab: Introduce dedicated bucket allocator
@ 2024-05-31 19:14 Kees Cook
  2024-05-31 19:14 ` [PATCH v4 1/6] mm/slab: Introduce kmem_buckets typedef Kees Cook
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Kees Cook @ 2024-05-31 19:14 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, GONG, Ruiqi, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, jvoisin, Andrew Morton,
	Roman Gushchin, Hyeonggon Yoo, Xiu Jianfeng, Suren Baghdasaryan,
	Kent Overstreet, Jann Horn, Matteo Rizzo, Thomas Graf,
	Herbert Xu, linux-kernel, linux-mm, linux-hardening, netdev

Hi,

 v4:
  - Rebase to v6.10-rc1
  - Add CONFIG_SLAB_BUCKETS so the feature can be disabled
 v3: https://lore.kernel.org/lkml/20240424213019.make.366-kees@kernel.org/
 v2: https://lore.kernel.org/lkml/20240305100933.it.923-kees@kernel.org/
 v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@kernel.org/

For the cover letter, I'm repeating the commit log for patch 4 here, which has
additional clarifications and rationale since v2:

    Dedicated caches are available for fixed size allocations via
    kmem_cache_alloc(), but for dynamically sized allocations there is only
    the global kmalloc API's set of buckets available. This means it isn't
    possible to separate specific sets of dynamically sized allocations into
    a separate collection of caches.
    
    This leads to a use-after-free exploitation weakness in the Linux
    kernel since many heap memory spraying/grooming attacks depend on using
    userspace-controllable dynamically sized allocations to collide with
    fixed size allocations that end up in same cache.
    
    While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
    against these kinds of "type confusion" attacks, including for fixed
    same-size heap objects, we can create a complementary deterministic
    defense for dynamically sized allocations that are directly user
    controlled. Addressing these cases is limited in scope, so isolating these
    kinds of interfaces will not become an unbounded game of whack-a-mole. For
    example, many pass through memdup_user(), making isolation there very
    effective.
    
    In order to isolate user-controllable dynamically-sized allocations from
    the common system kmalloc allocations, introduce kmem_buckets_create(),
    which behaves like kmem_cache_create(). Introduce kmem_buckets_alloc(),
    which behaves like kmem_cache_alloc(). Introduce
    kmem_buckets_alloc_track_caller() for where caller tracking is needed.
    Introduce kmem_buckets_valloc() for cases where vmalloc fallback is needed.
    
    This allows for confining allocations to a dedicated set of sized caches
    (which have the same layout as the kmalloc caches).
    
    This can also be used in the future to extend codetag allocation
    annotations to implement per-caller allocation cache isolation[1] even
    for dynamic allocations.
    
    Memory allocation pinning[2] is still needed to plug the Use-After-Free
    cross-allocator weakness, but that is an existing and separate issue
    which is complementary to this improvement. Development continues for
    that feature via the SLAB_VIRTUAL[3] series (which could also provide
    guard pages -- another complementary improvement).
    
    Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1]
    Link: https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html [2]
    Link: https://lore.kernel.org/lkml/20230915105933.495735-1-matteorizzo@google.com/ [3]
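
To make the new API shape concrete, here is a minimal usage sketch, modeled on
the msg and memdup_user patches later in the series (the "foo" names and the
chosen kmem_buckets_create() arguments are purely illustrative):

    static kmem_buckets *foo_buckets __ro_after_init;

    static int __init init_foo_buckets(void)
    {
            /* name, align, slab flags, useroffset, usersize, ctor */
            foo_buckets = kmem_buckets_create("foo", 0, 0, 0, INT_MAX, NULL);
            return 0;
    }
    subsys_initcall(init_foo_buckets);

    /* Allocations then come from the separate "foo-<size>" caches: */
    void *p = kmem_buckets_alloc(foo_buckets, len, GFP_KERNEL);

    /* ... or, with vmalloc fallback for large sizes: */
    void *q = kmem_buckets_valloc(foo_buckets, len, GFP_KERNEL);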

After the core implementation are 2 patches that cover the most heavily
abused "repeat offenders" used in exploits. Repeating those details here:

    The msg subsystem is a common target for exploiting[1][2][3][4][5][6][7]
    use-after-free type confusion flaws in the kernel for both read and
    write primitives. Avoid having a user-controlled dynamically-sized
    allocation share the global kmalloc caches by using a separate set of
    kmalloc buckets.
    
    Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1]
    Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2]
    Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3]
    Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4]
    Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5]
    Link: https://zplin.me/papers/ELOISE.pdf [6]
    Link: https://syst3mfailure.io/wall-of-perdition/ [7]

    Both memdup_user() and vmemdup_user() handle allocations that are
    regularly used for exploiting use-after-free type confusion flaws in
    the kernel (e.g. prctl() PR_SET_VMA_ANON_NAME[1] and setxattr[2][3][4]
    respectively).
    
    Since both are designed for contents coming from userspace, they allow
    for userspace-controlled allocation sizes. Use a dedicated set of kmalloc
    buckets so these allocations do not share caches with the global kmalloc
    buckets.
    
    Link: https://starlabs.sg/blog/2023/07-prctl-anon_vma_name-an-amusing-heap-spray/ [1]
    Link: https://duasynt.com/blog/linux-kernel-heap-spray [2]
    Link: https://etenal.me/archives/1336 [3]
    Link: https://github.com/a13xp0p0v/kernel-hack-drill/blob/master/drill_exploit_uaf.c [4]

Thanks!

-Kees


Kees Cook (6):
  mm/slab: Introduce kmem_buckets typedef
  mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
  mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets
    argument
  mm/slab: Introduce kmem_buckets_create() and family
  ipc, msg: Use dedicated slab buckets for alloc_msg()
  mm/util: Use dedicated slab buckets for memdup_user()

 include/linux/slab.h | 70 ++++++++++++++++++++++++++++-------
 ipc/msgutil.c        | 13 ++++++-
 lib/rhashtable.c     |  2 +-
 mm/Kconfig           | 15 ++++++++
 mm/slab.h            |  6 ++-
 mm/slab_common.c     | 87 ++++++++++++++++++++++++++++++++++++++++++--
 mm/slub.c            | 34 ++++++++++++-----
 mm/util.c            | 29 +++++++++++----
 8 files changed, 217 insertions(+), 39 deletions(-)

-- 
2.34.1




* [PATCH v4 1/6] mm/slab: Introduce kmem_buckets typedef
  2024-05-31 19:14 [PATCH v4 0/6] slab: Introduce dedicated bucket allocator Kees Cook
@ 2024-05-31 19:14 ` Kees Cook
  2024-05-31 19:14 ` [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node() Kees Cook
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Kees Cook @ 2024-05-31 19:14 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, jvoisin, Andrew Morton, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, GONG, Ruiqi, Xiu Jianfeng,
	Suren Baghdasaryan, Kent Overstreet, Jann Horn, Matteo Rizzo,
	Thomas Graf, Herbert Xu, linux-kernel, linux-hardening, netdev

Encapsulate the concept of a single set of kmem_caches that are used
for the kmalloc size buckets. Redefine kmalloc_caches as an array
of these bucket sets (one per global kmalloc cache type).
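
For illustration (sketch only, not part of the diff), a whole bucket set can
then be referenced and indexed through a single pointer:

    kmem_buckets *b = &kmalloc_caches[KMALLOC_NORMAL];
    struct kmem_cache *s = (*b)[index]; /* == kmalloc_caches[KMALLOC_NORMAL][index] */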

Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: jvoisin <julien.voisin@dustri.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org
---
 include/linux/slab.h | 5 +++--
 mm/slab_common.c     | 3 +--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 7247e217e21b..de2b7209cd05 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -426,8 +426,9 @@ enum kmalloc_cache_type {
 	NR_KMALLOC_TYPES
 };
 
-extern struct kmem_cache *
-kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
+typedef struct kmem_cache * kmem_buckets[KMALLOC_SHIFT_HIGH + 1];
+
+extern kmem_buckets kmalloc_caches[NR_KMALLOC_TYPES];
 
 /*
  * Define gfp bits that should not be set for KMALLOC_NORMAL.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1560a1546bb1..e0b1c109bed2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -653,8 +653,7 @@ static struct kmem_cache *__init create_kmalloc_cache(const char *name,
 	return s;
 }
 
-struct kmem_cache *
-kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1] __ro_after_init =
+kmem_buckets kmalloc_caches[NR_KMALLOC_TYPES] __ro_after_init =
 { /* initialization for https://llvm.org/pr42570 */ };
 EXPORT_SYMBOL(kmalloc_caches);
 
-- 
2.34.1




* [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
  2024-05-31 19:14 [PATCH v4 0/6] slab: Introduce dedicated bucket allocator Kees Cook
  2024-05-31 19:14 ` [PATCH v4 1/6] mm/slab: Introduce kmem_buckets typedef Kees Cook
@ 2024-05-31 19:14 ` Kees Cook
  2024-06-03 17:06   ` Vlastimil Babka
  2024-05-31 19:14 ` [PATCH v4 3/6] mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument Kees Cook
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Kees Cook @ 2024-05-31 19:14 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, jvoisin, Andrew Morton, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, linux-hardening, GONG, Ruiqi,
	Xiu Jianfeng, Suren Baghdasaryan, Kent Overstreet, Jann Horn,
	Matteo Rizzo, Thomas Graf, Herbert Xu, linux-kernel, netdev

Introduce CONFIG_SLAB_BUCKETS which provides the infrastructure to
support separated kmalloc buckets (in the following kmem_buckets_create()
patches and future codetag-based separation). Since this will provide
a mitigation for a very common case of exploits, enable it by default.

To be able to choose which buckets to allocate from, make the buckets
available to the internal kmalloc interfaces by adding them as the
first argument, rather than depending on the buckets being chosen from
the fixed set of global buckets. Where the bucket is not available,
pass NULL, which means "use the default system kmalloc bucket set"
(the prior existing behavior), as implemented in kmalloc_slab().

To avoid adding the extra argument when !CONFIG_SLAB_BUCKETS, only the
top-level macros and static inlines use the buckets argument (where
they are stripped out and compiled out respectively). The actual extern
functions can then be built without the argument, and the internals
fall back to the global kmalloc buckets unconditionally.

Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: jvoisin <julien.voisin@dustri.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org
Cc: linux-hardening@vger.kernel.org
---
 include/linux/slab.h | 34 ++++++++++++++++++++++++++--------
 mm/Kconfig           | 15 +++++++++++++++
 mm/slab.h            |  6 ++++--
 mm/slab_common.c     |  4 ++--
 mm/slub.c            | 34 ++++++++++++++++++++++++----------
 mm/util.c            |  2 +-
 6 files changed, 72 insertions(+), 23 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index de2b7209cd05..b1165b22cc6f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -569,8 +569,17 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
-							 __alloc_size(1);
+#ifdef CONFIG_SLAB_BUCKETS
+void *__kmalloc_buckets_node_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node)
+				__assume_kmalloc_alignment __alloc_size(2);
+# define __kmalloc_node_noprof(b, size, flags, node)	\
+	__kmalloc_buckets_node_noprof(b, size, flags, node)
+#else
+void *__kmalloc_buckets_node_noprof(size_t size, gfp_t flags, int node)
+				__assume_kmalloc_alignment __alloc_size(1);
+# define __kmalloc_node_noprof(b, size, flags, node)	\
+	__kmalloc_buckets_node_noprof(size, flags, node)
+#endif
 #define __kmalloc_node(...)			alloc_hooks(__kmalloc_node_noprof(__VA_ARGS__))
 
 void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags,
@@ -679,7 +688,7 @@ static __always_inline __alloc_size(1) void *kmalloc_node_noprof(size_t size, gf
 				kmalloc_caches[kmalloc_type(flags, _RET_IP_)][index],
 				flags, node, size);
 	}
-	return __kmalloc_node_noprof(size, flags, node);
+	return __kmalloc_node_noprof(NULL, size, flags, node);
 }
 #define kmalloc_node(...)			alloc_hooks(kmalloc_node_noprof(__VA_ARGS__))
 
@@ -730,10 +739,19 @@ static inline __realloc_size(2, 3) void * __must_check krealloc_array_noprof(voi
  */
 #define kcalloc(n, size, flags)		kmalloc_array(n, size, (flags) | __GFP_ZERO)
 
-void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
-				  unsigned long caller) __alloc_size(1);
+#ifdef CONFIG_SLAB_BUCKETS
+void *__kmalloc_node_track_caller_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node,
+					 unsigned long caller) __alloc_size(2);
+# define kmalloc_node_track_caller_noprof(b, size, flags, node, caller)	\
+	__kmalloc_node_track_caller_noprof(b, size, flags, node, caller)
+#else
+void *__kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
+					 unsigned long caller) __alloc_size(1);
+# define kmalloc_node_track_caller_noprof(b, size, flags, node, caller)	\
+	__kmalloc_node_track_caller_noprof(size, flags, node, caller)
+#endif
 #define kmalloc_node_track_caller(...)		\
-	alloc_hooks(kmalloc_node_track_caller_noprof(__VA_ARGS__, _RET_IP_))
+	alloc_hooks(kmalloc_node_track_caller_noprof(NULL, __VA_ARGS__, _RET_IP_))
 
 /*
  * kmalloc_track_caller is a special version of kmalloc that records the
@@ -746,7 +764,7 @@ void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
 #define kmalloc_track_caller(...)		kmalloc_node_track_caller(__VA_ARGS__, NUMA_NO_NODE)
 
 #define kmalloc_track_caller_noprof(...)	\
-		kmalloc_node_track_caller_noprof(__VA_ARGS__, NUMA_NO_NODE, _RET_IP_)
+		kmalloc_node_track_caller_noprof(NULL, __VA_ARGS__, NUMA_NO_NODE, _RET_IP_)
 
 static inline __alloc_size(1, 2) void *kmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags,
 							  int node)
@@ -757,7 +775,7 @@ static inline __alloc_size(1, 2) void *kmalloc_array_node_noprof(size_t n, size_
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
 		return kmalloc_node_noprof(bytes, flags, node);
-	return __kmalloc_node_noprof(bytes, flags, node);
+	return __kmalloc_node_noprof(NULL, bytes, flags, node);
 }
 #define kmalloc_array_node(...)			alloc_hooks(kmalloc_array_node_noprof(__VA_ARGS__))
 
diff --git a/mm/Kconfig b/mm/Kconfig
index b4cb45255a54..8c29af7835cc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -273,6 +273,21 @@ config SLAB_FREELIST_HARDENED
 	  sacrifices to harden the kernel slab allocator against common
 	  freelist exploit methods.
 
+config SLAB_BUCKETS
+	bool "Support allocation from separate kmalloc buckets"
+	default y
+	depends on !SLUB_TINY
+	help
+	  Kernel heap attacks frequently depend on being able to create
+	  specifically-sized allocations with user-controlled contents
+	  that will be allocated into the same kmalloc bucket as a
+	  target object. To avoid sharing these allocation buckets,
+	  provide an explicitly separated set of buckets to be used for
+	  user-controlled allocations. This may very slightly increase
+	  memory fragmentation, though in practice it's only a handful
+	  of extra pages since the bulk of user-controlled allocations
+	  are relatively long-lived.
+
 config SLUB_STATS
 	default n
 	bool "Enable performance statistics"
diff --git a/mm/slab.h b/mm/slab.h
index 5f8f47c5bee0..f459cd338852 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -403,16 +403,18 @@ static inline unsigned int size_index_elem(unsigned int bytes)
  * KMALLOC_MAX_CACHE_SIZE and the caller must check that.
  */
 static inline struct kmem_cache *
-kmalloc_slab(size_t size, gfp_t flags, unsigned long caller)
+kmalloc_slab(kmem_buckets *b, size_t size, gfp_t flags, unsigned long caller)
 {
 	unsigned int index;
 
+	if (!b)
+		b = &kmalloc_caches[kmalloc_type(flags, caller)];
 	if (size <= 192)
 		index = kmalloc_size_index[size_index_elem(size)];
 	else
 		index = fls(size - 1);
 
-	return kmalloc_caches[kmalloc_type(flags, caller)][index];
+	return (*b)[index];
 }
 
 gfp_t kmalloc_fix_flags(gfp_t flags);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e0b1c109bed2..b5c879fa66bc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -702,7 +702,7 @@ size_t kmalloc_size_roundup(size_t size)
 		 * The flags don't matter since size_index is common to all.
 		 * Neither does the caller for just getting ->object_size.
 		 */
-		return kmalloc_slab(size, GFP_KERNEL, 0)->object_size;
+		return kmalloc_slab(NULL, size, GFP_KERNEL, 0)->object_size;
 	}
 
 	/* Above the smaller buckets, size is a multiple of page size. */
@@ -1179,7 +1179,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 		return (void *)p;
 	}
 
-	ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
+	ret = kmalloc_node_track_caller_noprof(NULL, new_size, flags, NUMA_NO_NODE, _RET_IP_);
 	if (ret && p) {
 		/* Disable KASAN checks as the object's redzone is accessed. */
 		kasan_disable_current();
diff --git a/mm/slub.c b/mm/slub.c
index 0809760cf789..ec682a325abe 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4099,7 +4099,7 @@ void *kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(kmalloc_large_node_noprof);
 
 static __always_inline
-void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
+void *__do_kmalloc_node(kmem_buckets *b, size_t size, gfp_t flags, int node,
 			unsigned long caller)
 {
 	struct kmem_cache *s;
@@ -4115,7 +4115,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
 	if (unlikely(!size))
 		return ZERO_SIZE_PTR;
 
-	s = kmalloc_slab(size, flags, caller);
+	s = kmalloc_slab(b, size, flags, caller);
 
 	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
 	ret = kasan_kmalloc(s, ret, size, flags);
@@ -4123,24 +4123,38 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
 	return ret;
 }
 
-void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node)
+#ifdef CONFIG_SLAB_BUCKETS
+# define __do_kmalloc_buckets_node(b, size, flags, node, caller)	\
+	__do_kmalloc_node(b, size, flags, node, caller)
+void *__kmalloc_buckets_node_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node)
+#else
+# define __do_kmalloc_buckets_node(b, size, flags, node, caller)	\
+	__do_kmalloc_node(NULL, size, flags, node, caller)
+void *__kmalloc_buckets_node_noprof(size_t size, gfp_t flags, int node)
+#endif
 {
-	return __do_kmalloc_node(size, flags, node, _RET_IP_);
+	return __do_kmalloc_buckets_node(b, size, flags, node, _RET_IP_);
 }
-EXPORT_SYMBOL(__kmalloc_node_noprof);
+EXPORT_SYMBOL(__kmalloc_buckets_node_noprof);
 
 void *__kmalloc_noprof(size_t size, gfp_t flags)
 {
-	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
+	return __do_kmalloc_buckets_node(NULL, size, flags, NUMA_NO_NODE, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc_noprof);
 
-void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags,
-				       int node, unsigned long caller)
+#ifdef CONFIG_SLAB_BUCKETS
+void *__kmalloc_node_track_caller_noprof(kmem_buckets *b, size_t size, gfp_t flags,
+					 int node, unsigned long caller)
+#else
+void *__kmalloc_node_track_caller_noprof(size_t size, gfp_t flags,
+					 int node, unsigned long caller)
+#endif
 {
-	return __do_kmalloc_node(size, flags, node, caller);
+	return __do_kmalloc_buckets_node(b, size, flags, node, caller);
+
 }
-EXPORT_SYMBOL(kmalloc_node_track_caller_noprof);
+EXPORT_SYMBOL(__kmalloc_node_track_caller_noprof);
 
 void *kmalloc_trace_noprof(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
diff --git a/mm/util.c b/mm/util.c
index c9e519e6811f..80430e5ba981 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -128,7 +128,7 @@ void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp)
 {
 	void *p;
 
-	p = kmalloc_node_track_caller_noprof(len, gfp, NUMA_NO_NODE, _RET_IP_);
+	p = kmalloc_node_track_caller_noprof(NULL, len, gfp, NUMA_NO_NODE, _RET_IP_);
 	if (p)
 		memcpy(p, src, len);
 	return p;
-- 
2.34.1




* [PATCH v4 3/6] mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument
  2024-05-31 19:14 [PATCH v4 0/6] slab: Introduce dedicated bucket allocator Kees Cook
  2024-05-31 19:14 ` [PATCH v4 1/6] mm/slab: Introduce kmem_buckets typedef Kees Cook
  2024-05-31 19:14 ` [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node() Kees Cook
@ 2024-05-31 19:14 ` Kees Cook
  2024-05-31 19:14 ` [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family Kees Cook
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Kees Cook @ 2024-05-31 19:14 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, jvoisin, Andrew Morton, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, GONG, Ruiqi, Xiu Jianfeng,
	Suren Baghdasaryan, Kent Overstreet, Jann Horn, Matteo Rizzo,
	Thomas Graf, Herbert Xu, linux-kernel, linux-hardening, netdev

Plumb kmem_buckets arguments through kvmalloc_node_noprof() so it is
possible to provide an API to perform kvmalloc-style allocations with
a particular set of buckets. Introduce kvmalloc_buckets_node() that takes a
kmem_buckets argument.
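
For example (sketch only; "b" is whatever kmem_buckets pointer the caller
holds, or NULL for the default global buckets):

    void *p = kvmalloc_buckets_node(b, size, GFP_KERNEL, NUMA_NO_NODE);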

Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: jvoisin <julien.voisin@dustri.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org
---
 include/linux/slab.h | 19 +++++++++++++++----
 lib/rhashtable.c     |  2 +-
 mm/util.c            | 13 +++++++++----
 3 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index b1165b22cc6f..8853c6eb20b4 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -799,11 +799,22 @@ static inline __alloc_size(1) void *kzalloc_noprof(size_t size, gfp_t flags)
 #define kzalloc(...)				alloc_hooks(kzalloc_noprof(__VA_ARGS__))
 #define kzalloc_node(_size, _flags, _node)	kmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
 
-extern void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node) __alloc_size(1);
-#define kvmalloc_node(...)			alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))
+#ifdef CONFIG_SLAB_BUCKETS
+extern void *kvmalloc_buckets_node_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node)
+					__alloc_size(2);
+# define kvmalloc_node_noprof(b, size, flags, node)	\
+	kvmalloc_buckets_node_noprof(b, size, flags, node)
+#else
+extern void *kvmalloc_buckets_node_noprof(size_t size, gfp_t flags, int node)
+					__alloc_size(1);
+# define kvmalloc_node_noprof(b, size, flags, node)	\
+	kvmalloc_buckets_node_noprof(size, flags, node)
+#endif
+#define kvmalloc_buckets_node(...)		alloc_hooks(kvmalloc_node_noprof(__VA_ARGS__))
+#define kvmalloc_node(...)			kvmalloc_buckets_node(NULL, __VA_ARGS__)
 
 #define kvmalloc(_size, _flags)			kvmalloc_node(_size, _flags, NUMA_NO_NODE)
-#define kvmalloc_noprof(_size, _flags)		kvmalloc_node_noprof(_size, _flags, NUMA_NO_NODE)
+#define kvmalloc_noprof(_size, _flags)		kvmalloc_node_noprof(NULL, _size, _flags, NUMA_NO_NODE)
 #define kvzalloc(_size, _flags)			kvmalloc(_size, (_flags)|__GFP_ZERO)
 
 #define kvzalloc_node(_size, _flags, _node)	kvmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
@@ -816,7 +827,7 @@ kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 
-	return kvmalloc_node_noprof(bytes, flags, node);
+	return kvmalloc_node_noprof(NULL, bytes, flags, node);
 }
 
 #define kvmalloc_array_noprof(...)		kvmalloc_array_node_noprof(__VA_ARGS__, NUMA_NO_NODE)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index dbbed19f8fff..ef0f496e4aed 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -184,7 +184,7 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 	static struct lock_class_key __key;
 
 	tbl = alloc_hooks_tag(ht->alloc_tag,
-			kvmalloc_node_noprof(struct_size(tbl, buckets, nbuckets),
+			kvmalloc_node_noprof(NULL, struct_size(tbl, buckets, nbuckets),
 					     gfp|__GFP_ZERO, NUMA_NO_NODE));
 
 	size = nbuckets;
diff --git a/mm/util.c b/mm/util.c
index 80430e5ba981..53f7fc5912bd 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -593,9 +593,11 @@ unsigned long vm_mmap(struct file *file, unsigned long addr,
 }
 EXPORT_SYMBOL(vm_mmap);
 
+#ifdef CONFIG_SLAB_BUCKETS
 /**
- * kvmalloc_node - attempt to allocate physically contiguous memory, but upon
+ * kvmalloc_buckets_node_noprof - attempt to allocate physically contiguous memory, but upon
  * failure, fall back to non-contiguous (vmalloc) allocation.
+ * @b: which set of kmalloc buckets to allocate from.
  * @size: size of the request.
  * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
  * @node: numa node to allocate from
@@ -609,7 +611,10 @@ EXPORT_SYMBOL(vm_mmap);
  *
  * Return: pointer to the allocated memory of %NULL in case of failure
  */
-void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node)
+void *kvmalloc_buckets_node_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node)
+#else
+void *kvmalloc_buckets_node_noprof(size_t size, gfp_t flags, int node)
+#endif
 {
 	gfp_t kmalloc_flags = flags;
 	void *ret;
@@ -631,7 +636,7 @@ void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node)
 		kmalloc_flags &= ~__GFP_NOFAIL;
 	}
 
-	ret = kmalloc_node_noprof(size, kmalloc_flags, node);
+	ret = __kmalloc_node_noprof(b, size, kmalloc_flags, node);
 
 	/*
 	 * It doesn't really make sense to fallback to vmalloc for sub page
@@ -660,7 +665,7 @@ void *kvmalloc_node_noprof(size_t size, gfp_t flags, int node)
 			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
 			node, __builtin_return_address(0));
 }
-EXPORT_SYMBOL(kvmalloc_node_noprof);
+EXPORT_SYMBOL(kvmalloc_buckets_node_noprof);
 
 /**
  * kvfree() - Free memory.
-- 
2.34.1




* [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family
  2024-05-31 19:14 [PATCH v4 0/6] slab: Introduce dedicated bucket allocator Kees Cook
                   ` (2 preceding siblings ...)
  2024-05-31 19:14 ` [PATCH v4 3/6] mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument Kees Cook
@ 2024-05-31 19:14 ` Kees Cook
  2024-06-04 15:02   ` Simon Horman
  2024-05-31 19:14 ` [PATCH v4 5/6] ipc, msg: Use dedicated slab buckets for alloc_msg() Kees Cook
  2024-05-31 19:14 ` [PATCH v4 6/6] mm/util: Use dedicated slab buckets for memdup_user() Kees Cook
  5 siblings, 1 reply; 14+ messages in thread
From: Kees Cook @ 2024-05-31 19:14 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, jvoisin, Andrew Morton, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, GONG, Ruiqi, Xiu Jianfeng,
	Suren Baghdasaryan, Kent Overstreet, Jann Horn, Matteo Rizzo,
	Thomas Graf, Herbert Xu, linux-kernel, linux-hardening, netdev

Dedicated caches are available for fixed size allocations via
kmem_cache_alloc(), but for dynamically sized allocations there is only
the global kmalloc API's set of buckets available. This means it isn't
possible to separate specific sets of dynamically sized allocations into
a separate collection of caches.

This leads to a use-after-free exploitation weakness in the Linux
kernel since many heap memory spraying/grooming attacks depend on using
userspace-controllable dynamically sized allocations to collide with
fixed size allocations that end up in same cache.

While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
against these kinds of "type confusion" attacks, including for fixed
same-size heap objects, we can create a complementary deterministic
defense for dynamically sized allocations that are directly user
controlled. Addressing these cases is limited in scope, so isolating these
kinds of interfaces will not become an unbounded game of whack-a-mole. For
example, many pass through memdup_user(), making isolation there very
effective.

In order to isolate user-controllable dynamically-sized
allocations from the common system kmalloc allocations, introduce
kmem_buckets_create(), which behaves like kmem_cache_create(). Introduce
kmem_buckets_alloc(), which behaves like kmem_cache_alloc(). Introduce
kmem_buckets_alloc_track_caller() for where caller tracking is
needed. Introduce kmem_buckets_valloc() for cases where vmalloc fallback
is needed.

This can also be used in the future to extend allocation profiling's use
of code tagging to implement per-caller allocation cache isolation[1]
even for dynamic allocations.

Memory allocation pinning[2] is still needed to plug the Use-After-Free
cross-allocator weakness, but that is an existing and separate issue
which is complementary to this improvement. Development continues for
that feature via the SLAB_VIRTUAL[3] series (which could also provide
guard pages -- another complementary improvement).

Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [1]
Link: https://googleprojectzero.blogspot.com/2021/10/how-simple-linux-kernel-memory.html [2]
Link: https://lore.kernel.org/lkml/20230915105933.495735-1-matteorizzo@google.com/ [3]
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: jvoisin <julien.voisin@dustri.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org
---
 include/linux/slab.h | 12 +++++++
 mm/slab_common.c     | 80 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 8853c6eb20b4..b48c50d90aae 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -552,6 +552,11 @@ void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
+kmem_buckets *kmem_buckets_create(const char *name, unsigned int align,
+				  slab_flags_t flags,
+				  unsigned int useroffset, unsigned int usersize,
+				  void (*ctor)(void *));
+
 /*
  * Bulk allocation and freeing operations. These are accelerated in an
  * allocator specific way to avoid taking locks repeatedly or building
@@ -675,6 +680,12 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
 }
 #define kmalloc(...)				alloc_hooks(kmalloc_noprof(__VA_ARGS__))
 
+#define kmem_buckets_alloc(_b, _size, _flags)	\
+	alloc_hooks(__kmalloc_node_noprof(_b, _size, _flags, NUMA_NO_NODE))
+
+#define kmem_buckets_alloc_track_caller(_b, _size, _flags)	\
+	alloc_hooks(kmalloc_node_track_caller_noprof(_b, _size, _flags, NUMA_NO_NODE, _RET_IP_))
+
 static __always_inline __alloc_size(1) void *kmalloc_node_noprof(size_t size, gfp_t flags, int node)
 {
 	if (__builtin_constant_p(size) && size) {
@@ -818,6 +829,7 @@ extern void *kvmalloc_buckets_node_noprof(size_t size, gfp_t flags, int node)
 #define kvzalloc(_size, _flags)			kvmalloc(_size, (_flags)|__GFP_ZERO)
 
 #define kvzalloc_node(_size, _flags, _node)	kvmalloc_node(_size, (_flags)|__GFP_ZERO, _node)
+#define kmem_buckets_valloc(_b, _size, _flags)	kvmalloc_buckets_node(_b, _size, _flags, NUMA_NO_NODE)
 
 static inline __alloc_size(1, 2) void *
 kvmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags, int node)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index b5c879fa66bc..f42a98d368a9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -392,6 +392,82 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
 }
 EXPORT_SYMBOL(kmem_cache_create);
 
+static struct kmem_cache *kmem_buckets_cache __ro_after_init;
+
+kmem_buckets *kmem_buckets_create(const char *name, unsigned int align,
+				  slab_flags_t flags,
+				  unsigned int useroffset,
+				  unsigned int usersize,
+				  void (*ctor)(void *))
+{
+	kmem_buckets *b;
+	int idx;
+
+	/*
+	 * When the separate buckets API is not built in, just return
+	 * a non-NULL value for the kmem_buckets pointer, which will be
+	 * unused when performing allocations.
+	 */
+	if (!IS_ENABLED(CONFIG_SLAB_BUCKETS))
+		return ZERO_SIZE_PTR;
+
+	if (WARN_ON(!kmem_buckets_cache))
+		return NULL;
+
+	b = kmem_cache_alloc(kmem_buckets_cache, GFP_KERNEL|__GFP_ZERO);
+	if (WARN_ON(!b))
+		return NULL;
+
+	flags |= SLAB_NO_MERGE;
+
+	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
+		char *short_size, *cache_name;
+		unsigned int cache_useroffset, cache_usersize;
+		unsigned int size;
+
+		if (!kmalloc_caches[KMALLOC_NORMAL][idx])
+			continue;
+
+		size = kmalloc_caches[KMALLOC_NORMAL][idx]->object_size;
+		if (!size)
+			continue;
+
+		short_size = strchr(kmalloc_caches[KMALLOC_NORMAL][idx]->name, '-');
+		if (WARN_ON(!short_size))
+			goto fail;
+
+		cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
+		if (WARN_ON(!cache_name))
+			goto fail;
+
+		if (useroffset >= size) {
+			cache_useroffset = 0;
+			cache_usersize = 0;
+		} else {
+			cache_useroffset = useroffset;
+			cache_usersize = min(size - cache_useroffset, usersize);
+		}
+		(*b)[idx] = kmem_cache_create_usercopy(cache_name, size,
+					align, flags, cache_useroffset,
+					cache_usersize, ctor);
+		kfree(cache_name);
+		if (WARN_ON(!(*b)[idx]))
+			goto fail;
+	}
+
+	return b;
+
+fail:
+	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
+		if ((*b)[idx])
+			kmem_cache_destroy((*b)[idx]);
+	}
+	kfree(b);
+
+	return NULL;
+}
+EXPORT_SYMBOL(kmem_buckets_create);
+
 #ifdef SLAB_SUPPORTS_SYSFS
 /*
  * For a given kmem_cache, kmem_cache_destroy() should only be called
@@ -931,6 +1007,10 @@ void __init create_kmalloc_caches(void)
 
 	/* Kmalloc array is now usable */
 	slab_state = UP;
+
+	kmem_buckets_cache = kmem_cache_create("kmalloc_buckets",
+					       sizeof(kmem_buckets),
+					       0, 0, NULL);
 }
 
 /**
-- 
2.34.1




* [PATCH v4 5/6] ipc, msg: Use dedicated slab buckets for alloc_msg()
  2024-05-31 19:14 [PATCH v4 0/6] slab: Introduce dedicated bucket allocator Kees Cook
                   ` (3 preceding siblings ...)
  2024-05-31 19:14 ` [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family Kees Cook
@ 2024-05-31 19:14 ` Kees Cook
  2024-05-31 19:14 ` [PATCH v4 6/6] mm/util: Use dedicated slab buckets for memdup_user() Kees Cook
  5 siblings, 0 replies; 14+ messages in thread
From: Kees Cook @ 2024-05-31 19:14 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, GONG, Ruiqi, Xiu Jianfeng, Suren Baghdasaryan,
	Kent Overstreet, Jann Horn, Matteo Rizzo, jvoisin,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Hyeonggon Yoo, Thomas Graf,
	Herbert Xu, linux-kernel, linux-mm, linux-hardening, netdev

The msg subsystem is a common target for exploiting[1][2][3][4][5][6][7]
use-after-free type confusion flaws in the kernel for both read and write
primitives. Avoid having a user-controlled dynamically-sized allocation
share the global kmalloc cache by using a separate set of kmalloc buckets
via the kmem_buckets API.

Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1]
Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2]
Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3]
Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4]
Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5]
Link: https://zplin.me/papers/ELOISE.pdf [6]
Link: https://syst3mfailure.io/wall-of-perdition/ [7]
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: "GONG, Ruiqi" <gongruiqi@huaweicloud.com>
Cc: Xiu Jianfeng <xiujianfeng@huawei.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Jann Horn <jannh@google.com>
Cc: Matteo Rizzo <matteorizzo@google.com>
Cc: jvoisin <julien.voisin@dustri.org>
---
 ipc/msgutil.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/ipc/msgutil.c b/ipc/msgutil.c
index d0a0e877cadd..f392f30a057a 100644
--- a/ipc/msgutil.c
+++ b/ipc/msgutil.c
@@ -42,6 +42,17 @@ struct msg_msgseg {
 #define DATALEN_MSG	((size_t)PAGE_SIZE-sizeof(struct msg_msg))
 #define DATALEN_SEG	((size_t)PAGE_SIZE-sizeof(struct msg_msgseg))
 
+static kmem_buckets *msg_buckets __ro_after_init;
+
+static int __init init_msg_buckets(void)
+{
+	msg_buckets = kmem_buckets_create("msg_msg", 0, SLAB_ACCOUNT,
+					  sizeof(struct msg_msg),
+					  DATALEN_MSG, NULL);
+
+	return 0;
+}
+subsys_initcall(init_msg_buckets);
 
 static struct msg_msg *alloc_msg(size_t len)
 {
@@ -50,7 +61,7 @@ static struct msg_msg *alloc_msg(size_t len)
 	size_t alen;
 
 	alen = min(len, DATALEN_MSG);
-	msg = kmalloc(sizeof(*msg) + alen, GFP_KERNEL_ACCOUNT);
+	msg = kmem_buckets_alloc(msg_buckets, sizeof(*msg) + alen, GFP_KERNEL);
 	if (msg == NULL)
 		return NULL;
 
-- 
2.34.1




* [PATCH v4 6/6] mm/util: Use dedicated slab buckets for memdup_user()
  2024-05-31 19:14 [PATCH v4 0/6] slab: Introduce dedicated bucket allocator Kees Cook
                   ` (4 preceding siblings ...)
  2024-05-31 19:14 ` [PATCH v4 5/6] ipc, msg: Use dedicated slab buckets for alloc_msg() Kees Cook
@ 2024-05-31 19:14 ` Kees Cook
  5 siblings, 0 replies; 14+ messages in thread
From: Kees Cook @ 2024-05-31 19:14 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, GONG, Ruiqi, Xiu Jianfeng, Suren Baghdasaryan,
	Kent Overstreet, Jann Horn, Matteo Rizzo, jvoisin, linux-mm,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Hyeonggon Yoo, Thomas Graf,
	Herbert Xu, linux-kernel, linux-hardening, netdev

Both memdup_user() and vmemdup_user() handle allocations that are
regularly used for exploiting use-after-free type confusion flaws in
the kernel (e.g. prctl() PR_SET_VMA_ANON_NAME[1] and setxattr[2][3][4]
respectively).

Since both are designed for contents coming from userspace, they allow
for userspace-controlled allocation sizes. Use a dedicated set of kmalloc
buckets so these allocations do not share caches with the global kmalloc
buckets.

After a fresh boot under Ubuntu 23.10, we can see the caches are already
in active use:

 # grep ^memdup /proc/slabinfo
 memdup_user-8k         4      4   8192    4    8 : ...
 memdup_user-4k         8      8   4096    8    8 : ...
 memdup_user-2k        16     16   2048   16    8 : ...
 memdup_user-1k         0      0   1024   16    4 : ...
 memdup_user-512        0      0    512   16    2 : ...
 memdup_user-256        0      0    256   16    1 : ...
 memdup_user-128        0      0    128   32    1 : ...
 memdup_user-64       256    256     64   64    1 : ...
 memdup_user-32       512    512     32  128    1 : ...
 memdup_user-16      1024   1024     16  256    1 : ...
 memdup_user-8       2048   2048      8  512    1 : ...
 memdup_user-192        0      0    192   21    1 : ...
 memdup_user-96       168    168     96   42    1 : ...

Link: https://starlabs.sg/blog/2023/07-prctl-anon_vma_name-an-amusing-heap-spray/ [1]
Link: https://duasynt.com/blog/linux-kernel-heap-spray [2]
Link: https://etenal.me/archives/1336 [3]
Link: https://github.com/a13xp0p0v/kernel-hack-drill/blob/master/drill_exploit_uaf.c [4]
Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: "GONG, Ruiqi" <gongruiqi@huaweicloud.com>
Cc: Xiu Jianfeng <xiujianfeng@huawei.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Jann Horn <jannh@google.com>
Cc: Matteo Rizzo <matteorizzo@google.com>
Cc: jvoisin <julien.voisin@dustri.org>
Cc: linux-mm@kvack.org
---
 mm/util.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/util.c b/mm/util.c
index 53f7fc5912bd..f30460c82641 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -198,6 +198,16 @@ char *kmemdup_nul(const char *s, size_t len, gfp_t gfp)
 }
 EXPORT_SYMBOL(kmemdup_nul);
 
+static kmem_buckets *user_buckets __ro_after_init;
+
+static int __init init_user_buckets(void)
+{
+	user_buckets = kmem_buckets_create("memdup_user", 0, 0, 0, INT_MAX, NULL);
+
+	return 0;
+}
+subsys_initcall(init_user_buckets);
+
 /**
  * memdup_user - duplicate memory region from user space
  *
@@ -211,7 +221,7 @@ void *memdup_user(const void __user *src, size_t len)
 {
 	void *p;
 
-	p = kmalloc_track_caller(len, GFP_USER | __GFP_NOWARN);
+	p = kmem_buckets_alloc_track_caller(user_buckets, len, GFP_USER | __GFP_NOWARN);
 	if (!p)
 		return ERR_PTR(-ENOMEM);
 
@@ -237,7 +247,7 @@ void *vmemdup_user(const void __user *src, size_t len)
 {
 	void *p;
 
-	p = kvmalloc(len, GFP_USER);
+	p = kmem_buckets_valloc(user_buckets, len, GFP_USER);
 	if (!p)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.34.1




* Re: [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
  2024-05-31 19:14 ` [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node() Kees Cook
@ 2024-06-03 17:06   ` Vlastimil Babka
  2024-06-03 22:44     ` Kees Cook
  0 siblings, 1 reply; 14+ messages in thread
From: Vlastimil Babka @ 2024-06-03 17:06 UTC (permalink / raw)
  To: Kees Cook
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	jvoisin, Andrew Morton, Roman Gushchin, Hyeonggon Yoo, linux-mm,
	linux-hardening, GONG, Ruiqi, Xiu Jianfeng, Suren Baghdasaryan,
	Kent Overstreet, Jann Horn, Matteo Rizzo, Thomas Graf,
	Herbert Xu, linux-kernel, netdev

On 5/31/24 9:14 PM, Kees Cook wrote:
> Introduce CONFIG_SLAB_BUCKETS which provides the infrastructure to
> support separated kmalloc buckets (in the following kmem_buckets_create()
> patches and future codetag-based separation). Since this will provide
> a mitigation for a very common case of exploits, enable it by default.

Are you sure? I thought there was a policy that nobody is special enough
to have stuff enabled by default. Is it worth risking Linus shouting? :)
 
> To be able to choose which buckets to allocate from, make the buckets
> available to the internal kmalloc interfaces by adding them as the
> first argument, rather than depending on the buckets being chosen from
> the fixed set of global buckets. Where the bucket is not available,
> pass NULL, which means "use the default system kmalloc bucket set"
> (the prior existing behavior), as implemented in kmalloc_slab().
> 
> To avoid adding the extra argument when !CONFIG_SLAB_BUCKETS, only the
> top-level macros and static inlines use the buckets argument (where
> they are stripped out and compiled out respectively). The actual extern
> functions can then be built without the argument, and the internals
> fall back to the global kmalloc buckets unconditionally.
> 
> Signed-off-by: Kees Cook <kees@kernel.org>
> ---
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: jvoisin <julien.voisin@dustri.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: linux-mm@kvack.org
> Cc: linux-hardening@vger.kernel.org
> ---
>  include/linux/slab.h | 34 ++++++++++++++++++++++++++--------
>  mm/Kconfig           | 15 +++++++++++++++
>  mm/slab.h            |  6 ++++--
>  mm/slab_common.c     |  4 ++--
>  mm/slub.c            | 34 ++++++++++++++++++++++++----------
>  mm/util.c            |  2 +-
>  6 files changed, 72 insertions(+), 23 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index de2b7209cd05..b1165b22cc6f 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -569,8 +569,17 @@ static __always_inline void kfree_bulk(size_t size, void **p)
>  	kmem_cache_free_bulk(NULL, size, p);
>  }
>  
> -void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
> -							 __alloc_size(1);
> +#ifdef CONFIG_SLAB_BUCKETS
> +void *__kmalloc_buckets_node_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node)
> +				__assume_kmalloc_alignment __alloc_size(2);
> +# define __kmalloc_node_noprof(b, size, flags, node)	\
> +	__kmalloc_buckets_node_noprof(b, size, flags, node)
> +#else
> +void *__kmalloc_buckets_node_noprof(size_t size, gfp_t flags, int node)
> +				__assume_kmalloc_alignment __alloc_size(1);
> +# define __kmalloc_node_noprof(b, size, flags, node)	\
> +	__kmalloc_buckets_node_noprof(size, flags, node)
> +#endif
>  #define __kmalloc_node(...)			alloc_hooks(__kmalloc_node_noprof(__VA_ARGS__))

I found this too verbose and tried a different approach, in the end rewrote
everything to verify the idea works. So I'll just link to the result in git:

https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-buckets-v4-rewrite

It's also rebased on slab.git:slab/for-6.11/cleanups with some alloc_hooks()
cleanups that would cause conflicts otherwise.

But the crux of that approach is:

/*
 * These macros allow declaring a kmem_buckets * parameter alongside size, which
 * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call
 * sites don't have to pass NULL.
 */
#ifdef CONFIG_SLAB_BUCKETS
#define DECL_BUCKET_PARAMS(_size, _b)   size_t (_size), kmem_buckets *(_b)
#define PASS_BUCKET_PARAMS(_size, _b)   (_size), (_b)
#define PASS_BUCKET_PARAM(_b)           (_b)
#else
#define DECL_BUCKET_PARAMS(_size, _b)   size_t (_size)
#define PASS_BUCKET_PARAMS(_size, _b)   (_size)
#define PASS_BUCKET_PARAM(_b)           NULL
#endif

Then we have declaration e.g.

void *__kmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
                                __assume_kmalloc_alignment __alloc_size(1);

and the function is called like (from code not using buckets)
return __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, NULL), flags, node);

or (from code using buckets)
#define kmem_buckets_alloc(_b, _size, _flags)   \
        alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))

And implementation looks like:

void *__kmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
{
        return __do_kmalloc_node(size, PASS_BUCKET_PARAM(b), flags, node, _RET_IP_);
}

The size param is always the first, so the __alloc_size(1) doesn't need tweaking.
size is also used in the macros even if it's never mangled, because it's easy
to pass one param instead of two, but not zero params instead of one, if we want
the ending comma not to be part of the macro (which would look awkward).
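
So with CONFIG_SLAB_BUCKETS=n, the examples above reduce to roughly this
(expansion sketch, modulo the extra parentheses from the macro params):

/* declaration: the bucket parameter disappears, __alloc_size(1) stays valid */
void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node)
                                __assume_kmalloc_alignment __alloc_size(1);

/* call site not using buckets: */
return __kmalloc_node_noprof(size, flags, node);

/* implementation: the internals get NULL and fall back to the global buckets */
return __do_kmalloc_node(size, NULL, flags, node, _RET_IP_);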

Does it look ok to you? Of course names of the macros could be tweaked. Anyway feel
free to use the branch for the followup. Hopefully this way is also compatible with
the planned codetag based followup.

>  
>  void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags,
> @@ -679,7 +688,7 @@ static __always_inline __alloc_size(1) void *kmalloc_node_noprof(size_t size, gf
>  				kmalloc_caches[kmalloc_type(flags, _RET_IP_)][index],
>  				flags, node, size);
>  	}
> -	return __kmalloc_node_noprof(size, flags, node);
> +	return __kmalloc_node_noprof(NULL, size, flags, node);
>  }
>  #define kmalloc_node(...)			alloc_hooks(kmalloc_node_noprof(__VA_ARGS__))
>  
> @@ -730,10 +739,19 @@ static inline __realloc_size(2, 3) void * __must_check krealloc_array_noprof(voi
>   */
>  #define kcalloc(n, size, flags)		kmalloc_array(n, size, (flags) | __GFP_ZERO)
>  
> -void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
> -				  unsigned long caller) __alloc_size(1);
> +#ifdef CONFIG_SLAB_BUCKETS
> +void *__kmalloc_node_track_caller_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node,
> +					 unsigned long caller) __alloc_size(2);
> +# define kmalloc_node_track_caller_noprof(b, size, flags, node, caller)	\
> +	__kmalloc_node_track_caller_noprof(b, size, flags, node, caller)
> +#else
> +void *__kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
> +					 unsigned long caller) __alloc_size(1);
> +# define kmalloc_node_track_caller_noprof(b, size, flags, node, caller)	\
> +	__kmalloc_node_track_caller_noprof(size, flags, node, caller)
> +#endif
>  #define kmalloc_node_track_caller(...)		\
> -	alloc_hooks(kmalloc_node_track_caller_noprof(__VA_ARGS__, _RET_IP_))
> +	alloc_hooks(kmalloc_node_track_caller_noprof(NULL, __VA_ARGS__, _RET_IP_))
>  
>  /*
>   * kmalloc_track_caller is a special version of kmalloc that records the
> @@ -746,7 +764,7 @@ void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags, int node,
>  #define kmalloc_track_caller(...)		kmalloc_node_track_caller(__VA_ARGS__, NUMA_NO_NODE)
>  
>  #define kmalloc_track_caller_noprof(...)	\
> -		kmalloc_node_track_caller_noprof(__VA_ARGS__, NUMA_NO_NODE, _RET_IP_)
> +		kmalloc_node_track_caller_noprof(NULL, __VA_ARGS__, NUMA_NO_NODE, _RET_IP_)
>  
>  static inline __alloc_size(1, 2) void *kmalloc_array_node_noprof(size_t n, size_t size, gfp_t flags,
>  							  int node)
> @@ -757,7 +775,7 @@ static inline __alloc_size(1, 2) void *kmalloc_array_node_noprof(size_t n, size_
>  		return NULL;
>  	if (__builtin_constant_p(n) && __builtin_constant_p(size))
>  		return kmalloc_node_noprof(bytes, flags, node);
> -	return __kmalloc_node_noprof(bytes, flags, node);
> +	return __kmalloc_node_noprof(NULL, bytes, flags, node);
>  }
>  #define kmalloc_array_node(...)			alloc_hooks(kmalloc_array_node_noprof(__VA_ARGS__))
>  
> diff --git a/mm/Kconfig b/mm/Kconfig
> index b4cb45255a54..8c29af7835cc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -273,6 +273,21 @@ config SLAB_FREELIST_HARDENED
>  	  sacrifices to harden the kernel slab allocator against common
>  	  freelist exploit methods.
>  
> +config SLAB_BUCKETS
> +	bool "Support allocation from separate kmalloc buckets"
> +	default y
> +	depends on !SLUB_TINY
> +	help
> +	  Kernel heap attacks frequently depend on being able to create
> +	  specifically-sized allocations with user-controlled contents
> +	  that will be allocated into the same kmalloc bucket as a
> +	  target object. To avoid sharing these allocation buckets,
> +	  provide an explicitly separated set of buckets to be used for
> +	  user-controlled allocations. This may very slightly increase
> +	  memory fragmentation, though in practice it's only a handful
> +	  of extra pages since the bulk of user-controlled allocations
> +	  are relatively long-lived.
> +
>  config SLUB_STATS
>  	default n
>  	bool "Enable performance statistics"
> diff --git a/mm/slab.h b/mm/slab.h
> index 5f8f47c5bee0..f459cd338852 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -403,16 +403,18 @@ static inline unsigned int size_index_elem(unsigned int bytes)
>   * KMALLOC_MAX_CACHE_SIZE and the caller must check that.
>   */
>  static inline struct kmem_cache *
> -kmalloc_slab(size_t size, gfp_t flags, unsigned long caller)
> +kmalloc_slab(kmem_buckets *b, size_t size, gfp_t flags, unsigned long caller)
>  {
>  	unsigned int index;
>  
> +	if (!b)
> +		b = &kmalloc_caches[kmalloc_type(flags, caller)];
>  	if (size <= 192)
>  		index = kmalloc_size_index[size_index_elem(size)];
>  	else
>  		index = fls(size - 1);
>  
> -	return kmalloc_caches[kmalloc_type(flags, caller)][index];
> +	return (*b)[index];
>  }
>  
>  gfp_t kmalloc_fix_flags(gfp_t flags);
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index e0b1c109bed2..b5c879fa66bc 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -702,7 +702,7 @@ size_t kmalloc_size_roundup(size_t size)
>  		 * The flags don't matter since size_index is common to all.
>  		 * Neither does the caller for just getting ->object_size.
>  		 */
> -		return kmalloc_slab(size, GFP_KERNEL, 0)->object_size;
> +		return kmalloc_slab(NULL, size, GFP_KERNEL, 0)->object_size;
>  	}
>  
>  	/* Above the smaller buckets, size is a multiple of page size. */
> @@ -1179,7 +1179,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>  		return (void *)p;
>  	}
>  
> -	ret = kmalloc_node_track_caller_noprof(new_size, flags, NUMA_NO_NODE, _RET_IP_);
> +	ret = kmalloc_node_track_caller_noprof(NULL, new_size, flags, NUMA_NO_NODE, _RET_IP_);
>  	if (ret && p) {
>  		/* Disable KASAN checks as the object's redzone is accessed. */
>  		kasan_disable_current();
> diff --git a/mm/slub.c b/mm/slub.c
> index 0809760cf789..ec682a325abe 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4099,7 +4099,7 @@ void *kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
>  EXPORT_SYMBOL(kmalloc_large_node_noprof);
>  
>  static __always_inline
> -void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
> +void *__do_kmalloc_node(kmem_buckets *b, size_t size, gfp_t flags, int node,
>  			unsigned long caller)
>  {
>  	struct kmem_cache *s;
> @@ -4115,7 +4115,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
>  	if (unlikely(!size))
>  		return ZERO_SIZE_PTR;
>  
> -	s = kmalloc_slab(size, flags, caller);
> +	s = kmalloc_slab(b, size, flags, caller);
>  
>  	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
>  	ret = kasan_kmalloc(s, ret, size, flags);
> @@ -4123,24 +4123,38 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
>  	return ret;
>  }
>  
> -void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node)
> +#ifdef CONFIG_SLAB_BUCKETS
> +# define __do_kmalloc_buckets_node(b, size, flags, node, caller)	\
> +	__do_kmalloc_node(b, size, flags, node, caller)
> +void *__kmalloc_buckets_node_noprof(kmem_buckets *b, size_t size, gfp_t flags, int node)
> +#else
> +# define __do_kmalloc_buckets_node(b, size, flags, node, caller)	\
> +	__do_kmalloc_node(NULL, size, flags, node, caller)
> +void *__kmalloc_buckets_node_noprof(size_t size, gfp_t flags, int node)
> +#endif
>  {
> -	return __do_kmalloc_node(size, flags, node, _RET_IP_);
> +	return __do_kmalloc_buckets_node(b, size, flags, node, _RET_IP_);
>  }
> -EXPORT_SYMBOL(__kmalloc_node_noprof);
> +EXPORT_SYMBOL(__kmalloc_buckets_node_noprof);
>  
>  void *__kmalloc_noprof(size_t size, gfp_t flags)
>  {
> -	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
> +	return __do_kmalloc_buckets_node(NULL, size, flags, NUMA_NO_NODE, _RET_IP_);
>  }
>  EXPORT_SYMBOL(__kmalloc_noprof);
>  
> -void *kmalloc_node_track_caller_noprof(size_t size, gfp_t flags,
> -				       int node, unsigned long caller)
> +#ifdef CONFIG_SLAB_BUCKETS
> +void *__kmalloc_node_track_caller_noprof(kmem_buckets *b, size_t size, gfp_t flags,
> +					 int node, unsigned long caller)
> +#else
> +void *__kmalloc_node_track_caller_noprof(size_t size, gfp_t flags,
> +					 int node, unsigned long caller)
> +#endif
>  {
> -	return __do_kmalloc_node(size, flags, node, caller);
> +	return __do_kmalloc_buckets_node(b, size, flags, node, caller);
> +
>  }
> -EXPORT_SYMBOL(kmalloc_node_track_caller_noprof);
> +EXPORT_SYMBOL(__kmalloc_node_track_caller_noprof);
>  
>  void *kmalloc_trace_noprof(struct kmem_cache *s, gfp_t gfpflags, size_t size)
>  {
> diff --git a/mm/util.c b/mm/util.c
> index c9e519e6811f..80430e5ba981 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -128,7 +128,7 @@ void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp)
>  {
>  	void *p;
>  
> -	p = kmalloc_node_track_caller_noprof(len, gfp, NUMA_NO_NODE, _RET_IP_);
> +	p = kmalloc_node_track_caller_noprof(NULL, len, gfp, NUMA_NO_NODE, _RET_IP_);
>  	if (p)
>  		memcpy(p, src, len);
>  	return p;




* Re: [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
  2024-06-03 17:06   ` Vlastimil Babka
@ 2024-06-03 22:44     ` Kees Cook
  2024-06-04 12:30       ` Vlastimil Babka
  0 siblings, 1 reply; 14+ messages in thread
From: Kees Cook @ 2024-06-03 22:44 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	jvoisin, Andrew Morton, Roman Gushchin, Hyeonggon Yoo, linux-mm,
	linux-hardening, GONG, Ruiqi, Xiu Jianfeng, Suren Baghdasaryan,
	Kent Overstreet, Jann Horn, Matteo Rizzo, Thomas Graf,
	Herbert Xu, linux-kernel, netdev

On Mon, Jun 03, 2024 at 07:06:15PM +0200, Vlastimil Babka wrote:
> On 5/31/24 9:14 PM, Kees Cook wrote:
> > Introduce CONFIG_SLAB_BUCKETS which provides the infrastructure to
> > support separated kmalloc buckets (in the following kmem_buckets_create()
> > patches and future codetag-based separation). Since this will provide
> > a mitigation for a very common case of exploits, enable it by default.
> 
> Are you sure? I thought there was a policy that nobody is special enough
> to have stuff enabled by default. Is it worth risking Linus shouting? :)

I think it's important to have this enabled given how common the
exploitation methodology is and how cheap this solution is. Regardless,
if you want it "default n", I can change it.

> I found this too verbose and tried a different approach, and in the end rewrote
> everything to verify the idea works. So I'll just link to the result in git:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-buckets-v4-rewrite
> 
> It's also rebased on slab.git:slab/for-6.11/cleanups with some alloc_hooks()
> cleanups that would cause conflicts otherwise.
> 
> But the crux of that approach is:
> 
> /*
>  * These macros allow declaring a kmem_buckets * parameter alongside size, which
>  * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call
>  * sites don't have to pass NULL.
>  */
> #ifdef CONFIG_SLAB_BUCKETS
> #define DECL_BUCKET_PARAMS(_size, _b)   size_t (_size), kmem_buckets *(_b)
> #define PASS_BUCKET_PARAMS(_size, _b)   (_size), (_b)
> #define PASS_BUCKET_PARAM(_b)           (_b)
> #else
> #define DECL_BUCKET_PARAMS(_size, _b)   size_t (_size)
> #define PASS_BUCKET_PARAMS(_size, _b)   (_size)
> #define PASS_BUCKET_PARAM(_b)           NULL
> #endif
> 
> Then we have declaration e.g.
> 
> void *__kmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
>                                 __assume_kmalloc_alignment __alloc_size(1);
> 
> and the function is called like (from code not using buckets)
> return __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, NULL), flags, node);
> 
> or (from code using buckets)
> #define kmem_buckets_alloc(_b, _size, _flags)   \
>         alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
> 
> And implementation looks like:
> 
> void *__kmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
> {
>         return __do_kmalloc_node(size, PASS_BUCKET_PARAM(b), flags, node, _RET_IP_);
> }
> 
> The size param is always the first, so the __alloc_size(1) doesn't need tweaking.
> size is also used in the macros even though it's never mangled, because it's easy
> to pass one param instead of two, but not zero params instead of one, if we want
> the ending comma not to be part of the macro (which would look awkward).
> 
> Does it look ok to you? Of course the names of the macros could be tweaked. Anyway feel
> free to use the branch for the followup. Hopefully this way is also compatible with
> the planned codetag-based followup.
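
A rough expansion of the quoted macros, shown only as an illustration (both
forms are derived from the DECL_BUCKET_PARAMS()/PASS_BUCKET_PARAMS()
definitions above, not copied from the branch):

	/* CONFIG_SLAB_BUCKETS=y: the bucket pointer is a real parameter ... */
	void *__kmalloc_node_noprof(size_t size, kmem_buckets *b, gfp_t flags, int node);
	/* ... and non-bucket callers end up passing NULL explicitly: */
	return __kmalloc_node_noprof(size, NULL, flags, node);

	/* CONFIG_SLAB_BUCKETS=n: the parameter disappears from the prototype ... */
	void *__kmalloc_node_noprof(size_t size, gfp_t flags, int node);
	return __kmalloc_node_noprof(size, flags, node);

	/* ... while PASS_BUCKET_PARAM(b) inside the implementation becomes NULL,
	 * so __do_kmalloc_node() always receives a bucket argument either way:
	 */
	ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b), flags, node, _RET_IP_);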

This looks really nice, thank you! This is well aligned with the codetag
followup, which also needs to have "size" be very easy to find (so the
macros can check for compile-time-constant or not).

I will go work from your branch...

Thanks!

-Kees

-- 
Kees Cook


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
  2024-06-03 22:44     ` Kees Cook
@ 2024-06-04 12:30       ` Vlastimil Babka
  0 siblings, 0 replies; 14+ messages in thread
From: Vlastimil Babka @ 2024-06-04 12:30 UTC (permalink / raw)
  To: Kees Cook
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	jvoisin, Andrew Morton, Roman Gushchin, Hyeonggon Yoo, linux-mm,
	linux-hardening, GONG, Ruiqi, Xiu Jianfeng, Suren Baghdasaryan,
	Kent Overstreet, Jann Horn, Matteo Rizzo, Thomas Graf,
	Herbert Xu, linux-kernel, netdev

On 6/4/24 12:44 AM, Kees Cook wrote:
> On Mon, Jun 03, 2024 at 07:06:15PM +0200, Vlastimil Babka wrote:
>> On 5/31/24 9:14 PM, Kees Cook wrote:
>> > Introduce CONFIG_SLAB_BUCKETS which provides the infrastructure to
>> > support separated kmalloc buckets (in the following kmem_buckets_create()
>> > patches and future codetag-based separation). Since this will provide
>> > a mitigation for a very common case of exploits, enable it by default.
>> 
>> Are you sure? I thought there was a policy that nobody is special enough
>> to have stuff enabled by default. Is it worth risking Linus shouting? :)
> 
> I think it's important to have this enabled given how common the
> exploitation methodology is and how cheap this solution is. Regardless,
> if you want it "default n", I can change it.

Yeah, I'd just recommend it in the help, noting it has a bit of memory
overhead. Defaults are not that important anyway IMHO; either it's the distro
doing the config, or individually security-conscious people who should know
what they are doing.

> 
> This looks really nice, thank you! This is well aligned with the codetag
> followup, which also needs to have "size" be very easy to find (so the
> macros can check for compile-time-constant or not).
> 
> I will go work from your branch...

Great!

> Thanks!
> 
> -Kees
> 



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family
  2024-05-31 19:14 ` [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family Kees Cook
@ 2024-06-04 15:02   ` Simon Horman
  2024-06-04 22:13     ` Tycho Andersen
  0 siblings, 1 reply; 14+ messages in thread
From: Simon Horman @ 2024-06-04 15:02 UTC (permalink / raw)
  To: Kees Cook
  Cc: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, jvoisin, Andrew Morton, Roman Gushchin,
	Hyeonggon Yoo, linux-mm, GONG, Ruiqi, Xiu Jianfeng,
	Suren Baghdasaryan, Kent Overstreet, Jann Horn, Matteo Rizzo,
	Thomas Graf, Herbert Xu, linux-kernel, linux-hardening, netdev

On Fri, May 31, 2024 at 12:14:56PM -0700, Kees Cook wrote:

...

> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index b5c879fa66bc..f42a98d368a9 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -392,6 +392,82 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
>  }
>  EXPORT_SYMBOL(kmem_cache_create);
>  
> +static struct kmem_cache *kmem_buckets_cache __ro_after_init;
> +
> +kmem_buckets *kmem_buckets_create(const char *name, unsigned int align,
> +				  slab_flags_t flags,
> +				  unsigned int useroffset,
> +				  unsigned int usersize,
> +				  void (*ctor)(void *))
> +{
> +	kmem_buckets *b;
> +	int idx;
> +
> +	/*
> +	 * When the separate buckets API is not built in, just return
> +	 * a non-NULL value for the kmem_buckets pointer, which will be
> +	 * unused when performing allocations.
> +	 */
> +	if (!IS_ENABLED(CONFIG_SLAB_BUCKETS))
> +		return ZERO_SIZE_PTR;
> +
> +	if (WARN_ON(!kmem_buckets_cache))
> +		return NULL;
> +
> +	b = kmem_cache_alloc(kmem_buckets_cache, GFP_KERNEL|__GFP_ZERO);
> +	if (WARN_ON(!b))
> +		return NULL;
> +
> +	flags |= SLAB_NO_MERGE;
> +
> +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> +		char *short_size, *cache_name;
> +		unsigned int cache_useroffset, cache_usersize;
> +		unsigned int size;
> +
> +		if (!kmalloc_caches[KMALLOC_NORMAL][idx])
> +			continue;
> +
> +		size = kmalloc_caches[KMALLOC_NORMAL][idx]->object_size;
> +		if (!size)
> +			continue;
> +
> +		short_size = strchr(kmalloc_caches[KMALLOC_NORMAL][idx]->name, '-');
> +		if (WARN_ON(!short_size))
> +			goto fail;
> +
> +		cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
> +		if (WARN_ON(!cache_name))
> +			goto fail;
> +
> +		if (useroffset >= size) {
> +			cache_useroffset = 0;
> +			cache_usersize = 0;
> +		} else {
> +			cache_useroffset = useroffset;
> +			cache_usersize = min(size - cache_useroffset, usersize);
> +		}
> +		(*b)[idx] = kmem_cache_create_usercopy(cache_name, size,
> +					align, flags, cache_useroffset,
> +					cache_usersize, ctor);
> +		kfree(cache_name);
> +		if (WARN_ON(!(*b)[idx]))
> +			goto fail;
> +	}
> +
> +	return b;
> +
> +fail:
> +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> +		if ((*b)[idx])
> +			kmem_cache_destroy((*b)[idx]);

nit: I don't think it is necessary to guard this with a check for NULL.

> +	}
> +	kfree(b);
> +
> +	return NULL;
> +}
> +EXPORT_SYMBOL(kmem_buckets_create);
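
As a usage sketch only (the cache name, flags and usercopy bounds below are
made-up values, and kmem_buckets_alloc() is the wrapper described in the
cover letter), a subsystem would create its bucket set once and then route
its user-controlled, dynamically sized allocations through it:

	static kmem_buckets *example_buckets __ro_after_init;

	static int __init example_buckets_init(void)
	{
		/* Same layout as the kmalloc-* caches, but never merged with them. */
		example_buckets = kmem_buckets_create("example", 0, 0,
						      0, UINT_MAX, NULL);
		return example_buckets ? 0 : -ENOMEM;
	}

	static void *example_alloc(size_t len, gfp_t gfp)
	{
		/* Allocations land in the dedicated buckets, not the global ones. */
		return kmem_buckets_alloc(example_buckets, len, gfp);
	}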


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family
  2024-06-04 15:02   ` Simon Horman
@ 2024-06-04 22:13     ` Tycho Andersen
  2024-06-05  0:49       ` Kees Cook
  0 siblings, 1 reply; 14+ messages in thread
From: Tycho Andersen @ 2024-06-04 22:13 UTC (permalink / raw)
  To: Simon Horman
  Cc: Kees Cook, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, jvoisin, Andrew Morton,
	Roman Gushchin, Hyeonggon Yoo, linux-mm, GONG, Ruiqi,
	Xiu Jianfeng, Suren Baghdasaryan, Kent Overstreet, Jann Horn,
	Matteo Rizzo, Thomas Graf, Herbert Xu, linux-kernel,
	linux-hardening, netdev

On Tue, Jun 04, 2024 at 04:02:28PM +0100, Simon Horman wrote:
> On Fri, May 31, 2024 at 12:14:56PM -0700, Kees Cook wrote:
> > +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> > +		char *short_size, *cache_name;
> > +		unsigned int cache_useroffset, cache_usersize;
> > +		unsigned int size;
> > +
> > +		if (!kmalloc_caches[KMALLOC_NORMAL][idx])
> > +			continue;
> > +
> > +		size = kmalloc_caches[KMALLOC_NORMAL][idx]->object_size;
> > +		if (!size)
> > +			continue;
> > +
> > +		short_size = strchr(kmalloc_caches[KMALLOC_NORMAL][idx]->name, '-');
> > +		if (WARN_ON(!short_size))
> > +			goto fail;
> > +
> > +		cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
> > +		if (WARN_ON(!cache_name))
> > +			goto fail;
> > +
> > +		if (useroffset >= size) {
> > +			cache_useroffset = 0;
> > +			cache_usersize = 0;
> > +		} else {
> > +			cache_useroffset = useroffset;
> > +			cache_usersize = min(size - cache_useroffset, usersize);
> > +		}
> > +		(*b)[idx] = kmem_cache_create_usercopy(cache_name, size,
> > +					align, flags, cache_useroffset,
> > +					cache_usersize, ctor);
> > +		kfree(cache_name);
> > +		if (WARN_ON(!(*b)[idx]))
> > +			goto fail;
> > +	}
> > +
> > +	return b;
> > +
> > +fail:
> > +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> > +		if ((*b)[idx])
> > +			kmem_cache_destroy((*b)[idx]);
> 
> nit: I don't think it is necessary to guard this with a check for NULL.

Isn't it? What if a kasprintf() fails halfway through the loop?

Tycho


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family
  2024-06-04 22:13     ` Tycho Andersen
@ 2024-06-05  0:49       ` Kees Cook
  2024-06-05 19:54         ` Simon Horman
  0 siblings, 1 reply; 14+ messages in thread
From: Kees Cook @ 2024-06-05  0:49 UTC (permalink / raw)
  To: Tycho Andersen
  Cc: Simon Horman, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, jvoisin, Andrew Morton,
	Roman Gushchin, Hyeonggon Yoo, linux-mm, GONG, Ruiqi,
	Xiu Jianfeng, Suren Baghdasaryan, Kent Overstreet, Jann Horn,
	Matteo Rizzo, Thomas Graf, Herbert Xu, linux-kernel,
	linux-hardening, netdev

On Tue, Jun 04, 2024 at 04:13:32PM -0600, Tycho Andersen wrote:
> On Tue, Jun 04, 2024 at 04:02:28PM +0100, Simon Horman wrote:
> > On Fri, May 31, 2024 at 12:14:56PM -0700, Kees Cook wrote:
> > > +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> > > +		char *short_size, *cache_name;
> > > +		unsigned int cache_useroffset, cache_usersize;
> > > +		unsigned int size;
> > > +
> > > +		if (!kmalloc_caches[KMALLOC_NORMAL][idx])
> > > +			continue;
> > > +
> > > +		size = kmalloc_caches[KMALLOC_NORMAL][idx]->object_size;
> > > +		if (!size)
> > > +			continue;
> > > +
> > > +		short_size = strchr(kmalloc_caches[KMALLOC_NORMAL][idx]->name, '-');
> > > +		if (WARN_ON(!short_size))
> > > +			goto fail;
> > > +
> > > +		cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
> > > +		if (WARN_ON(!cache_name))
> > > +			goto fail;
> > > +
> > > +		if (useroffset >= size) {
> > > +			cache_useroffset = 0;
> > > +			cache_usersize = 0;
> > > +		} else {
> > > +			cache_useroffset = useroffset;
> > > +			cache_usersize = min(size - cache_useroffset, usersize);
> > > +		}
> > > +		(*b)[idx] = kmem_cache_create_usercopy(cache_name, size,
> > > +					align, flags, cache_useroffset,
> > > +					cache_usersize, ctor);
> > > +		kfree(cache_name);
> > > +		if (WARN_ON(!(*b)[idx]))
> > > +			goto fail;
> > > +	}
> > > +
> > > +	return b;
> > > +
> > > +fail:
> > > +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> > > +		if ((*b)[idx])
> > > +			kmem_cache_destroy((*b)[idx]);
> > 
> > nit: I don't think it is necessary to guard this with a check for NULL.
> 
> Isn't it? What if a kasprintf() fails halfway through the loop?

He means that kmem_cache_destroy() already checks for NULL. Quite right!

void kmem_cache_destroy(struct kmem_cache *s)
{
        int err = -EBUSY;
        bool rcu_set;

        if (unlikely(!s) || !kasan_check_byte(s))
                return;


-- 
Kees Cook


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family
  2024-06-05  0:49       ` Kees Cook
@ 2024-06-05 19:54         ` Simon Horman
  0 siblings, 0 replies; 14+ messages in thread
From: Simon Horman @ 2024-06-05 19:54 UTC (permalink / raw)
  To: Kees Cook
  Cc: Tycho Andersen, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, jvoisin, Andrew Morton,
	Roman Gushchin, Hyeonggon Yoo, linux-mm, GONG, Ruiqi,
	Xiu Jianfeng, Suren Baghdasaryan, Kent Overstreet, Jann Horn,
	Matteo Rizzo, Thomas Graf, Herbert Xu, linux-kernel,
	linux-hardening, netdev

On Tue, Jun 04, 2024 at 05:49:20PM -0700, Kees Cook wrote:
> On Tue, Jun 04, 2024 at 04:13:32PM -0600, Tycho Andersen wrote:
> > On Tue, Jun 04, 2024 at 04:02:28PM +0100, Simon Horman wrote:
> > > On Fri, May 31, 2024 at 12:14:56PM -0700, Kees Cook wrote:
> > > > +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> > > > +		char *short_size, *cache_name;
> > > > +		unsigned int cache_useroffset, cache_usersize;
> > > > +		unsigned int size;
> > > > +
> > > > +		if (!kmalloc_caches[KMALLOC_NORMAL][idx])
> > > > +			continue;
> > > > +
> > > > +		size = kmalloc_caches[KMALLOC_NORMAL][idx]->object_size;
> > > > +		if (!size)
> > > > +			continue;
> > > > +
> > > > +		short_size = strchr(kmalloc_caches[KMALLOC_NORMAL][idx]->name, '-');
> > > > +		if (WARN_ON(!short_size))
> > > > +			goto fail;
> > > > +
> > > > +		cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
> > > > +		if (WARN_ON(!cache_name))
> > > > +			goto fail;
> > > > +
> > > > +		if (useroffset >= size) {
> > > > +			cache_useroffset = 0;
> > > > +			cache_usersize = 0;
> > > > +		} else {
> > > > +			cache_useroffset = useroffset;
> > > > +			cache_usersize = min(size - cache_useroffset, usersize);
> > > > +		}
> > > > +		(*b)[idx] = kmem_cache_create_usercopy(cache_name, size,
> > > > +					align, flags, cache_useroffset,
> > > > +					cache_usersize, ctor);
> > > > +		kfree(cache_name);
> > > > +		if (WARN_ON(!(*b)[idx]))
> > > > +			goto fail;
> > > > +	}
> > > > +
> > > > +	return b;
> > > > +
> > > > +fail:
> > > > +	for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
> > > > +		if ((*b)[idx])
> > > > +			kmem_cache_destroy((*b)[idx]);
> > > 
> > > nit: I don't think it is necessary to guard this with a check for NULL.
> > 
> > Isn't it? What if a kasprintf() fails halfway through the loop?
> 
> He means that kmem_cache_destroy() already checks for NULL. Quite right!
> 
> void kmem_cache_destroy(struct kmem_cache *s)
> {
>         int err = -EBUSY;
>         bool rcu_set;
> 
>         if (unlikely(!s) || !kasan_check_byte(s))
>                 return;

Yes, thanks. That is what I was referring to.


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2024-06-05 19:54 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-31 19:14 [PATCH v4 0/6] slab: Introduce dedicated bucket allocator Kees Cook
2024-05-31 19:14 ` [PATCH v4 1/6] mm/slab: Introduce kmem_buckets typedef Kees Cook
2024-05-31 19:14 ` [PATCH v4 2/6] mm/slab: Plumb kmem_buckets into __do_kmalloc_node() Kees Cook
2024-06-03 17:06   ` Vlastimil Babka
2024-06-03 22:44     ` Kees Cook
2024-06-04 12:30       ` Vlastimil Babka
2024-05-31 19:14 ` [PATCH v4 3/6] mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument Kees Cook
2024-05-31 19:14 ` [PATCH v4 4/6] mm/slab: Introduce kmem_buckets_create() and family Kees Cook
2024-06-04 15:02   ` Simon Horman
2024-06-04 22:13     ` Tycho Andersen
2024-06-05  0:49       ` Kees Cook
2024-06-05 19:54         ` Simon Horman
2024-05-31 19:14 ` [PATCH v4 5/6] ipc, msg: Use dedicated slab buckets for alloc_msg() Kees Cook
2024-05-31 19:14 ` [PATCH v4 6/6] mm/util: Use dedicated slab buckets for memdup_user() Kees Cook

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox