* [PATCH v2 0/9] slab: Introduce dedicated bucket allocator
@ 2024-03-05 10:10 Kees Cook
From: Kees Cook @ 2024-03-05 10:10 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Kees Cook, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo, GONG,
	Ruiqi, Xiu Jianfeng, Suren Baghdasaryan, Kent Overstreet,
	Jann Horn, Matteo Rizzo, linux-kernel, linux-mm, linux-hardening

Hi,

Repeating the commit log for patch 4 here (a short usage sketch follows the
quoted log):

    Dedicated caches are available for fixed size allocations via
    kmem_cache_alloc(), but for dynamically sized allocations there is only
    the global kmalloc API's set of buckets available. This means it isn't
    possible to separate specific sets of dynamically sized allocations into
    a separate collection of caches.

    This leads to a use-after-free exploitation weakness in the Linux
    kernel since many heap memory spraying/grooming attacks depend on using
    userspace-controllable dynamically sized allocations to collide with
    fixed size allocations that end up in the same cache.

    While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
    against these kinds of "type confusion" attacks, including for fixed
    same-size heap objects, we can create a complementary deterministic
    defense for dynamically sized allocations.

    In order to isolate user-controllable sized allocations from system
    allocations, introduce kmem_buckets_create(), which behaves like
    kmem_cache_create(). (The next patch will introduce kmem_buckets_alloc(),
    which behaves like kmem_cache_alloc().)

    This allows allocations to be confined to a dedicated set of sized caches
    (which have the same layout as the kmalloc caches).

    This can also be used in the future once codetag allocation annotations
    exist to implement per-caller allocation cache isolation[0] even for
    dynamic allocations.

    Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [0]
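
To illustrate how the create/alloc pair is meant to be used, here is a
minimal sketch. The "foo" names are made up, and the exact
kmem_buckets_create() argument list is an assumption for illustration;
see patches 4, 5, and 7 for the real prototypes:

    #include <linux/init.h>
    #include <linux/slab.h>

    /* Hypothetical subsystem-private set of kmalloc-sized caches. */
    static kmem_buckets *foo_buckets __ro_after_init;

    static int __init foo_buckets_init(void)
    {
            /* Argument list assumed here; see patch 4 for the prototype. */
            foo_buckets = kmem_buckets_create("foo", SLAB_ACCOUNT, 0, 0, NULL);
            return foo_buckets ? 0 : -ENOMEM;
    }
    subsys_initcall(foo_buckets_init);

    void *foo_alloc(size_t len)
    {
            /* Dynamically sized allocations land in "foo", not kmalloc-*. */
            return kmem_buckets_alloc(foo_buckets, len, GFP_KERNEL);
    }

    void *foo_alloc_large(size_t len)
    {
            /* kvmalloc-style variant (patch 7) for possibly-large sizes. */
            return kmem_buckets_valloc(foo_buckets, len, GFP_KERNEL);
    }

The effect is that spraying user-controlled sizes through foo_alloc() can
only collide with other "foo" objects, not with unrelated fixed size
objects in the global kmalloc caches.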

After the implementation come two example patches showing how this could be
used for some repeat "offenders" that get used in exploits. There are more to
be isolated beyond just these. Repeating the commit log for patch 8 here (a
sketch of the resulting shape follows the quoted log):

    The msg subsystem is a common target for exploiting[1][2][3][4][5][6]
    use-after-free type confusion flaws in the kernel for both read and
    write primitives. Avoid having a user-controlled size cache share the
    global kmalloc allocator by using a separate set of kmalloc buckets.

    Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1]
    Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2]
    Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3]
    Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4]
    Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5]
    Link: https://zplin.me/papers/ELOISE.pdf [6]
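
For concreteness, the resulting shape in ipc/msgutil.c looks roughly like
the sketch below. The kmem_buckets_create() arguments shown are assumptions
for illustration; the real conversion is in patch 8:

    #include <linux/init.h>
    #include <linux/msg.h>
    #include <linux/slab.h>

    static kmem_buckets *msg_buckets __ro_after_init;

    static int __init init_msg_buckets(void)
    {
            /* Dedicated sized caches for msg_msg payloads (args assumed). */
            msg_buckets = kmem_buckets_create("msg_msg", SLAB_ACCOUNT,
                                              sizeof(struct msg_msg),
                                              DATALEN_MSG, NULL);
            return 0;
    }
    subsys_initcall(init_msg_buckets);

The allocations in alloc_msg() then go through
kmem_buckets_alloc(msg_buckets, ...) instead of kmalloc(), so user-controlled
msg_msg allocations no longer share caches with unrelated fixed size kernel
objects.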

-Kees

 v2: significant rewrite, generalized the buckets type, added kvmalloc-style interface
 v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@kernel.org/

Kees Cook (9):
  slab: Introduce kmem_buckets typedef
  slub: Plumb kmem_buckets into __do_kmalloc_node()
  util: Introduce __kvmalloc_node() that can take kmem_buckets argument
  slab: Introduce kmem_buckets_create()
  slab: Introduce kmem_buckets_alloc()
  slub: Introduce kmem_buckets_alloc_track_caller()
  slab: Introduce kmem_buckets_valloc()
  ipc, msg: Use dedicated slab buckets for alloc_msg()
  mm/util: Use dedicated slab buckets for memdup_user()

 include/linux/slab.h | 50 +++++++++++++++++++++-------
 ipc/msgutil.c        | 13 +++++++-
 lib/fortify_kunit.c  |  2 +-
 mm/slab.h            |  6 ++--
 mm/slab_common.c     | 77 ++++++++++++++++++++++++++++++++++++++++++--
 mm/slub.c            | 14 ++++----
 mm/util.c            | 23 +++++++++----
 7 files changed, 154 insertions(+), 31 deletions(-)

-- 
2.34.1


