linux-mm.kvack.org archive mirror
* [PATCH v4 0/3] Secure prandom_u32 invocations
@ 2023-01-13 21:33 david.keisarschm
  2023-01-13 21:37 ` [PATCH v4 2/3] slab_allocator: mm/slab_common.c: Replace invocation of weak PRNG david.keisarschm
  0 siblings, 1 reply; 2+ messages in thread
From: david.keisarschm @ 2023-01-13 21:33 UTC (permalink / raw)
  To: linux-kernel
  Cc: Jason, linux-mm, akpm, vbabka, 42.hyeyoo, mingo, hpa, keescook,
	David Keisar Schmidt, aksecurity, ilay.bahat1

From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>

Hi,

The security improvements for prandom_u32 made in commit c51f8f88d705
(October 2020) and commit d4150779e60f (May 2022) did not cover the cases
in which prandom_bytes_state() and prandom_u32_state() are used.

Specifically, this weak randomization takes place in three cases:
    1.	mm/slab.c
    2.	mm/slab_common.c
    3.	arch/x86/mm/kaslr.c

The first two invocations (mm/slab.c, mm/slab_common.c) randomize the
slab allocator freelists, to make sure attackers cannot obtain
information on the heap state.

The last invocation, in arch/x86/mm/kaslr.c, randomizes the virtual
address space of kernel memory regions. There, the use of
prandom_bytes_state() is justified, since it has a dedicated state and
draws only three pseudo-random values; however, prandom_seed_state()
uses only 32 of the 64 bits of the seed.
Hence, we have made these randomizations stronger, switching the
invocation of prandom_seed_state to a more secure version, which
we implemented inside kaslr.c.
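The 32-bit bottleneck can be seen in a small sketch. The fold below
mirrors the seed mixing that prandom_seed_state() performs in
include/linux/prandom.h as of these commits (treat it as illustrative,
not as the authoritative kernel source): all four words of the PRNG
state are derived from this single 32-bit value, so at most 32 bits of
the 64-bit seed ever reach the generator.

```c
#include <stdint.h>

/* Illustrative copy of the 32-bit fold applied by prandom_seed_state()
 * to its u64 seed before the per-word expansion.  The cast to uint32_t
 * plays the role of the kernel's "& 0xffffffffUL" mask. */
static uint32_t fold_seed(uint64_t seed)
{
	/* (seed >> 32) contributes the high half, (seed << 10) and seed
	 * the low half; everything above bit 31 is then discarded. */
	return (uint32_t)((seed >> 32) ^ (seed << 10) ^ seed);
}
```

Flipping bit 63 and bit 31 of the seed together leaves the fold
unchanged (each flip toggles fold bit 31, so they cancel), i.e. two
distinct 64-bit seeds can seed identical PRNG states.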
---
Changes since v3:
* arch/x86/mm/kaslr.c: secured the way the region offsets are generated in
  the seeding state by adding a revised version of prandom_seed_state
* edited commit messages

Changes since v2:
* edited commit message.
* replaced instances of get_random_u32 with get_random_u32_below
      in mm/slab.c, mm/slab_common.c

Regards,

David Keisar Schmidt (3):
  Replace invocation of weak PRNG in mm/slab.c
  Replace invocation of weak PRNG inside mm/slab_common.c
  Add 64bits prandom_seed_state to arch/x86/mm/kaslr.c

 arch/x86/mm/kaslr.c | 26 +++++++++++++++++++++++++-
 mm/slab.c           | 25 ++++++++++---------------
 mm/slab_common.c    | 11 +++--------
 3 files changed, 38 insertions(+), 24 deletions(-)

-- 
2.38.0




* [PATCH v4 2/3] slab_allocator: mm/slab_common.c: Replace invocation of weak PRNG
  2023-01-13 21:33 [PATCH v4 0/3] Secure prandom_u32 invocations david.keisarschm
@ 2023-01-13 21:37 ` david.keisarschm
  0 siblings, 0 replies; 2+ messages in thread
From: david.keisarschm @ 2023-01-13 21:37 UTC (permalink / raw)
  To: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo
  Cc: Jason, linux-mm, David Keisar Schmidt, aksecurity, ilay.bahat1

From: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>

The slab allocator freelist randomization in slab_common.c uses the
prandom_u32 PRNG. It was added to prevent attackers from obtaining
information on the heap state.

However, this PRNG turned out to be weak, as noted in commit c51f8f88d705.
To fix it, we replace the invocation of prandom_u32_state() with
get_random_u32_below(), which draws from a strong generator.

Since the raw value was reduced with a modulo operation right after, in
the Fisher-Yates shuffle, we use get_random_u32_below() to obtain a
uniformly distributed value in the requested range.
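For intuition, the bias that a plain modulo introduces can be
demonstrated on an 8-bit generator range, which is small enough to
enumerate exhaustively (the kernel case is the same argument over 2^32):

```c
#include <stdint.h>

/* Count how often "v % bound == residue" holds over all 256 outputs of
 * a uniform 8-bit generator.  Whenever bound does not divide 256, the
 * low residues occur one time more often than the high ones, i.e. the
 * shuffle would be biased.  get_random_u32_below() exists to return an
 * unbiased value in [0, bound) instead. */
static unsigned int residue_count(unsigned int bound, unsigned int residue)
{
	unsigned int v, n = 0;

	for (v = 0; v < 256; v++)
		if (v % bound == residue)
			n++;
	return n;
}
```

With bound 6, residues 0-3 occur 43 times but residues 4-5 only 42
times; with a power-of-two bound such as 4, every residue occurs
exactly 64 times and the modulo is harmless.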

Signed-off-by: David Keisar Schmidt <david.keisarschm@mail.huji.ac.il>
---
Changes since v3:
* edited commit message.

Changes since v2:
* replaced instances of get_random_u32 with get_random_u32_below
    in mm/slab_common.c.


 mm/slab_common.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 0042fb273..e254b2f55 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1130,7 +1130,7 @@ EXPORT_SYMBOL(kmalloc_large_node);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
-static void freelist_randomize(struct rnd_state *state, unsigned int *list,
+static void freelist_randomize(unsigned int *list,
 			       unsigned int count)
 {
 	unsigned int rand;
@@ -1141,8 +1141,7 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
 
 	/* Fisher-Yates shuffle */
 	for (i = count - 1; i > 0; i--) {
-		rand = prandom_u32_state(state);
-		rand %= (i + 1);
+		rand = get_random_u32_below(i + 1);
 		swap(list[i], list[rand]);
 	}
 }
@@ -1151,7 +1150,6 @@ static void freelist_randomize(struct rnd_state *state, unsigned int *list,
 int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
 				    gfp_t gfp)
 {
-	struct rnd_state state;
 
 	if (count < 2 || cachep->random_seq)
 		return 0;
@@ -1160,10 +1158,7 @@ int cache_random_seq_create(struct kmem_cache *cachep, unsigned int count,
 	if (!cachep->random_seq)
 		return -ENOMEM;
 
-	/* Get best entropy at this stage of boot */
-	prandom_seed_state(&state, get_random_long());
-
-	freelist_randomize(&state, cachep->random_seq, count);
+	freelist_randomize(cachep->random_seq, count);
 	return 0;
 }
 
-- 
2.38.0



