linux-mm.kvack.org archive mirror
* [PATCH] mm/slab: Achieve better kmalloc caches randomization in kvmalloc
@ 2025-01-22  7:48 GONG Ruiqi
  2025-01-22 16:02 ` Christoph Lameter (Ampere)
  0 siblings, 1 reply; 4+ messages in thread
From: GONG Ruiqi @ 2025-01-22  7:48 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Kees Cook
  Cc: Tamas Koczka, Roman Gushchin, Hyeonggon Yoo, Xiu Jianfeng,
	linux-mm, linux-hardening, linux-kernel

As revealed by this writeup[1], __kmalloc_node (now renamed to
__kmalloc_node_noprof) is an exported symbol that will never be inlined,
so calling it from kvmalloc_node (now __kvmalloc_node_noprof) makes the
_RET_IP_ inside it always point to the same address:

    upper_caller
        kvmalloc
        kvmalloc_node
        kvmalloc_node_noprof
        __kvmalloc_node_noprof	<-- all macros all the way down here
            __kmalloc_node_noprof
                __do_kmalloc_node(.., _RET_IP_)
            ...			<-- _RET_IP_ points to

That means every kmalloc invoked via kvmalloc feeds the same caller
address into the cache randomization hash (CONFIG_RANDOM_KMALLOC_CACHES),
so all such allocations land in the same cache copy, rendering this
hardening ineffective for them.
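
For reference, the randomized cache selection hashes the caller address
together with a boot-time seed. A simplified sketch, modeled on
kmalloc_type() in include/linux/slab.h (names and constants are
approximate and may differ between kernel versions):

    /*
     * Sketch: how CONFIG_RANDOM_KMALLOC_CACHES picks one of the
     * kmalloc-normal cache copies. random_kmalloc_seed is fixed at
     * boot, so the only per-site entropy is "caller"; if caller is
     * always the same _RET_IP_, the chosen copy never varies.
     */
    static inline unsigned int random_kmalloc_index(unsigned long caller)
    {
    	return hash_64(caller ^ random_kmalloc_seed,
    		       ilog2(RANDOM_KMALLOC_CACHES_NR + 1));
    }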

The root cause of this problem, IMHO, is that _RET_IP_ alone cannot
identify the actual allocation site when kmalloc is called inside
wrappers or helper functions, and I believe similar cases exist in
other functions. I haven't thought of a good general solution for this,
so for now let's solve this specific case first.
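
To see the collapse in isolation, here is a minimal userspace demo (a
hypothetical sketch, not kernel code) where an out-of-line middle layer
makes __builtin_return_address(0) -- the mechanism behind _RET_IP_ --
identical for two distinct call sites:

    #include <stdio.h>

    /* Captures its caller's return address, the way
     * __kmalloc_node_noprof() evaluates _RET_IP_. */
    __attribute__((noinline))
    static void record(const char *tag)
    {
    	printf("%s: ip = %p\n", tag, __builtin_return_address(0));
    }

    /* Out-of-line wrapper standing in for __kvmalloc_node_noprof():
     * it calls record() from exactly one call site, so the captured
     * address is one fixed location inside this function, no matter
     * which top-level site invoked it. */
    __attribute__((noinline))
    static void kvmalloc_like(const char *tag)
    {
    	record(tag);
    }

    int main(void)
    {
    	kvmalloc_like("site A");	/* prints the same ip ... */
    	kvmalloc_like("site B");	/* ... as this one */
    	return 0;
    }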

For __kvmalloc_node_noprof, replace __kmalloc_node_noprof with an inline
version, so that _RET_IP_ captures the return address of the kvmalloc
invocation and differentiates each kvmalloc call site:

    upper_caller
        kvmalloc
        kvmalloc_node
        kvmalloc_node_noprof
        __kvmalloc_node_noprof	<-- all macros all the way down here
            __kmalloc_node_inline(.., _RET_IP_)
        ...			<-- _RET_IP_ points to

Thanks to Tamás Koczka for the report and discussion!

Links:
[1] https://github.com/google/security-research/pull/83/files#diff-1604319b55a48c39a210ee52034ed7ff5b9cdc3d704d2d9e34eb230d19fae235R200

Signed-off-by: GONG Ruiqi <gongruiqi1@huawei.com>
---
 include/linux/slab.h | 3 +++
 mm/slub.c            | 7 +++++++
 mm/util.c            | 4 ++--
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 10a971c2bde3..e03ca4a95511 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -834,6 +834,9 @@ void *__kmalloc_large_noprof(size_t size, gfp_t flags)
 void *__kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
 				__assume_page_alignment __alloc_size(1);
 
+void *__kmalloc_node_inline(size_t size, kmem_buckets *b, gfp_t flags,
+				int node, unsigned long caller);
+
 /**
  * kmalloc - allocate kernel memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slub.c b/mm/slub.c
index c2151c9fee22..ec75070345c6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4319,6 +4319,13 @@ void *__kmalloc_node_track_caller_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flag
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller_noprof);
 
+__always_inline void *__kmalloc_node_inline(size_t size, kmem_buckets *b,
+					    gfp_t flags, int node,
+					    unsigned long caller)
+{
+	return __do_kmalloc_node(size, b, flags, node, caller);
+}
+
 void *__kmalloc_cache_noprof(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, NUMA_NO_NODE,
diff --git a/mm/util.c b/mm/util.c
index 60aa40f612b8..3910d1d1f595 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -642,9 +642,9 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
 	 * It doesn't really make sense to fallback to vmalloc for sub page
 	 * requests
 	 */
-	ret = __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, b),
+	ret = __kmalloc_node_inline(size, PASS_BUCKET_PARAM(b),
 				    kmalloc_gfp_adjust(flags, size),
-				    node);
+				    node, _RET_IP_);
 	if (ret || size <= PAGE_SIZE)
 		return ret;
 
-- 
2.25.1




* Re: [PATCH] mm/slab: Achieve better kmalloc caches randomization in kvmalloc
  2025-01-22  7:48 [PATCH] mm/slab: Achieve better kmalloc caches randomization in kvmalloc GONG Ruiqi
@ 2025-01-22 16:02 ` Christoph Lameter (Ampere)
  2025-01-24 15:19   ` Vlastimil Babka
  0 siblings, 1 reply; 4+ messages in thread
From: Christoph Lameter (Ampere) @ 2025-01-22 16:02 UTC (permalink / raw)
  To: GONG Ruiqi
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Vlastimil Babka, Kees Cook, Tamas Koczka, Roman Gushchin,
	Hyeonggon Yoo, Xiu Jianfeng, linux-mm, linux-hardening,
	linux-kernel

On Wed, 22 Jan 2025, GONG Ruiqi wrote:

>
> +void *__kmalloc_node_inline(size_t size, kmem_buckets *b, gfp_t flags,
> +				int node, unsigned long caller);
> +


Huh? Is this inline? Where is the body of the function?

> diff --git a/mm/slub.c b/mm/slub.c
> index c2151c9fee22..ec75070345c6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4319,6 +4319,13 @@ void *__kmalloc_node_track_caller_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flag
>  }
>  EXPORT_SYMBOL(__kmalloc_node_track_caller_noprof);
>
> +__always_inline void *__kmalloc_node_inline(size_t size, kmem_buckets *b,
> +					    gfp_t flags, int node,
> +					    unsigned long caller)
> +{
> +	return __do_kmalloc_node(size, b, flags, node, caller);
> +}
> +

inline functions need to be defined in the header file AFAICT.
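
Schematically (hypothetical file layout, just to illustrate the
translation-unit boundary):

    /* include/linux/slab.h: other files see only the prototype */
    void *__kmalloc_node_inline(size_t size, kmem_buckets *b, gfp_t flags,
    				int node, unsigned long caller);

    /* mm/slub.c: the __always_inline body lives here, so it can only
     * be inlined into callers within slub.c itself */

    /* mm/util.c: sees just the prototype, so the compiler emits an
     * ordinary out-of-line call (unless LTO intervenes) and the
     * __always_inline annotation has no effect across this boundary */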





* Re: [PATCH] mm/slab: Achieve better kmalloc caches randomization in kvmalloc
  2025-01-22 16:02 ` Christoph Lameter (Ampere)
@ 2025-01-24 15:19   ` Vlastimil Babka
  2025-01-26  7:17     ` GONG Ruiqi
  0 siblings, 1 reply; 4+ messages in thread
From: Vlastimil Babka @ 2025-01-24 15:19 UTC (permalink / raw)
  To: Christoph Lameter (Ampere), GONG Ruiqi
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Kees Cook, Tamas Koczka, Roman Gushchin, Hyeonggon Yoo,
	Xiu Jianfeng, linux-mm, linux-hardening, linux-kernel,
	Uladzislau Rezki (Sony)

On 1/22/25 17:02, Christoph Lameter (Ampere) wrote:
> On Wed, 22 Jan 2025, GONG Ruiqi wrote:
> 
>>
>> +void *__kmalloc_node_inline(size_t size, kmem_buckets *b, gfp_t flags,
>> +				int node, unsigned long caller);
>> +
> 
> 
> Huh? Is this inline? Where is the body of the function?
> 
>> diff --git a/mm/slub.c b/mm/slub.c
>> index c2151c9fee22..ec75070345c6 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -4319,6 +4319,13 @@ void *__kmalloc_node_track_caller_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flag
>>  }
>>  EXPORT_SYMBOL(__kmalloc_node_track_caller_noprof);
>>
>> +__always_inline void *__kmalloc_node_inline(size_t size, kmem_buckets *b,
>> +					    gfp_t flags, int node,
>> +					    unsigned long caller)
>> +{
>> +	return __do_kmalloc_node(size, b, flags, node, caller);
>> +}
>> +
> 
> inline functions need to be defined in the header file AFAICT.

Yeah, this could possibly inline only with LTO (dunno if it does). But the
real difference is passing __kvmalloc_node_noprof()'s _RET_IP_ as caller.

Maybe instead of this new wrapper we could just move
__kvmalloc_node_noprof() to slub.c and access __do_kmalloc_node() directly.
For consistency also kvfree() and whatever necessary dependencies. The
placement in util.c is kinda weird anyway and IIRC we already moved
krealloc() due to needing deeper involvement with slab internals. The
vmalloc part of kvmalloc/kvfree is kinda a self-contained fallback that can
be just called from slub.c as well as from util.c.
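
Roughly, a hypothetical sketch of that direction (not a tested patch;
the vmalloc fallback and the kvfree() move are omitted):

    /* mm/slub.c: __kvmalloc_node_noprof() moved here so it can call
     * __do_kmalloc_node() directly with its own _RET_IP_ */
    void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags,
    			     int node)
    {
    	void *ret;

    	/* kmalloc first; the caller ip now identifies the kvmalloc
    	 * call site, no extra wrapper needed */
    	ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
    				kmalloc_gfp_adjust(flags, size),
    				node, _RET_IP_);
    	if (ret || size <= PAGE_SIZE)
    		return ret;

    	/* ... otherwise fall back to vmalloc, as before ... */
    }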





* Re: [PATCH] mm/slab: Achieve better kmalloc caches randomization in kvmalloc
  2025-01-24 15:19   ` Vlastimil Babka
@ 2025-01-26  7:17     ` GONG Ruiqi
  0 siblings, 0 replies; 4+ messages in thread
From: GONG Ruiqi @ 2025-01-26  7:17 UTC (permalink / raw)
  To: Vlastimil Babka, Christoph Lameter (Ampere)
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Kees Cook, Tamas Koczka, Roman Gushchin, Hyeonggon Yoo,
	Xiu Jianfeng, linux-mm, linux-hardening, linux-kernel,
	Uladzislau Rezki (Sony)



On 2025/01/24 23:19, Vlastimil Babka wrote:
> On 1/22/25 17:02, Christoph Lameter (Ampere) wrote:
>> On Wed, 22 Jan 2025, GONG Ruiqi wrote:
>>
>>>
>>> +void *__kmalloc_node_inline(size_t size, kmem_buckets *b, gfp_t flags,
>>> +				int node, unsigned long caller);
>>> +
>>
>>
>> Huh? Is this inline? Where is the body of the function?
>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index c2151c9fee22..ec75070345c6 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -4319,6 +4319,13 @@ void *__kmalloc_node_track_caller_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flag
>>>  }
>>>  EXPORT_SYMBOL(__kmalloc_node_track_caller_noprof);
>>>
>>> +__always_inline void *__kmalloc_node_inline(size_t size, kmem_buckets *b,
>>> +					    gfp_t flags, int node,
>>> +					    unsigned long caller)
>>> +{
>>> +	return __do_kmalloc_node(size, b, flags, node, caller);
>>> +}
>>> +
>>
>> inline functions need to be defined in the header file AFAICT.
> 
> Yeah, this could possibly inline only with LTO (dunno if it does). But the
> real difference is passing __kvmalloc_node_noprof()'s _RET_IP_ as caller.
> 
> Maybe instead of this new wrapper we could just move
> __kvmalloc_node_noprof() to slub.c and access __do_kmalloc_node() directly.
> For consistency also kvfree() and whatever necessary dependencies. The
> placement in util.c is kinda weird anyway and IIRC we already moved
> krealloc() due to needing deeper involvement with slab internals. The
> vmalloc part of kvmalloc/kvfree is kinda a self-contained fallback that can
> be just called from slub.c as well as from util.c.

Thanks for the advice!

I will send a V2 based on moving __kvmalloc_node_noprof() and kvfree()
to slub.c as soon as possible.

BR,
Ruiqi


