* [PATCH V2] mm/slab: ensure all metadata in slab object are word-aligned
@ 2025-10-27 12:00 Harry Yoo
  2025-10-27 12:07 ` Harry Yoo
  2025-10-29 14:36 ` Andrey Ryabinin
  0 siblings, 2 replies; 3+ messages in thread
From: Harry Yoo @ 2025-10-27 12:00 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: David Rientjes, Alexander Potapenko, Roman Gushchin,
	Andrew Morton, Vincenzo Frascino, Harry Yoo, Andrey Ryabinin,
	Feng Tang, Christoph Lameter, Dmitry Vyukov, Andrey Konovalov,
	linux-mm, Pedro Falcato, linux-kernel, kasan-dev, stable

When the SLAB_STORE_USER debug flag is used, any metadata placed after
the original kmalloc request size (orig_size) is not properly aligned
on 64-bit architectures because its type is unsigned int. When both KASAN
and SLAB_STORE_USER are enabled, kasan_alloc_meta is misaligned.
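For example, the offset is word-aligned up to and including the two
struct track records (struct track contains unsigned long members, so its
size is a multiple of 8 on 64-bit), and the 4-byte orig_size then leaves
the KASAN metadata that follows at an offset of 4 modulo 8.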

Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
are assumed to require 64-bit accesses to be 64-bit aligned.
See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
"ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.

Because not all architectures support unaligned memory accesses,
ensure that all metadata (track, orig_size, kasan_{alloc,free}_meta)
in a slab object are word-aligned. struct track, kasan_{alloc,free}_meta
are aligned by adding __aligned(__alignof__(unsigned long)).

For orig_size, use ALIGN(sizeof(unsigned int), sizeof(unsigned long)) to
make clear that its size remains unsigned int but it must be aligned to
a word boundary. On 64-bit architectures, this reserves 8 bytes for
orig_size, which is acceptable since kmalloc's original request size
tracking is intended for debugging rather than production use.

Cc: stable@vger.kernel.org
Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---

v1 -> v2:
- Added Andrey's Acked-by.
- Added references to HAVE_64BIT_ALIGNED_ACCESS and the commit that
  resurrected it.
- Used __alignof__() instead of sizeof(), as suggested by Pedro (off-list).
  Note: either __alignof__() or sizeof() produces exactly the same mm/slub.o,
  so there's no functional difference.

Thanks!

 mm/kasan/kasan.h |  4 ++--
 mm/slub.c        | 16 +++++++++++-----
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e64..b86b6e9f456a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -265,7 +265,7 @@ struct kasan_alloc_meta {
 	struct kasan_track alloc_track;
 	/* Free track is stored in kasan_free_meta. */
 	depot_stack_handle_t aux_stack[2];
-};
+} __aligned(__alignof__(unsigned long));
 
 struct qlist_node {
 	struct qlist_node *next;
@@ -289,7 +289,7 @@ struct qlist_node {
 struct kasan_free_meta {
 	struct qlist_node quarantine_link;
 	struct kasan_track free_track;
-};
+} __aligned(__alignof__(unsigned long));
 
 #endif /* CONFIG_KASAN_GENERIC */
 
diff --git a/mm/slub.c b/mm/slub.c
index a585d0ac45d4..462a39d57b3a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -344,7 +344,7 @@ struct track {
 	int cpu;		/* Was running on cpu */
 	int pid;		/* Pid context */
 	unsigned long when;	/* When did the operation occur */
-};
+} __aligned(__alignof__(unsigned long));
 
 enum track_item { TRACK_ALLOC, TRACK_FREE };
 
@@ -1196,7 +1196,7 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p)
 		off += 2 * sizeof(struct track);
 
 	if (slub_debug_orig_size(s))
-		off += sizeof(unsigned int);
+		off += ALIGN(sizeof(unsigned int), __alignof__(unsigned long));
 
 	off += kasan_metadata_size(s, false);
 
@@ -1392,7 +1392,8 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 		off += 2 * sizeof(struct track);
 
 		if (s->flags & SLAB_KMALLOC)
-			off += sizeof(unsigned int);
+			off += ALIGN(sizeof(unsigned int),
+				     __alignof__(unsigned long));
 	}
 
 	off += kasan_metadata_size(s, false);
@@ -7820,9 +7821,14 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
 		 */
 		size += 2 * sizeof(struct track);
 
-		/* Save the original kmalloc request size */
+		/*
+		 * Save the original kmalloc request size.
+		 * Although the request size is an unsigned int,
+		 * make sure it is aligned to a word boundary.
+		 */
 		if (flags & SLAB_KMALLOC)
-			size += sizeof(unsigned int);
+			size += ALIGN(sizeof(unsigned int),
+				      __alignof__(unsigned long));
 	}
 #endif
 
-- 
2.43.0




* Re: [PATCH V2] mm/slab: ensure all metadata in slab object are word-aligned
  2025-10-27 12:00 [PATCH V2] mm/slab: ensure all metadata in slab object are word-aligned Harry Yoo
@ 2025-10-27 12:07 ` Harry Yoo
  2025-10-29 14:36 ` Andrey Ryabinin
  1 sibling, 0 replies; 3+ messages in thread
From: Harry Yoo @ 2025-10-27 12:07 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: David Rientjes, Alexander Potapenko, Roman Gushchin,
	Andrew Morton, Vincenzo Frascino, Andrey Ryabinin, Feng Tang,
	Christoph Lameter, Dmitry Vyukov, Andrey Konovalov, linux-mm,
	Pedro Falcato, linux-kernel, kasan-dev, stable

On Mon, Oct 27, 2025 at 09:00:28PM +0900, Harry Yoo wrote:
> When the SLAB_STORE_USER debug flag is used, any metadata placed after
> the original kmalloc request size (orig_size) is not properly aligned
> on 64-bit architectures because its type is unsigned int. When both KASAN
> and SLAB_STORE_USER are enabled, kasan_alloc_meta is misaligned.
> 
> Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> are assumed to require 64-bit accesses to be 64-bit aligned.
> See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> 
> Because not all architectures support unaligned memory accesses,
> ensure that all metadata (track, orig_size, kasan_{alloc,free}_meta)
> in a slab object are word-aligned. struct track, kasan_{alloc,free}_meta
> are aligned by adding __aligned(__alignof__(unsigned long)).
> 
> For orig_size, use ALIGN(sizeof(unsigned int), sizeof(unsigned long)) to
                                                 ^ Uh, here I intended
                                                 to say:
                                                 __alignof__(unsigned long))

> make clear that its size remains unsigned int but it must be aligned to
> a word boundary. On 64-bit architectures, this reserves 8 bytes for
> orig_size, which is acceptable since kmalloc's original request size
> tracking is intended for debugging rather than production use.
> 
> Cc: stable@vger.kernel.org
> Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> ---
> 
> v1 -> v2:
> - Added Andrey's Acked-by.
> - Added references to HAVE_64BIT_ALIGNED_ACCESS and the commit that
>   resurrected it.
> - Used __alignof__() instead of sizeof(), as suggested by Pedro (off-list).
>   Note: either __alignof__() or sizeof() produces exactly the same mm/slub.o,
>   so there's no functional difference.
> 
> Thanks!
> 
>  mm/kasan/kasan.h |  4 ++--
>  mm/slub.c        | 16 +++++++++++-----
>  2 files changed, 13 insertions(+), 7 deletions(-)

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH V2] mm/slab: ensure all metadata in slab object are word-aligned
  2025-10-27 12:00 [PATCH V2] mm/slab: ensure all metadata in slab object are word-aligned Harry Yoo
  2025-10-27 12:07 ` Harry Yoo
@ 2025-10-29 14:36 ` Andrey Ryabinin
  1 sibling, 0 replies; 3+ messages in thread
From: Andrey Ryabinin @ 2025-10-29 14:36 UTC (permalink / raw)
  To: Harry Yoo, Vlastimil Babka
  Cc: David Rientjes, Alexander Potapenko, Roman Gushchin,
	Andrew Morton, Vincenzo Frascino, Feng Tang, Christoph Lameter,
	Dmitry Vyukov, Andrey Konovalov, linux-mm, Pedro Falcato,
	linux-kernel, kasan-dev, stable



On 10/27/25 1:00 PM, Harry Yoo wrote:
> When the SLAB_STORE_USER debug flag is used, any metadata placed after
> the original kmalloc request size (orig_size) is not properly aligned
> on 64-bit architectures because its type is unsigned int. When both KASAN
> and SLAB_STORE_USER are enabled, kasan_alloc_meta is misaligned.
> 

kasan_alloc_meta is properly aligned. It consists of 4 32-bit words,
so the proper alignment is 32-bit regardless of architecture bitness.

kasan_free_meta, however, requires 'unsigned long' alignment and could be
misaligned if placed at a 32-bit boundary on a 64-bit arch.
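(In the kasan.h hunk above, struct kasan_free_meta embeds a struct
qlist_node, whose 'next' pointer is what already gives it pointer-size
alignment.)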

> Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> are assumed to require 64-bit accesses to be 64-bit aligned.
> See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> 
> Because not all architectures support unaligned memory accesses,
> ensure that all metadata (track, orig_size, kasan_{alloc,free}_meta)
> in a slab object are word-aligned. struct track, kasan_{alloc,free}_meta
> are aligned by adding __aligned(__alignof__(unsigned long)).
> 

The __aligned() attribute ensures nothing here. It tells the compiler what
alignment to expect and affects compiler-controlled placement of the struct
in memory (e.g. stack/.bss/.data), but it can't enforce placement in
dynamic memory.

Also, for struct kasan_free_meta and struct track, alignof(unsigned long) is
already dictated by the C standard, so adding this __aligned() has zero effect.
And there is no reason to increase the alignment requirement for the
kasan_alloc_meta struct.
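
To illustrate with a minimal userspace sketch (not kernel code, everything
below is made up for illustration): the attribute is honored for
compiler-placed objects, but a struct overlaid at an arbitrary offset inside
a buffer we carve up ourselves is only as aligned as that offset:

	#include <stdio.h>
	#include <stdint.h>

	struct meta {
		unsigned long x;
	} __attribute__((aligned(sizeof(unsigned long))));

	static char buf[64] __attribute__((aligned(16)));

	int main(void)
	{
		/* Compiler-placed object: the attribute is honored. */
		static struct meta m;
		/* Overlaid by hand at offset 4: misaligned despite the attribute. */
		struct meta *p = (struct meta *)(buf + 4);

		printf("&m mod 8 = %lu\n", (unsigned long)((uintptr_t)&m % 8));
		printf("p  mod 8 = %lu\n", (unsigned long)((uintptr_t)p % 8));
		return 0;
	}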

> For orig_size, use ALIGN(sizeof(unsigned int), sizeof(unsigned long)) to
> make clear that its size remains unsigned int but it must be aligned to
> a word boundary. On 64-bit architectures, this reserves 8 bytes for
> orig_size, which is acceptable since kmalloc's original request size
> tracking is intended for debugging rather than production use.

I would suggest using 'unsigned long' for orig_size. It changes nothing for
32-bit, and it shouldn't increase memory usage on 64-bit since we currently
waste that space anyway to align the next object to ARCH_KMALLOC_MINALIGN.
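
A rough sketch of that direction (hypothetical and untested, based only on
the calculate_sizes() hunk quoted earlier in the thread):

	/* in calculate_sizes(): */
	if (flags & SLAB_KMALLOC)
		size += sizeof(unsigned long);

	/* ...with the helpers that set/read orig_size switched to access
	 * an unsigned long at that offset as well. */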

