From: Catalin Marinas <catalin.marinas@arm.com>
To: Petr Tesarik <ptesarik@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Feng Tang <feng.tang@linux.alibaba.com>,
Harry Yoo <harry.yoo@oracle.com>, Peng Fan <peng.fan@nxp.com>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
David Rientjes <rientjes@google.com>,
Christoph Lameter <cl@linux.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: slub - extended kmalloc redzone and dma alignment
Date: Tue, 8 Apr 2025 16:07:19 +0100
Message-ID: <Z_U7p78VCoazBIOi@arm.com>
In-Reply-To: <20250408072732.32db7809@mordecai>
On Tue, Apr 08, 2025 at 07:27:32AM +0200, Petr Tesarik wrote:
> On Mon, 7 Apr 2025 18:12:09 +0100
> Catalin Marinas <catalin.marinas@arm.com> wrote:
> > Thanks for looping me in. I'm just catching up with this thread.
> >
> > On Mon, Apr 07, 2025 at 09:54:41AM +0200, Vlastimil Babka wrote:
> > > On 4/7/25 09:21, Feng Tang wrote:
> > > > On Sun, Apr 06, 2025 at 10:02:40PM +0800, Feng Tang wrote:
> > > > [...]
> > > >> > I can remember this series, as well as my confusion why
> > > >> > 192-byte kmalloc caches were missing on arm64.
> > > >> >
> > > >> > Nevertheless, I believe ARCH_DMA_MINALIGN is required to avoid
> > > >> > putting a DMA buffer on the same cache line as some other data
> > > >> > that might be _written_ by the CPU while the corresponding
> > > >> > main memory is modified by another bus-mastering device.
> > > >> >
> > > >> > Consider this layout:
> > > >> >
> > > >> > ... | DMA buffer | other data | ...
> > > >> >     ^                         ^
> > > >> >     +-------------------------+-- cache line boundaries
> > > >> >
> > > >> > When you prepare for DMA, you make sure that the DMA buffer is
> > > >> > not cached by the CPU, so you flush the cache line (from all
> > > >> > levels). Then you tell the device to write into the DMA
> > > >> > buffer. However, before the device finishes the DMA
> > > >> > transaction, the CPU accesses "other data", loading this cache
> > > >> > line from main memory with partial results. Worse, if the CPU
> > > >> > writes to "other data", it may write the cache line back into
> > > >> > main memory, racing with the device writing to DMA buffer, and
> > > >> > you end up with corrupted data in DMA buffer.
> >
> > Yes, cache evictions from 'other data' can overwrite the DMA. Another
> > problem: once the DMA has completed, the kernel does a cache
> > invalidation to remove any speculatively loaded cache lines from the
> > DMA buffer, but that would also invalidate 'other data', potentially
> > corrupting it if it was dirty.
> >
> > So it's not safe to DMA into buffers smaller than ARCH_DMA_MINALIGN
> > (or not aligned to it).
>
> It's not safe to DMA into buffers that share a CPU cache line with other
> data, which could be before or after the DMA buffer, of course.
Was the original problem reported for an arm64 platform? It wasn't clear
to me from the thread.
For arm64, the only problem is if the other data is modified _while_ the
transfer is taking place. Otherwise, when mapping the buffer for the
device, the kernel cleans the caches, which writes the other data back
to RAM; see arch_sync_dma_for_device(). This is non-destructive w.r.t.
the data in both the DMA buffer and the red zone.
After the transfer (FROM_DEVICE), arch_sync_dma_for_cpu() invalidates
the caches, including the lines covering the other data, but since
those were already written to RAM in the for_device step, they'd be
read back into the cache on access without any corruption.
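Roughly, in simplified form (quoting from memory, so take the exact
arm64 helper names as approximate):

void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir)
{
	unsigned long start = (unsigned long)phys_to_virt(paddr);

	/* clean (write back) to PoC for all directions; non-destructive,
	 * any red zone sharing a cache line ends up in RAM intact */
	dcache_clean_poc(start, start + size);
}

void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
			   enum dma_data_direction dir)
{
	unsigned long start = (unsigned long)phys_to_virt(paddr);

	if (dir == DMA_TO_DEVICE)
		return;

	/* drop any (speculatively) loaded lines; the red zone content
	 * was already pushed to RAM by the for_device clean above */
	dcache_inval_poc(start, start + size);
}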
Of course, this assumes that the device keeps within the limits and does
not write beyond the DMA buffer into the red zone. If it does, the
buffer overflow warning is valid.
While I think we are ok for arm64, other architectures may invalidate
the caches in arch_sync_dma_for_device(), which would discard the red
zone data. A quick grep for arch_sync_dma_for_device() shows several
architectures invalidating the caches in the FROM_DEVICE case.
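I.e. the problematic pattern is something like this (an illustrative
sketch only, not any particular architecture's actual code; the cache
helpers are made-up names):

void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir)
{
	void *vaddr = phys_to_virt(paddr);

	if (dir == DMA_FROM_DEVICE)
		/* invalidate without write-back: dirty red zone bytes
		 * sharing the buffer's last cache line are lost */
		cache_inv_range(vaddr, size);
	else
		cache_wb_range(vaddr, size);
}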
> > What I did with reducing the minimum kmalloc()
> > alignment was to force bouncing via swiotlb if the size passed to the
> > DMA API is small. It may end up bouncing buffers that did not
> > originate from kmalloc() or have proper alignment (with padding) but
> > that's some heuristics we were willing to accept to be able to use
> > small kmalloc() caches on arm64 - see dma_kmalloc_needs_bounce().
> >
> > Does redzoning apply to kmalloc() or kmem_cache_create() (or both)? I
> > haven't checked yet but if the red zone is within ARCH_DMA_MINALIGN
> > (or rather dma_get_cache_alignment()), we could have issues with
> > either corrupting the DMA buffer or the red zone. [...]
>
> I'm sorry if I'm being thick, but IIUC the red zone does not have to be
> protected. Yes, we might miss red zone corruption if it happens to race
> with a DMA transaction, but I have assumed that this is permissible. I
> regard the red zone as a useful debugging tool, not a safety measure
> that is guaranteed to detect any write beyond the buffer end.
Yeah, it's debugging, but too many false positives make the feature
pretty useless.
> > > > I'm not familiar with DMA stuff, but Vlastimil's idea does make
> > > > it easier for a driver developer to write a driver that works on
> > > > different ARCHs, which have different DMA alignment requirements.
> > > > Say the minimal safe size is 8 bytes: the driver can just request
> > > > 8 bytes and ARCH_DMA_MINALIGN will automatically choose the right
> > > > size for it, which saves memory on ARCHs with smaller alignment
> > > > requirements. Meanwhile it does sacrifice part of the redzone
> > > > check ability, so I don't have a preference here :)
> > >
> > > Let's first clarify who's expected to ensure the word alignment
> > > for DMA; if it's not kmalloc(), then I'd rather resist moving it
> > > there :)
> >
> > In theory, the DMA API should handle the alignment as I tried to
> > remove it from the kmalloc() code.
>
> Are we talking about the alignment of the starting address, or buffer
> size, or both?
The DMA API bouncing logic only checks the buffer size and assumes the
start/end are aligned to kmalloc_size_roundup(), with no valid data
between the requested end and the rounded-up end (FROM_DEVICE).
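For reference, the check is roughly the following (quoted from memory
from include/linux/dma-map-ops.h, so treat it as approximate):

static inline bool dma_kmalloc_size_aligned(size_t size)
{
	if (size >= 2 * ARCH_DMA_MINALIGN ||
	    IS_ALIGNED(kmalloc_size_roundup(size),
		       dma_get_cache_alignment()))
		return true;

	return false;
}

static inline bool dma_kmalloc_needs_bounce(struct device *dev,
					    size_t size,
					    enum dma_data_direction dir)
{
	return !dma_kmalloc_safe(dev, dir) &&
	       !dma_kmalloc_size_aligned(size);
}

Note that it only ever sees a size; it has no idea where a red zone
sits within the object.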
> > With kmem_cache_create() (or kmalloc() as well), if the object size is
> > not cacheline-aligned, is there risk of redzoning around the object
> > without any alignment restrictions? The logic in
> > dma_kmalloc_size_aligned() would fail for sufficiently large buffers
> > but with unaligned red zone around the object.
>
> This red zone is the extra memory that is normally wasted by kmalloc()
> rounding up the requested size to the bucket size.
> But dma_kmalloc_size_aligned() already uses kmalloc_size_roundup(size),
> so it seems to be covered.
Assuming I got kmalloc redzoning right, I think there's still a
potential issue. Let's say we have a system that requires 128-byte DMA
alignment (the largest cache line size). We do a kmalloc(104) and
kmalloc_size_roundup() returns 128, so all seems good to the DMA code.
However, kmalloc() redzones from 104 to 128 as it tracks the original
size. The DMA bouncing doesn't spot this since kmalloc_size_roundup(104)
is aligned to 128.
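To put that in a picture:

  0                          104         128
  |  requested buffer (104)   |  red zone  |
  |<------- one 128-byte cache line ------>|

If arch_sync_dma_for_device() invalidates for FROM_DEVICE, the dirty
red zone bytes at 104..127 are discarded together with the buffer's
cache line, so a later redzone check reads whatever is in RAM (or
whatever the device wrote) and can trigger a false positive.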
The above is a problem even for architectures that don't select
DMA_BOUNCE_UNALIGNED_KMALLOC but have non-coherent DMA (well, selecting
it may have a better chance of working if the buffers are small).
So I think 946fa0dbf2d8 ("mm/slub: extend redzone check to extra
allocated kmalloc space than requested") is broken on most architectures
that select ARCH_HAS_SYNC_DMA_FOR_DEVICE (arm64 is ok as it does a
write-back in arch_sync_dma_for_device() irrespective of direction).
We can hide this extended kmalloc() redzoning behind an arch select but,
as it is, I'd only do redzoning from an ALIGN(orig_size,
dma_get_cache_alignment()) offset.
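As a hypothetical sketch of what I mean, in the SLUB redzone setup
(names and context mine, not actual slub.c code):

	/*
	 * Hypothetical: only poison the slack that cannot share a
	 * cache line with device-written data.
	 */
	unsigned int rz_start = ALIGN(orig_size, dma_get_cache_alignment());

	if (rz_start < s->object_size)
		memset(p + rz_start, SLUB_RED_ACTIVE,
		       s->object_size - rz_start);

In the kmalloc(104) example above, rz_start would be 128, i.e. no
extended red zone at all for that object; that's the price of keeping
it DMA-safe.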
Is the combination of SLAB_HWCACHE_ALIGN and SLAB_RED_ZONE similarly
affected? At least that's an explicit opt-in, and people shouldn't pass
it if they intend the objects to be used for DMA.
--
Catalin