From: Catalin Marinas <catalin.marinas@arm.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Isaac Manjarres <isaacmanjarres@google.com>,
Herbert Xu <herbert@gondor.apana.org.au>,
Ard Biesheuvel <ardb@kernel.org>, Will Deacon <will@kernel.org>,
Marc Zyngier <maz@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linux Memory Management List <linux-mm@kvack.org>,
Linux ARM <linux-arm-kernel@lists.infradead.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
"David S. Miller" <davem@davemloft.net>,
Saravana Kannan <saravanak@google.com>,
kernel-team@android.com
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
Date: Sat, 1 Oct 2022 23:29:51 +0100
Message-ID: <Yzi/X12rQTuT9Uqk@arm.com>
In-Reply-To: <CAHk-=wgPqauyKD9CoQg2AAtV=ygpS_fAahhgzPAe99k5Kush6A@mail.gmail.com>

On Fri, Sep 30, 2022 at 12:35:45PM -0700, Linus Torvalds wrote:
> On Fri, Sep 30, 2022 at 11:33 AM Catalin Marinas
> <catalin.marinas@arm.com> wrote:
> > I started refreshing the series but I got stuck on having to do bouncing
> > for small buffers even when they go through the iommu (and I don't
> > have the set up to test it yet).
>
> May I suggest doing that "force bouncing" and "change kmalloc to have
> an 8-byte minalign" to be the first two commits?
>
> IOW, if we force bouncing for unaligned DMA, then that *should* mean
> that allocation alignment is no longer a correctness issue, it's
> purely a performance one due to the bouncing.

I've been thinking about this and even ended up with a CBMC model
(included below; it found a bug in dma_kmalloc_needs_bounce()).

The "force bouncing" in my series currently only checks for small
(potentially kmalloc'ed) sizes, under the assumption that intra-object
DMA buffers are properly aligned to 128 bytes. So for something like
the struct below:

	struct devres {
		struct devres_node node;
		u8 __aligned(ARCH_DMA_MINALIGN) data[];
	};

we'd need an ARCH_DMA_MINALIGN of 128 even if ARCH_KMALLOC_MINALIGN is
8. The original code has __aligned(ARCH_KMALLOC_MINALIGN), so lowering
the latter to 8 without any other changes would be problematic:
sizeof(struct devres) may be large enough for the allocation to look
cacheline-aligned, while the intra-object data[] offset is only
guaranteed 8-byte alignment.
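
To illustrate the intra-object problem with made-up numbers (the
40-byte node size below is an assumption for the example, not the real
layout):

	/* Illustration only, sizes assumed. */
	struct devres_node {
		char pad[40];		/* assume sizeof == 40 */
	};

	struct devres {
		struct devres_node node;
		u8 __aligned(8) data[];	/* ARCH_KMALLOC_MINALIGN == 8 */
	};

Here offsetof(struct devres, data) == 40, so on a system with 64-byte
cache lines data[] shares its first cache line with node. CPU writes to
node can then corrupt (or be corrupted by) inbound DMA into data[],
however well aligned the kmalloc() allocation itself is, and a purely
size-based bounce check never sees this intra-object offset.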

If data[] contains a single DMA buffer, dma_kmalloc_needs_bounce() can
get the start of the buffer as another parameter and check that it's a
multiple of cache_line_size().
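
Something along these lines (only a sketch of a hypothetical variant;
dma_needs_bounce() is a made-up name, not what the series currently
implements):

	static bool dma_needs_bounce(unsigned long ptr, size_t size)
	{
		unsigned long mask = cache_line_size() - 1;

		/*
		 * A cacheline-aligned start together with a
		 * cacheline-multiple size can never share a cache line
		 * with anything else.
		 */
		if (!(ptr & mask) && !(size & mask))
			return false;

		/* Otherwise fall back to the kmalloc cache geometry check. */
		return dma_kmalloc_needs_bounce(size);
	}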

However, things get more complicated if data[] is used for several
sub-allocations at multiples of ARCH_KMALLOC_MINALIGN; that no longer
has much to do with the kmalloc() caches. I haven't got my head around
the crypto code yet, but it looks like it needs ARCH_DMA_MINALIGN in
some places if we are to lower ARCH_KMALLOC_MINALIGN. We could attempt
to force bouncing in dma_kmalloc_needs_bounce() with something like:

	if (ptr % dma_align != 0 || size % dma_align != 0)
		return true;

but that's orthogonal to the kmalloc() caches. I tried this some years
ago and IIRC many buffers got bounced even with an ARCH_KMALLOC_MINALIGN
of 128, because the structures drivers share with devices don't
necessarily have cacheline-aligned sizes (even though they are allocated
from a cacheline-aligned slab). So this check results in unnecessary
bouncing.
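
As a made-up example of the unnecessary bouncing (struct foo_desc and
foo_alloc() are invented for illustration; assume a 64-byte cache
line):

	/* Descriptor shared with a device; sizeof == 40, not 64-aligned. */
	struct foo_desc {
		u32 ctrl;
		u32 len;
		u64 addr[4];
	};

	static int foo_alloc(struct device *dev, dma_addr_t *dma)
	{
		struct foo_desc *d = kmalloc(sizeof(*d), GFP_KERNEL);

		if (!d)
			return -ENOMEM;

		/*
		 * With ARCH_KMALLOC_MINALIGN == 128, d comes from the
		 * kmalloc-128 cache and is cache-line aligned, so this
		 * mapping is safe without bouncing. The check above
		 * would still bounce it because sizeof(*d) == 40 is
		 * not a multiple of the cache line size.
		 */
		*dma = dma_map_single(dev, d, sizeof(*d), DMA_FROM_DEVICE);
		return 0;
	}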

So my series attempts to (1) fix the (static) alignment for intra-object
buffers by changing a few ARCH_KMALLOC_MINALIGN uses to
ARCH_DMA_MINALIGN and (2) address the kmalloc() DMA safety by bouncing
non-cacheline-aligned sizes. I don't think we can do (2) first, as the
logic for handling (1) in the absence of a large ARCH_DMA_MINALIGN is
different.

And here's the CBMC model:
------------------------------------8<----------------------------
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Check with:
 *   cbmc --trace dma-bounce.c
 */
#define PAGE_SIZE 4096
#define ARCH_KMALLOC_MINALIGN 8
#define ARCH_DMA_MINALIGN 128
#define KMALLOC_MIN_SIZE ARCH_KMALLOC_MINALIGN
#define KMALLOC_SHIFT_LOW 3
#define KMALLOC_SHIFT_HIGH 25
#define KMALLOC_MAX_SIZE (1UL << KMALLOC_SHIFT_HIGH)
#define INIT_KMALLOC_INFO(__size, __short_size)	\
{							\
	.size = __size,					\
}

static unsigned int nondet_uint(void);

struct kmalloc_info_struct {
	unsigned int size;
};

struct kmalloc_slab {
	unsigned int ptr;
	unsigned int size;
};

static const struct kmalloc_info_struct kmalloc_info[] = {
	INIT_KMALLOC_INFO(0, 0),
	INIT_KMALLOC_INFO(96, 96),
	INIT_KMALLOC_INFO(192, 192),
	INIT_KMALLOC_INFO(8, 8),
	INIT_KMALLOC_INFO(16, 16),
	INIT_KMALLOC_INFO(32, 32),
	INIT_KMALLOC_INFO(64, 64),
	INIT_KMALLOC_INFO(128, 128),
	INIT_KMALLOC_INFO(256, 256),
	INIT_KMALLOC_INFO(512, 512),
	INIT_KMALLOC_INFO(1024, 1k),
	INIT_KMALLOC_INFO(2048, 2k),
	INIT_KMALLOC_INFO(4096, 4k),
	INIT_KMALLOC_INFO(8192, 8k),
	INIT_KMALLOC_INFO(16384, 16k),
	INIT_KMALLOC_INFO(32768, 32k),
	INIT_KMALLOC_INFO(65536, 64k),
	INIT_KMALLOC_INFO(131072, 128k),
	INIT_KMALLOC_INFO(262144, 256k),
	INIT_KMALLOC_INFO(524288, 512k),
	INIT_KMALLOC_INFO(1048576, 1M),
	INIT_KMALLOC_INFO(2097152, 2M),
	INIT_KMALLOC_INFO(4194304, 4M),
	INIT_KMALLOC_INFO(8388608, 8M),
	INIT_KMALLOC_INFO(16777216, 16M),
	INIT_KMALLOC_INFO(33554432, 32M)
};

static unsigned int cache_line_size(void)
{
	static const unsigned int cls = nondet_uint();

	__CPROVER_assume(cls == 32 || cls == 64 || cls == 128);
	return cls;
}

static unsigned int kmalloc_index(unsigned int size)
{
	if (!size)
		return 0;
	if (size <= KMALLOC_MIN_SIZE)
		return KMALLOC_SHIFT_LOW;
	if (KMALLOC_MIN_SIZE <= 32 && size > 64 && size <= 96)
		return 1;
	if (KMALLOC_MIN_SIZE <= 64 && size > 128 && size <= 192)
		return 2;
	if (size <= 8) return 3;
	if (size <= 16) return 4;
	if (size <= 32) return 5;
	if (size <= 64) return 6;
	if (size <= 128) return 7;
	if (size <= 256) return 8;
	if (size <= 512) return 9;
	if (size <= 1024) return 10;
	if (size <= 2 * 1024) return 11;
	if (size <= 4 * 1024) return 12;
	if (size <= 8 * 1024) return 13;
	if (size <= 16 * 1024) return 14;
	if (size <= 32 * 1024) return 15;
	if (size <= 64 * 1024) return 16;
	if (size <= 128 * 1024) return 17;
	if (size <= 256 * 1024) return 18;
	if (size <= 512 * 1024) return 19;
	if (size <= 1024 * 1024) return 20;
	if (size <= 2 * 1024 * 1024) return 21;
	if (size <= 4 * 1024 * 1024) return 22;
	if (size <= 8 * 1024 * 1024) return 23;
	if (size <= 16 * 1024 * 1024) return 24;
	if (size <= 32 * 1024 * 1024) return 25;

	__CPROVER_assert(0, "Invalid kmalloc() size");
	return -1;
}

/* Allocate at a nondeterministic offset, aligned to the cache object size. */
unsigned int kmalloc(unsigned int size, struct kmalloc_slab *slab)
{
	unsigned int nr = nondet_uint();

	slab->size = kmalloc_info[kmalloc_index(size)].size;
	slab->ptr = nr * slab->size;
	__CPROVER_assume(slab->ptr < PAGE_SIZE);
	__CPROVER_assume(slab->ptr % slab->size == 0);
	return slab->ptr;
}

/*
 * Implemented only for 32, 64 and 128 cache line sizes.
 */
int dma_kmalloc_needs_bounce(unsigned int size)
{
	unsigned int dma_align = cache_line_size();

	/*
	 * Less than half of dma_align: there's definitely a smaller
	 * kmalloc() cache.
	 */
	if (size <= dma_align / 2)
		return 1;

	/*
	 * From this point on, any kmalloc cache size is 32-byte aligned.
	 */
	if (dma_align == 32)
		return 0;

	/*
	 * dma_align == 64  => 96 needs bouncing.
	 * dma_align == 128 => 96 and 192 need bouncing.
	 */
	if (size > 64 && size <= 96)
		return 1;
	if (dma_align == 128 && size > 128 && size <= 192)
		return 1;

	return 0;
}

/*
 * Simulate DMA cache maintenance. The 'slab' object is only used for
 * verification.
 */
void dma_map_single(unsigned int ptr, unsigned int size,
		    struct kmalloc_slab *slab)
{
	unsigned int mask = cache_line_size() - 1;

	if (dma_kmalloc_needs_bounce(size)) {
		/* Was the bounce really necessary? */
		__CPROVER_assert((ptr & mask) != 0 || (size & mask) != 0,
				 "Bouncing aligned DMA buffer");
		return;
	}

	/*
	 * Check for cache maintenance outside the kmalloc'ed object. We don't
	 * care about intra-object overlap; it's the caller's responsibility
	 * to ensure alignment.
	 */
	__CPROVER_assert((ptr & ~mask) >= slab->ptr,
			 "DMA cache maintenance underflow");
	__CPROVER_assert(((ptr + size + mask) & ~mask) <= slab->ptr + slab->size,
			 "DMA cache maintenance overflow");
}

int main(void)
{
	struct kmalloc_slab slab;
	unsigned int size = nondet_uint();
	unsigned int offset = nondet_uint();
	unsigned int ptr;

	__CPROVER_assume(size <= KMALLOC_MAX_SIZE);
	__CPROVER_assume(offset < size);
	__CPROVER_assume(offset % ARCH_DMA_MINALIGN == 0);

	ptr = kmalloc(size, &slab);
	dma_map_single(ptr + offset, size - offset, &slab);

	return 0;
}
--
Catalin