From: Ard Biesheuvel
Date: Thu, 14 Apr 2022 17:10:02 +0200
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
To: Greg Kroah-Hartman
Cc: Linus Torvalds, Catalin Marinas, Herbert Xu, Will Deacon, Marc Zyngier,
    Arnd Bergmann, Andrew Morton, Linux Memory Management List, Linux ARM,
    Linux Kernel Mailing List, "David S. Miller"
Miller" Content-Type: text/plain; charset="UTF-8" Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=DxN+WjEs; spf=pass (imf26.hostedemail.com: domain of ardb@kernel.org designates 139.178.84.217 as permitted sender) smtp.mailfrom=ardb@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Stat-Signature: asczjrr97oohbhxktefmt3uefsbda5wx X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 88B31140011 X-HE-Tag: 1649949016-205277 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Thu, 14 Apr 2022 at 17:01, Ard Biesheuvel wrote: > > On Thu, 14 Apr 2022 at 16:53, Greg Kroah-Hartman > wrote: > > > > On Thu, Apr 14, 2022 at 04:36:46PM +0200, Ard Biesheuvel wrote: > > > On Thu, 14 Apr 2022 at 16:27, Greg Kroah-Hartman > > > wrote: > > > > > > > > On Thu, Apr 14, 2022 at 03:52:53PM +0200, Ard Biesheuvel wrote: > ... > > > > > What we might do, given the fact that only inbound non-cache coherent > > > > > DMA is problematic, is dropping the kmalloc alignment to 8 like on > > > > > x86, and falling back to bounce buffering when a misaligned, non-cache > > > > > coherent inbound DMA mapping is created, using the SWIOTLB bounce > > > > > buffering code that we already have, and is already in use on most > > > > > affected systems for other reasons (i.e., DMA addressing limits) > > > > > > > > Ick, that's a mess. > > > > > > > > > This will cause some performance regressions, but in a way that seems > > > > > fixable to me: taking network drivers as an example, the RX buffers > > > > > that are filled using inbound DMA are typically owned by the driver > > > > > itself, which could be updated to round up its allocations and DMA > > > > > mappings. Block devices typically operate on quantities that are > > > > > aligned sufficiently already. In other cases, we will likely notice > > > > > if/when this fallback is taken on a hot path, but if we don't, at > > > > > least we know a bounce buffer is being used whenever we cannot perform > > > > > the DMA safely in-place. > > > > > > > > We can move to having an "allocator-per-bus" for memory like this to > > > > allow the bus to know if this is a DMA requirement or not. > > > > > > > > So for all USB drivers, we would have: > > > > usb_kmalloc(size, flags); > > > > and then it might even be easier to verify with static tools that the > > > > USB drivers are sending only properly allocated data. Same for SPI and > > > > other busses. > > > > > > > > > > As I pointed out earlier in the thread, alignment/padding requirements > > > for non-coherent DMA are a property of the CPU's cache hierarchy, not > > > of the device. So I'm not sure I follow how a per-subsystem > > > distinction would help here. In the case of USB especially, would that > > > mean that block, media and networking subsystems would need to be > > > aware of the USB-ness of the underlying transport? > > > > That's what we have required today, yes. That's only because we knew > > that for some USB controllers, that was a requirement and we had no way > > of passing that information back up the stack so we just made it a > > requirement. > > > > But I do agree this is messy. It's even messier for things like USB > > where it's not the USB device itself that matters, it's the USB > > controller that the USB device is attached to. And that can be _way_ up > > the device hierarchy. 
> > Attach something like a NFS mount over a PPP
> > network connection on a USB to serial device and ugh, where do you
> > begin? :)
> >
>
> Exactly.
>
> > And is this always just an issue of the CPU cache hierarchy? And not the
> > specific bridge that a device is connected to that CPU on? Or am I
> > saying the same thing here?
> >
>
> Yes, this is a system property not a device property, and the driver
> typically doesn't have any knowledge of this. For example, if a PCI
> host bridge happens to be integrated in a non-cache coherent way, any
> PCI device plugged into it becomes non-coherent, and the associated
> driver needs to do the right thing. This is why we rely on the DMA
> layer to take care of this.
>
> > I mean take a USB controller for example. We could have a system where
> > one USB controller is on a PCI bus, while another is on a "platform"
> > bus. Both of those are connected to the CPU in different ways and so
> > could have different DMA rules. Do we downgrade everything in the
> > system for the worst connection possible?
> >
>
> No, we currently support a mix of coherent and non-coherent just fine,
> and this shouldn't change. It's just that the mere fact that
> non-coherent devices might exist is increasing the memory footprint of
> all kmalloc allocations.
>
> > Again, consider a USB driver allocating memory to transfer stuff, should
> > it somehow know the cache hierarchy that it is connected to? Right now
> > we punt and do not do that at the expense of a bit of potentially
> > wasted memory for small allocations.
> >
>
> This whole discussion is based on the premise that this is an expense
> we would prefer to avoid. Currently, every kmalloc allocation is
> rounded up to 128 bytes on arm64, while x86 uses only 8.

I guess I didn't answer that last question. Yes, dma_kmalloc() should be
used in such cases. Combined with my bounce buffering hack, the penalty
for using plain kmalloc() instead would be a potential performance hit
when the buffer is used for inbound DMA, rather than data corruption
(assuming we reduce the kmalloc() alignment when introducing
dma_kmalloc()).
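
To make that a bit more concrete, the shape I have in mind for
dma_kmalloc() is roughly the below. This is only a sketch: no such
helper exists today, and the assumption that rounding the size up to a
multiple of ARCH_DMA_MINALIGN also gives you an ARCH_DMA_MINALIGN
aligned pointer out of the resulting slab size classes is exactly that,
an assumption.

#include <linux/align.h>
#include <linux/cache.h>
#include <linux/slab.h>

/*
 * Sketch only, not an existing API: callers that know their buffer may
 * be the target of non-cache-coherent inbound DMA ask for the DMA-safe
 * padding explicitly, so that plain kmalloc() could drop back to a
 * small (e.g. 8 byte) minimum alignment.
 */
static inline void *dma_kmalloc(size_t size, gfp_t flags)
{
        /*
         * Rounding the size up to a multiple of ARCH_DMA_MINALIGN keeps
         * unrelated allocations out of the buffer's cache lines; this
         * assumes the kmalloc size classes this lands in hand out
         * objects that are themselves ARCH_DMA_MINALIGN aligned.
         */
        return kmalloc(ALIGN(size, ARCH_DMA_MINALIGN), flags);
}

A network driver's RX buffer allocation would then become something
like dma_kmalloc(len, GFP_ATOMIC) (or an equivalent rounded-up size fed
to its existing allocator), while short-lived control structures stay
on plain kmalloc().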
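
And for completeness, the bounce buffering fallback I keep referring to
boils down to a check along these lines somewhere in the non-coherent
streaming DMA mapping path. Purely illustrative as well: the helper
name is made up, and how it would actually hook into the existing
SWIOTLB machinery is hand-waved here.

#include <linux/align.h>
#include <linux/dma-direction.h>
#include <linux/dma-map-ops.h>
#include <linux/dma-mapping.h>

/*
 * Illustrative only: decide whether a streaming mapping would have to
 * be bounced because the cache invalidation needed for inbound DMA
 * could clobber unrelated data sharing the first or last cache line of
 * the buffer.
 */
static bool dma_needs_bounce(struct device *dev, phys_addr_t paddr,
                             size_t size, enum dma_data_direction dir)
{
        /* Outbound-only DMA never writes the buffer; a clean suffices. */
        if (dir == DMA_TO_DEVICE)
                return false;

        /* Cache coherent masters need no cache maintenance at all. */
        if (dev_is_dma_coherent(dev))
                return false;

        /* Bounce unless the buffer owns its cache lines outright. */
        return !IS_ALIGNED(paddr | size, dma_get_cache_alignment());
}

The nice property is that drivers which already round up their inbound
DMA buffers (as described above for network RX buffers) never hit the
bounce path, so the cost is only paid where we cannot prove that doing
the DMA in place is safe.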