From: Ard Biesheuvel <ardb@kernel.org>
Date: Thu, 21 Apr 2022 10:05:49 +0200
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
To: Christoph Hellwig
Cc: Arnd Bergmann, Catalin Marinas, Herbert Xu, Will Deacon, Marc Zyngier,
 Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
 Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
 "David S. Miller"

On Thu, 21 Apr 2022 at 09:20, Christoph Hellwig wrote:
>
> Btw, there is another option: Most real systems already require having
> swiotlb to bounce buffer in some cases. We could simply force bounce
> buffering in the dma mapping code for too small or not properly aligned
> transfers and just decrease the dma alignment.

Strongly agree.

As I pointed out before, we'd only need to do this for misaligned,
non-cache-coherent inbound DMA, and we'd only have to worry about
performance regressions, not data corruption issues. And given the
natural alignment of block I/O, and the fact that network drivers
typically allocate and map their own RX buffers (which means they could
reasonably be fixed if a performance bottleneck pops up), I think the
risk of showstopper performance regressions is likely to be acceptable.
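
To make that concrete, here is a minimal sketch (mine, not code from the
series; the helper name and the exact set of checks are hypothetical) of
the kind of test the streaming DMA map path could apply before deciding
to bounce a buffer through swiotlb:

#include <linux/dma-map-ops.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical sketch, not from the patch series under discussion:
 * decide in the streaming DMA map path whether a buffer must be
 * bounced through swiotlb because it might share a cache line with
 * unrelated data.
 */
static bool force_small_dma_bounce(struct device *dev, phys_addr_t phys,
				   size_t size, enum dma_data_direction dir)
{
	unsigned int align = dma_get_cache_alignment();

	/* Cache-coherent devices cannot corrupt adjacent data. */
	if (dev_is_dma_coherent(dev))
		return false;

	/*
	 * Outbound DMA only needs a cache clean, which never destroys
	 * neighbouring data; only inbound DMA, where the CPU must
	 * invalidate cache lines the device wrote to, can corrupt a
	 * shared cache line.
	 */
	if (dir == DMA_TO_DEVICE)
		return false;

	/* Bounce unless the buffer covers whole cache lines. */
	return !IS_ALIGNED(phys | size, align);
}

With something like that in place, ARCH_KMALLOC_MINALIGN could drop
below ARCH_DMA_MINALIGN without risking corruption, at the cost of an
occasional bounce on non-coherent inbound mappings only.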