Subject: Re: [PATCH v4 13/15] iommu/dma: Force bouncing if the size is not cacheline-aligned
From: Robin Murphy <robin.murphy@arm.com>
Date: Fri, 19 May 2023 18:09:45 +0100
To: Catalin Marinas
Cc: Linus Torvalds, Arnd Bergmann, Christoph Hellwig, Greg Kroah-Hartman,
 Will Deacon, Marc Zyngier, Andrew Morton, Herbert Xu, Ard Biesheuvel,
 Isaac Manjarres, Saravana Kannan, Alasdair Kergon, Daniel Vetter,
 Joerg Roedel, Mark Brown, Mike Snitzer, "Rafael J. Wysocki",
 linux-mm@kvack.org, iommu@lists.linux.dev,
 linux-arm-kernel@lists.infradead.org
Message-ID: <30a91384-157c-0192-443c-12c835ad3b35@arm.com>
References: <20230518173403.1150549-1-catalin.marinas@arm.com>
 <20230518173403.1150549-14-catalin.marinas@arm.com>
 <9dc3e036-75cd-debf-7093-177ef6c7a3ae@arm.com>
On 19/05/2023 3:02 pm, Catalin Marinas wrote:
> On Fri, May 19, 2023 at 01:29:38PM +0100, Robin Murphy wrote:
>> On 2023-05-18 18:34, Catalin Marinas wrote:
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index 7a9f0b0bddbd..ab1c1681c06e 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -956,7 +956,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
>>>  	struct scatterlist *sg;
>>>  	int i;
>>>  
>>> -	if (dev_use_swiotlb(dev))
>>> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
>>>  		for_each_sg(sgl, sg, nelems, i)
>>>  			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
>>>  						      sg->length, dir);
>>> @@ -972,7 +972,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
>>>  	struct scatterlist *sg;
>>>  	int i;
>>>  
>>> -	if (dev_use_swiotlb(dev))
>>> +	if (dev_use_swiotlb(dev) || sg_is_dma_bounced(sgl))
>>>  		for_each_sg(sgl, sg, nelems, i)
>>>  			iommu_dma_sync_single_for_device(dev,
>>>  							 sg_dma_address(sg),
>>> @@ -998,7 +998,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>>>  	 * If both the physical buffer start address and size are
>>>  	 * page aligned, we don't need to use a bounce page.
>>>  	 */
>>> -	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
>>> +	if ((dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) ||
>>> +	    dma_kmalloc_needs_bounce(dev, size, dir)) {
>>>  		void *padding_start;
>>>  		size_t padding_size, aligned_size;
>>> @@ -1210,7 +1211,21 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>>  			goto out;
>>>  	}
>>> -	if (dev_use_swiotlb(dev))
>>> +	/*
>>> +	 * If kmalloc() buffers are not DMA-safe for this device and
>>> +	 * direction, check the individual lengths in the sg list. If one of
>>> +	 * the buffers is deemed unsafe, follow the iommu_dma_map_sg_swiotlb()
>>> +	 * path for potential bouncing.
>>> +	 */
>>> +	if (!dma_kmalloc_safe(dev, dir)) {
>>> +		for_each_sg(sg, s, nents, i)
>>> +			if (!dma_kmalloc_size_aligned(s->length)) {
>>
>> Just to remind myself, we're not checking s->offset on the grounds that
>> if anyone wants to DMA into an unaligned part of a larger allocation,
>> that remains at their own risk, is that right?
> 
> Right. That's the case currently as well, and those users that were
> relying on ARCH_KMALLOC_MINALIGN for this have either been migrated to
> ARCH_DMA_MINALIGN in this series or had their logic rewritten (as in
> the crypto code).

OK, I did manage to summon a vague memory of this being discussed before,
which at least stopped me asking "should we be checking..." - perhaps a
comment on dma_kmalloc_safe() to help remember that reasoning might not go
amiss?
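Something along these lines is all I mean - a sketch only, where the
comment wording is mine and I've elided the Kconfig gate, so don't read it
as the exact helper from this series:

	static inline bool dma_kmalloc_safe(struct device *dev,
					    enum dma_data_direction dir)
	{
		/*
		 * Callers check only the *length* of each segment, never
		 * its offset: a too-small kmalloc() buffer is unsafe
		 * because it may share cache lines with an unrelated
		 * neighbouring allocation, whereas DMA at an unaligned
		 * offset *within* one sufficiently-large allocation can
		 * only ever clobber that caller's own data, so remains
		 * their own responsibility, as it always has been.
		 */
		if (dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE)
			return true;

		return false;
	}
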
>> Do we care about the (probably theoretical) case where someone might
>> build a scatterlist for multiple small allocations such that ones which
>> happen to be adjacent might get combined into a single segment of
>> apparently "safe" length but still at "unsafe" alignment?
> 
> I'd say that's theoretical only. One could write such code, but normally
> you'd go for an array rather than relying on the randomness of the
> kmalloc pointers to figure out adjacent objects. It also only works if
> the individual struct size is exactly one of the kmalloc cache sizes, so
> it's not generic enough.

FWIW I was imagining something like sg_alloc_table_from_pages() but at a
smaller scale, queueing up some list/array of, say, 32-byte buffers into a
scatterlist to submit as a single DMA job. I'm not aware that such a thing
exists, though, and I'm inclined to agree that it probably is sufficiently
unrealistic to be concerned about. As usual, I just want to feel
comfortable that we've explored all the possibilities :)
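Concretely, the sort of thing I had in mind (entirely made-up code, to be
clear, assuming 64-byte cache lines and with error handling omitted):

	struct scatterlist sgl[2];
	void *a = kmalloc(32, GFP_KERNEL);
	void *b = kmalloc(32, GFP_KERNEL);
	int n = 0;

	sg_init_table(sgl, ARRAY_SIZE(sgl));
	if (b == a + 32) {
		/*
		 * Two adjacent 32-byte allocations coalesced into one
		 * segment: s->length is now a "safe"-looking 64 bytes,
		 * yet the second buffer still starts mid-cacheline.
		 */
		sg_set_buf(&sgl[n++], a, 64);
	} else {
		/* Kept separate, each 32-byte length is caught as unsafe. */
		sg_set_buf(&sgl[n++], a, 32);
		sg_set_buf(&sgl[n++], b, 32);
	}
	sg_mark_end(&sgl[n - 1]);
	/* ...then dma_map_sg(dev, sgl, n, dir) as a single job. */
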
>>> +				sg_dma_mark_bounced(sg);
>>
>> I'd prefer to have iommu_dma_map_sg_swiotlb() mark the segments, since
>> that's in charge of the actual bouncing. Then we can fold the alignment
>> check into dev_use_swiotlb() (with the dev_is_untrusted() condition
>> taking priority), and sync/unmap can simply rely on sg_is_dma_bounced()
>> alone.
> 
> With this patch we only set the SG_DMA_BOUNCED flag on the first element
> of the sglist. Do you want to set this flag only on the individual
> elements being bounced? It makes some sense in principle, but the
> iommu_dma_unmap_sg() path would need to scan the list again to decide
> whether to take the swiotlb path.
> 
> If we keep the SG_DMA_BOUNCED flag only on the first element, I can
> change it to your suggestion, assuming I understood it.

Indeed, that should be fine: sync_sg/unmap_sg always have to be given the
same arguments which were passed to map_sg (and note that in the normal
case, the DMA address/length will often end up concatenated entirely into
the first element), so while we still have the two distinct flows
internally, I don't think there's any issue with only tagging the head of
the list to steer between them. Of course, if it then works out to be
trivial enough to tag *all* the segments for good measure, there should be
no harm in that either - at the moment the flag is destined to have more
of a "this might be bounced, so needs checking" meaning than a "this
definitely is bounced" meaning either way.
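E.g. if tagging them all really is no more than this - an untested sketch
against the current shape of iommu_dma_map_sg_swiotlb(), assuming the
sg_dma_mark_bounced() helper from this patch stays as-is:

	static int iommu_dma_map_sg_swiotlb(struct device *dev,
			struct scatterlist *sg, int nents,
			enum dma_data_direction dir, unsigned long attrs)
	{
		struct scatterlist *s;
		int i;

		for_each_sg(sg, s, nents, i) {
			/* Tag every segment, not just the head of the list */
			sg_dma_mark_bounced(s);
			sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
					s->offset, s->length, dir, attrs);
			if (sg_dma_address(s) == DMA_MAPPING_ERROR)
				goto out_unmap;
			sg_dma_len(s) = s->length;
		}

		return nents;

	out_unmap:
		iommu_dma_unmap_sg(dev, sg, i, dir,
				   attrs | DMA_ATTR_SKIP_CPU_SYNC);
		return -EIO;
	}

Cheers,
Robin.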