From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 9 Apr 2025 10:47:36 +0100
From: Catalin Marinas
To: Petr Tesarik
Cc: Vlastimil Babka, Feng Tang, Harry Yoo, Peng Fan,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, David Rientjes, Christoph Lameter,
	"linux-mm@kvack.org", Robin Murphy, Sean Christopherson, Halil Pasic
Subject: Re: slub - extended kmalloc redzone and dma alignment
Message-ID:
References: <20250404155303.2e0cdd27@mordecai>
	<39657cf9-e24d-4b85-9773-45fe26dd16ae@suse.cz>
	<20250408072732.32db7809@mordecai>
	<20250409103904.54a19faa@mordecai>
	<20250409110529.3ad65b3c@mordecai>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20250409110529.3ad65b3c@mordecai>

On Wed, Apr 09, 2025 at 11:05:29AM +0200, Petr Tesarik wrote:
> On Wed, 9 Apr 2025 10:39:04 +0200
> Petr Tesarik wrote:
> > I believe there is potential for a nasty race condition, and maybe even
> > info leak. Consider this:
> >
> > 1. DMA buffer is allocated by kmalloc(). The memory area previously
> >    contained sensitive information, which had been written to main
> >    memory.
> > 2. The DMA buffer is initialized with zeroes, but this new content
> >    stays in a CPU cache (because this is kernel memory with a write
> >    behind cache policy).
> > 3. DMA is set up, but nothing is written to main memory by the
> >    bus-mastering device.
> > 4. The CPU cache line is now discarded in arch_sync_dma_for_cpu().
> >
> > IIUC the zeroes were never written to main memory, and previous content
> > can now be read by the CPU through the DMA buffer.
> >
> > I haven't checked if any architecture is affected, but I strongly
> > believe that the CPU cache MUST be flushed both before and after the
> > DMA transfer. Any architecture which does not do it that way should be
> > fixed.
> >
> > Or did I miss a crucial detail (again)?
>
> Just after sending this, I realized I did. :(
>
> There is a step between 2 and 3:
>
> 2a. arch_sync_dma_for_device() invalidates the CPU cache line.
>     Architectures which do not write previous content to main memory
>     effectively undo the zeroing here.

Good point, that's a problem on those architectures that invalidate the
caches in arch_sync_dma_for_device().
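To make the window concrete, a hypothetical streaming-DMA user (sketch
only, untested; dma_leak_demo() and "dev" are made up for the example,
the DMA API calls themselves are the standard ones) would hit it roughly
like this on a non-coherent arch whose arch_sync_dma_for_device() only
invalidates for DMA_FROM_DEVICE:

#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Hypothetical example: dev is a non-coherent, bus-mastering device. */
static void dma_leak_demo(struct device *dev, size_t len)
{
	dma_addr_t dma;
	void *buf = kzalloc(len, GFP_KERNEL);	/* steps 1+2: zeroes may sit only in the CPU cache */

	if (!buf)
		return;

	/*
	 * step 2a: dma_map_single() ends up in arch_sync_dma_for_device();
	 * a pure invalidate discards the cached zeroes instead of writing
	 * them back to main memory.
	 */
	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return;
	}

	/* step 3: the device is programmed here but never writes the buffer */

	/* step 4: dma_unmap_single() -> arch_sync_dma_for_cpu() invalidates again */
	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);

	/* the CPU can now see whatever the memory held before the zeroing */
	print_hex_dump_bytes("stale: ", DUMP_PREFIX_OFFSET, buf,
			     min_t(size_t, len, 64));
	kfree(buf);
}

With a clean (or clean+invalidate) at map time, as arm64 does since the
commit below and as the proposed changes do, the zeroes reach memory
before the transfer and the stale data never becomes visible.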
We fixed it for arm64 in 5.19 - c50f11c6196f ("arm64: mm: Don't invalidate
FROM_DEVICE buffers at start of DMA transfer") - for the same reason: an
information leak. So we could ignore all those architectures; if people
complain about redzone failures, we can ask them to fix it.

Anyway, a crude attempt at fixing those is below. I skipped powerpc, and for
arch/arm I only addressed cache-v7. Completely untested. But I wonder whether
it's easier to fix the callers of arch_sync_dma_for_device() and always pass
DMA_BIDIRECTIONAL for security reasons:

----------------------------8<------------------------------
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index cb7e29dcac15..73ee3826a825 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1103,7 +1103,7 @@ void iommu_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
-		arch_sync_dma_for_device(phys, size, dir);
+		arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
 }
 
 void iommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
@@ -1134,7 +1134,8 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 						 sg->length, dir);
 	else if (!dev_is_dma_coherent(dev))
 		for_each_sg(sgl, sg, nelems, i)
-			arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
+			arch_sync_dma_for_device(sg_phys(sg), sg->length,
+						 DMA_BIDIRECTIONAL);
 }
 
 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
@@ -1189,7 +1190,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		arch_sync_dma_for_device(phys, size, dir);
+		arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
 
 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
 	if (iova == DMA_MAPPING_ERROR)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1f65795cf5d7..6e508d7f4010 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -247,7 +247,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 done:
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
 		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr))))
-			arch_sync_dma_for_device(phys, size, dir);
+			arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
 		else
 			xen_dma_sync_for_device(dev, dev_addr, size, dir);
 	}
@@ -316,7 +316,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 
 	if (!dev_is_dma_coherent(dev)) {
 		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
-			arch_sync_dma_for_device(paddr, size, dir);
+			arch_sync_dma_for_device(paddr, size, DMA_BIDIRECTIONAL);
 		else
 			xen_dma_sync_for_device(dev, dma_addr, size, dir);
 	}
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b8fe0b3d0ffb..f4e8d23fd086 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -408,7 +408,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_device(paddr, sg->length,
-					dir);
+					DMA_BIDIRECTIONAL);
 	}
 }
 #endif
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index d2c0b7e632fc..d5f575e1a623 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -61,7 +61,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
-		arch_sync_dma_for_device(paddr, size, dir);
+		arch_sync_dma_for_device(paddr, size, DMA_BIDIRECTIONAL);
 }
 
 static inline void dma_direct_sync_single_for_cpu(struct device *dev,
@@ -107,7 +107,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	}
 
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		arch_sync_dma_for_device(phys, size, dir);
+		arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
 	return dma_addr;
 }
 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index abcf3fa63a56..1e21bd65b08c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1598,7 +1598,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 	}
 
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		arch_sync_dma_for_device(swiotlb_addr, size, dir);
+		arch_sync_dma_for_device(swiotlb_addr, size, DMA_BIDIRECTIONAL);
 	return dma_addr;
 }
----------------------------8<------------------------------

And below is the partial change for most of the arches, but I'd rather go
with the above:

----------------------------8<------------------------------
diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index 6b85e94f3275..2902b3378b21 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -51,22 +51,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
-	switch (dir) {
-	case DMA_TO_DEVICE:
-		dma_cache_wback(paddr, size);
-		break;
-
-	case DMA_FROM_DEVICE:
-		dma_cache_inv(paddr, size);
-		break;
-
-	case DMA_BIDIRECTIONAL:
-		dma_cache_wback_inv(paddr, size);
-		break;
-
-	default:
-		break;
-	}
+	dma_cache_wback(paddr, size);
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 201ca05436fa..3787c4b839dd 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -441,8 +441,6 @@
  */
 SYM_TYPED_FUNC_START(v7_dma_map_area)
 	add	r1, r1, r0
-	teq	r2, #DMA_FROM_DEVICE
-	beq	v7_dma_inv_range
 	b	v7_dma_clean_range
 SYM_FUNC_END(v7_dma_map_area)
 
diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
index fecac107fd0d..b2432726b082 100644
--- a/arch/arm/mm/dma-mapping-nommu.c
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -17,11 +17,7 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
 	dmac_map_area(__va(paddr), size, dir);
-
-	if (dir == DMA_FROM_DEVICE)
-		outer_inv_range(paddr, paddr + size);
-	else
-		outer_clean_range(paddr, paddr + size);
+	outer_clean_range(paddr, paddr + size);
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 88c2d68a69c9..ceae4c027f53 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -682,13 +682,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
 	phys_addr_t paddr;
 
 	dma_cache_maint_page(page, off, size, dir, dmac_map_area);
-
-	paddr = page_to_phys(page) + off;
-	if (dir == DMA_FROM_DEVICE) {
-		outer_inv_range(paddr, paddr + size);
-	} else {
-		outer_clean_range(paddr, paddr + size);
-	}
+	outer_clean_range(paddr, paddr + size);
 	/* FIXME: non-speculating: flush on bidirectional mappings? */
 }
 
diff --git a/arch/csky/mm/dma-mapping.c b/arch/csky/mm/dma-mapping.c
index 82447029feb4..3862a56cb3ac 100644
--- a/arch/csky/mm/dma-mapping.c
+++ b/arch/csky/mm/dma-mapping.c
@@ -58,17 +58,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 			      enum dma_data_direction dir)
 {
-	switch (dir) {
-	case DMA_TO_DEVICE:
-		cache_op(paddr, size, dma_wb_range);
-		break;
-	case DMA_FROM_DEVICE:
-	case DMA_BIDIRECTIONAL:
-		cache_op(paddr, size, dma_wbinv_range);
-		break;
-	default:
-		BUG();
-	}
+	cache_op(paddr, size, dma_wb_range);
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
index 882680e81a30..8ca011acc4fc 100644
--- a/arch/hexagon/kernel/dma.c
+++ b/arch/hexagon/kernel/dma.c
@@ -14,22 +14,8 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 {
 	void *addr = phys_to_virt(paddr);
 
-	switch (dir) {
-	case DMA_TO_DEVICE:
-		hexagon_clean_dcache_range((unsigned long) addr,
-		(unsigned long) addr + size);
-		break;
-	case DMA_FROM_DEVICE:
-		hexagon_inv_dcache_range((unsigned long) addr,
-		(unsigned long) addr + size);
-		break;
-	case DMA_BIDIRECTIONAL:
-		flush_dcache_range((unsigned long) addr,
-		(unsigned long) addr + size);
-		break;
-	default:
-		BUG();
-	}
+	hexagon_clean_dcache_range((unsigned long) addr,
+	(unsigned long) addr + size);
 }
 
 /*
diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c
index 16063783aa80..95902d306412 100644
--- a/arch/m68k/kernel/dma.c
+++ b/arch/m68k/kernel/dma.c
@@ -29,17 +29,5 @@ pgprot_t pgprot_dmacoherent(pgprot_t prot)
 void arch_sync_dma_for_device(phys_addr_t handle, size_t size,
 		enum dma_data_direction dir)
 {
-	switch (dir) {
-	case DMA_BIDIRECTIONAL:
-	case DMA_TO_DEVICE:
-		cache_push(handle, size);
-		break;
-	case DMA_FROM_DEVICE:
-		cache_clear(handle, size);
-		break;
-	default:
-		pr_err_ratelimited("dma_sync_single_for_device: unsupported dir %u\n",
-				   dir);
-		break;
-	}
+	cache_push(handle, size);
 }
diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c
index 04d091ade417..68e6c946d273 100644
--- a/arch/microblaze/kernel/dma.c
+++ b/arch/microblaze/kernel/dma.c
@@ -14,14 +14,19 @@
 #include 
 #include 
 
-static void __dma_sync(phys_addr_t paddr, size_t size,
-		enum dma_data_direction direction)
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+		enum dma_data_direction dir)
+{
+	flush_dcache_range(paddr, paddr + size);
+}
+
+void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+		enum dma_data_direction dir)
 {
 	switch (direction) {
 	case DMA_TO_DEVICE:
-	case DMA_BIDIRECTIONAL:
-		flush_dcache_range(paddr, paddr + size);
 		break;
+	case DMA_BIDIRECTIONAL:
 	case DMA_FROM_DEVICE:
 		invalidate_dcache_range(paddr, paddr + size);
 		break;
@@ -29,15 +34,3 @@ static void __dma_sync(phys_addr_t paddr, size_t size,
 		BUG();
 	}
 }
-
-void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
-{
-	__dma_sync(paddr, size, dir);
-}
-
-void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
-{
-	__dma_sync(paddr, size, dir);
-}
diff --git a/arch/nios2/mm/dma-mapping.c b/arch/nios2/mm/dma-mapping.c
index fd887d5f3f9a..35730ad8787d 100644
--- a/arch/nios2/mm/dma-mapping.c
+++ b/arch/nios2/mm/dma-mapping.c
@@ -23,23 +23,12 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 {
 	void *vaddr = phys_to_virt(paddr);
 
-	switch (dir) {
-	case DMA_FROM_DEVICE:
-		invalidate_dcache_range((unsigned long)vaddr,
-			(unsigned long)(vaddr + size));
-		break;
-	case DMA_TO_DEVICE:
-		/*
-		 * We just need to flush the caches here , but Nios2 flush
-		 * instruction will do both writeback and invalidate.
-		 */
-	case DMA_BIDIRECTIONAL: /* flush and invalidate */
-		flush_dcache_range((unsigned long)vaddr,
-			(unsigned long)(vaddr + size));
-		break;
-	default:
-		BUG();
-	}
+	/*
+	 * We just need to flush the caches here , but Nios2 flush
+	 * instruction will do both writeback and invalidate.
+	 */
+	flush_dcache_range((unsigned long)vaddr,
+		(unsigned long)(vaddr + size));
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
index b3edbb33b621..747218e17237 100644
--- a/arch/openrisc/kernel/dma.c
+++ b/arch/openrisc/kernel/dma.c
@@ -101,25 +101,8 @@ void arch_sync_dma_for_device(phys_addr_t addr, size_t size,
 	unsigned long cl;
 	struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()];
 
-	switch (dir) {
-	case DMA_TO_DEVICE:
-		/* Flush the dcache for the requested range */
-		for (cl = addr; cl < addr + size;
-		     cl += cpuinfo->dcache_block_size)
-			mtspr(SPR_DCBFR, cl);
-		break;
-	case DMA_FROM_DEVICE:
-		/* Invalidate the dcache for the requested range */
-		for (cl = addr; cl < addr + size;
-		     cl += cpuinfo->dcache_block_size)
-			mtspr(SPR_DCBIR, cl);
-		break;
-	default:
-		/*
-		 * NOTE: If dir == DMA_BIDIRECTIONAL then there's no need to
-		 * flush nor invalidate the cache here as the area will need
-		 * to be manually synced anyway.
-		 */
-		break;
-	}
+	/* Flush the dcache for the requested range */
+	for (cl = addr; cl < addr + size;
+	     cl += cpuinfo->dcache_block_size)
+		mtspr(SPR_DCBFR, cl);
 }
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index cb89d7e0ba88..2e6734c2a20b 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -69,30 +69,7 @@ static inline bool arch_sync_dma_cpu_needs_post_dma_flush(void)
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 			      enum dma_data_direction dir)
 {
-	switch (dir) {
-	case DMA_TO_DEVICE:
-		arch_dma_cache_wback(paddr, size);
-		break;
-
-	case DMA_FROM_DEVICE:
-		if (!arch_sync_dma_clean_before_fromdevice()) {
-			arch_dma_cache_inv(paddr, size);
-			break;
-		}
-		fallthrough;
-
-	case DMA_BIDIRECTIONAL:
-		/* Skip the invalidate here if it's done later */
-		if (IS_ENABLED(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) &&
-		    arch_sync_dma_cpu_needs_post_dma_flush())
-			arch_dma_cache_wback(paddr, size);
-		else
-			arch_dma_cache_wback_inv(paddr, size);
-		break;
-
-	default:
-		break;
-	}
+	arch_dma_cache_wback(paddr, size);
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/sh/kernel/dma-coherent.c b/arch/sh/kernel/dma-coherent.c
index 6a44c0e7ba40..1e0491f9b026 100644
--- a/arch/sh/kernel/dma-coherent.c
+++ b/arch/sh/kernel/dma-coherent.c
@@ -17,17 +17,5 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 {
 	void *addr = sh_cacheop_vaddr(phys_to_virt(paddr));
 
-	switch (dir) {
-	case DMA_FROM_DEVICE: /* invalidate only */
-		__flush_invalidate_region(addr, size);
-		break;
-	case DMA_TO_DEVICE: /* writeback only */
-		__flush_wback_region(addr, size);
-		break;
-	case DMA_BIDIRECTIONAL: /* writeback and invalidate */
-		__flush_purge_region(addr, size);
-		break;
-	default:
-		BUG();
-	}
+	__flush_wback_region(addr, size);
 }
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 94955caa4488..3da1ee2b5d84 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -64,20 +64,8 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir)
 {
-	switch (dir) {
-	case DMA_BIDIRECTIONAL:
-	case DMA_TO_DEVICE:
-		if (XCHAL_DCACHE_IS_WRITEBACK)
-			do_cache_op(paddr, size, __flush_dcache_range);
-		break;
-
-	case DMA_NONE:
-		BUG();
-		break;
-
-	default:
-		break;
-	}
+	if (XCHAL_DCACHE_IS_WRITEBACK)
+		do_cache_op(paddr, size, __flush_dcache_range);
 }
 
 void arch_dma_prep_coherent(struct page *page, size_t size)

-- 
Catalin