From: Catalin Marinas
Date: Wed, 2 Nov 2022 11:05:54 +0000
To: Isaac Manjarres
Cc: Christoph Hellwig, Greg Kroah-Hartman, Linus Torvalds,
	Arnd Bergmann, Will Deacon, Marc Zyngier, Andrew Morton,
	Herbert Xu, Ard Biesheuvel, Saravana Kannan,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 2/2] treewide: Add the __GFP_PACKED flag to
	several non-DMA kmalloc() allocations
References: <20221030084718.GC5278@lst.de> <20221030091349.GA5600@lst.de>
	<20221101105919.GA13872@lst.de> <20221101172416.GB20381@lst.de>
	<20221101173940.GA20821@lst.de>

On Tue, Nov 01, 2022 at 12:10:51PM -0700, Isaac Manjarres wrote:
> On Tue, Nov 01, 2022 at 06:39:40PM +0100, Christoph Hellwig wrote:
> > On Tue, Nov 01, 2022 at 05:32:14PM +0000, Catalin Marinas wrote:
> > > There's also the case of low-end phones with all RAM below 4GB
> > > where arm64 doesn't allocate the swiotlb. Not sure those vendors
> > > would go with a recent kernel anyway.
> > >
> > > So the need for swiotlb now changes from 32-bit DMA to any DMA
> > > (non-coherent, but we can't tell upfront when booting; devices
> > > may be initialised pretty late).
>
> Not only low-end phones, but there are other form factors that can
> fall into this category and are also memory constrained (e.g.
> wearable devices), so the memory headroom impact from enabling
> SWIOTLB might be non-negligible for all of these devices. I also
> think it's feasible for those devices to use recent kernels.

Another option I had in mind is to disable this bouncing when there is
no swiotlb buffer, so that kmalloc() returns ARCH_DMA_MINALIGN-aligned
(or the typically smaller cache_line_size()-aligned) objects, at least
until we find a lighter way to do bouncing. Those devices would then
work as before.
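As a very rough sketch of that fallback (a hypothetical helper, not
something in any tree; it only assumes the existing is_swiotlb_active(),
ARCH_DMA_MINALIGN and cache_line_size()):

	#include <linux/cache.h>
	#include <linux/device.h>
	#include <linux/swiotlb.h>

	/*
	 * Hypothetical helper: pick the minimum kmalloc() alignment for
	 * non-coherent DMA based on whether there is a swiotlb buffer
	 * available to bounce small, unaligned objects through.
	 */
	static unsigned int dma_kmalloc_min_align(struct device *dev)
	{
		if (!is_swiotlb_active(dev))
			return ARCH_DMA_MINALIGN; /* nothing to bounce through */

		return cache_line_size();	  /* small objects get bounced */
	}

The obvious catch is that kmalloc() usually happens long before we know
which device the buffer will be mapped for, so the dev argument above
is doing a lot of hand-waving.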
> > Yes. The other option would be to use the dma coherent pool for the
> > bouncing, which must be present on non-coherent systems anyway. But
> > it would require us to write a new set of bounce buffering routines.
>
> I think in addition to having to write new bounce buffering routines,
> this approach still suffers from the same problem as SWIOTLB, which
> is that the memory for SWIOTLB and/or the dma coherent pool is not
> reclaimable, even when it is not used.

The dma coherent pool at least has the advantage that its size can be
increased at run time, so we can start with a small one. It cannot be
decreased, though if really needed I guess that can be added. We would
also skip some cache maintenance here, since the coherent pool is
already mapped as non-cacheable. But, to Christoph's point, it does
require some reworking of the current bouncing code.

> There's not enough context in the DMA mapping routines to know if we
> need an atomic allocation, so if we used kmalloc() instead of SWIOTLB
> to dynamically allocate memory, it would always have to use
> GFP_ATOMIC.

I've seen the expression below in a couple of places in the kernel,
though IIUC in_atomic() doesn't always detect atomic contexts:

	gfpflags = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;

> But what about having a pool that has a small amount of memory and is
> composed of several objects that can be used for small DMA transfers?
> If the amount of memory in the pool starts falling below a certain
> threshold, there could be a worker thread--so that we don't have to
> use GFP_ATOMIC--that adds more memory to the pool?

If the rate of allocation is high, such a pool may still end up calling
the slab allocator directly with GFP_ATOMIC. The main downside of any
memory pool, though, is identifying the original pool in dma_unmap_*().
We have a simple is_swiotlb_buffer() check that looks only at the
bounce buffer boundaries, and for the coherent pool we have the more
complex dma_free_from_pool(). With a kmem_cache-based allocator
(whether it's behind a mempool or not), we'd need something like
virt_to_cache() and a check that the object came from our DMA cache;
I'm not a big fan of digging into the slab internals for this. An
alternative could be an xarray to remember the bounced dma_addr.
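As a minimal sketch of the xarray idea (invented names, and it assumes
dma_addr_t fits in the unsigned long xarray index):

	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>
	#include <linux/xarray.h>

	/* hypothetical: remember the original buffer per bounced dma_addr */
	static DEFINE_XARRAY(dma_bounce_xa);

	/* called from dma_map_*() after copying into the bounce buffer */
	static int dma_bounce_remember(dma_addr_t dma_addr, void *orig)
	{
		return xa_err(xa_store(&dma_bounce_xa,
				       (unsigned long)dma_addr,
				       orig, GFP_ATOMIC));
	}

	/* called from dma_unmap_*(); NULL means the address wasn't bounced */
	static void *dma_bounce_forget(dma_addr_t dma_addr)
	{
		return xa_erase(&dma_bounce_xa, (unsigned long)dma_addr);
	}

xa_store() can still allocate internally (hence the GFP_ATOMIC) and the
lookup adds to the unmap fast path, so it may not beat the simple
is_swiotlb_buffer() range check in practice.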
Anyway, I propose that we try the swiotlb first and look at optimising
it from there, initially using the dma coherent pool.

-- 
Catalin