Date: Tue, 20 Feb 2024 16:36:03 +0000
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: David Hildenbrand
Cc: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev,
	maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, pcc@google.com, steven.price@arm.com,
	anshuman.khandual@arm.com, eugenis@google.com, kcc@google.com,
	hyesoo.yu@samsung.com, rppt@kernel.org, akpm@linux-foundation.org,
	peterz@infradead.org, konrad.wilk@oracle.com, willy@infradead.org,
	jgross@suse.com, hch@lst.de, geert@linux-m68k.org,
	vitaly.wool@konsulko.com, ddstreet@ieee.org, sjenning@redhat.com,
	hughd@google.com, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, alexandru.elisei@arm.com
Subject: Re: arm64 MTE tag storage reuse - alternatives to MIGRATE_CMA
References: <70d77490-9036-48ac-afc9-4b976433070d@redhat.com>

Hi,

On Tue, Feb 20, 2024 at 05:16:26PM +0100, David Hildenbrand wrote:
> > > > > > I believe this is a very good fit for tag storage reuse, because it allows
> > > > > > tag storage to be allocated even in atomic contexts, which enables MTE in
> > > > > > the kernel. As a bonus, all of the changes to MM from the current approach
> > > > > > wouldn't be needed, as tag storage allocation can be handled entirely in
> > > > > > set_ptes_at(), copy_*highpage() or arch_swap_restore().
> > > > > >
> > > > > > Is this a viable approach that would be upstreamable? Are there other
> > > > > > solutions that I haven't considered? I'm very much open to any alternatives
> > > > > > that would make tag storage reuse viable.
> > > > > >
> > > > > As raised recently, I had similar ideas with something like virtio-mem in
> > > > > the past (wanted to call it virtio-tmem back then), but didn't have time to
> > > > > look into it yet.
> > > > >
> > > > > I considered both, using special device memory as "cleancache" backend, and
> > > > > using it as backend storage for something similar to zswap. We would not
> > > > > need a memmap/"struct page" for that special device memory, which reduces
> > > > > memory overhead and makes "adding more memory" a more reliable operation.
> > > >
> > > > Hm... this might not work with tag storage memory, the kernel needs to
> > > > perform cache maintenance on the memory when it transitions to and from
> > > > storing tags and storing data, so the memory must be mapped by the kernel.
> > > >
> > > The direct map will definitely be required I think (copy in/out data). But
> > > memmap for tag memory will likely not be required. Of course, it depends how
> > > to manage tag storage. Likely we have to store some metadata, hopefully we
> > > can avoid the full memmap and just use something else.
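
(Purely to illustrate the "something else", and not something from any posted
patch: the metadata could plausibly be as small as one per-block descriptor,
along the lines of the sketch below. All names are invented for the example.)

#include <linux/refcount.h>
#include <linux/types.h>

/*
 * Illustrative sketch only: one small descriptor per tag storage block,
 * instead of a struct page for every tag storage page.
 */
struct tag_block {
	unsigned long	first_pfn;	/* first PFN of this tag storage block */
	refcount_t	users;		/* data pages whose tags live in this block */
	unsigned long	flags;		/* e.g. currently holds tags vs. reused as data */
};
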
> >
> > So I guess instead of ZONE_DEVICE I should try to use arch_add_memory()
> > directly? That has the limitation that it cannot be used by a driver
> > (symbol not exported to modules).
>
> You can certainly start with something simple, and we can work on removing
> that memmap allocation later.
>
> Maybe we have to expose new primitives in the context of such drivers.
> arch_add_memory() likely also doesn't do what you need.
>
> I recall that we had a way of only messing with the direct map.
>
> Last time I worked with that was in the context of memtrace
> (arch/powerpc/platforms/powernv/memtrace.c)
>
> There, we call arch_create_linear_mapping()/arch_remove_linear_mapping().
>
> ... and now my memory comes back: we never finished factoring out
> arch_create_linear_mapping/arch_remove_linear_mapping so they would be
> available on all architectures.
>
>
> Your driver will be very arm64 specific, so doing it in an arm64-special way
> might be good enough initially. For example, the arm64-core could detect
> that special memory region and just statically prepare the direct map and
> not expose the memory to the buddy/allocate a memmap. Similar to how we
> handle the crashkernel/kexec IIRC (we likely do not have a direct map for
> that, though).
>
> [I was also wondering if we could simply dynamically map/unmap when required
> so you can just avoid creating the entire direct map; might not be the best
> approach performance-wise, though]
>
> There are a bunch of details to be sorted out, but I don't consider the
> directmap/memmap side of things a big problem.

Sounds reasonable, thank you for the feedback!

Thanks,
Alex

>
> --
> Cheers,
>
> David / dhildenb
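
(As an illustration only: if arm64 grew equivalents of the powerpc-only
arch_create_linear_mapping()/arch_remove_linear_mapping() helpers that
memtrace uses, the "statically prepare the direct map, no memmap" idea could
be roughly as small as the sketch below. The signatures follow the powerpc
ones, and the tag_storage_* wrapper names are made up.)

#include <linux/memory_hotplug.h>
#include <linux/mm.h>

/*
 * Rough sketch, not real arm64 code: assumes arch_create_linear_mapping()
 * and arch_remove_linear_mapping() (today powerpc-only, used by
 * arch/powerpc/platforms/powernv/memtrace.c) were factored out or given an
 * arm64-specific equivalent.
 */
static int tag_storage_map_linear(int nid, u64 start, u64 size)
{
	struct mhp_params params = { .pgprot = PAGE_KERNEL };

	/* Map the tag storage range in the linear map; no memmap is created. */
	return arch_create_linear_mapping(nid, start, size, &params);
}

static void tag_storage_unmap_linear(u64 start, u64 size)
{
	/* Remove the range from the linear map again. */
	arch_remove_linear_mapping(start, size);
}

(The mapping would be set up once, when the tag storage region is discovered,
and the memory would never be handed to the buddy allocator, along the lines
of what is suggested above.)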