Date: Thu, 20 Jan 2022 16:54:03 +0000
From: "Russell King (Oracle)"
To: Robin Murphy
Cc: Matthew Wilcox, Yury Norov, Catalin Marinas, Will Deacon,
	Andrew Morton, Nicholas Piggin, Ding Tianhong, Anshuman Khandual,
	Alexey Klimov, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] vmap(): don't allow invalid pages
References: <20220118235244.540103-1-yury.norov@gmail.com>
	<319b09bc-56a2-207f-6180-3cc7d8cd43d1@arm.com>

On Thu, Jan 20, 2022 at 04:37:01PM +0000, Robin Murphy wrote:
> On 2022-01-20 13:03, Russell King (Oracle) wrote:
> > On Thu, Jan 20, 2022 at 12:22:35PM +0000, Robin Murphy
wrote:
> > > On 2022-01-19 19:12, Russell King (Oracle) wrote:
> > > > On Wed, Jan 19, 2022 at 06:43:10PM +0000, Robin Murphy wrote:
> > > > > Indeed, my impression is that the only legitimate way to get
> > > > > hold of a page pointer without assumed provenance is via
> > > > > pfn_to_page(), which is where pfn_valid() comes in. Thus
> > > > > pfn_valid(page_to_pfn()) really *should* be a tautology.
> > > >
> > > > That can only be true if pfn == page_to_pfn(pfn_to_page(pfn)) for
> > > > all values of pfn.
> > > >
> > > > Given how pfn_to_page() is defined in the sparsemem case:
> > > >
> > > > #define __pfn_to_page(pfn)				\
> > > > ({	unsigned long __pfn = (pfn);			\
> > > > 	struct mem_section *__sec = __pfn_to_section(__pfn); \
> > > > 	__section_mem_map_addr(__sec) + __pfn;		\
> > > > })
> > > > #define page_to_pfn __page_to_pfn
> > > >
> > > > that isn't the case, especially when looking at page_to_pfn():
> > > >
> > > > #define __page_to_pfn(pg)					\
> > > > ({	const struct page *__pg = (pg);				\
> > > > 	int __sec = page_to_section(__pg);			\
> > > > 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec))); \
> > > > })
> > > >
> > > > Where:
> > > >
> > > > static inline unsigned long page_to_section(const struct page *page)
> > > > {
> > > > 	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
> > > > }
> > > >
> > > > So if page_to_section() returns something that is, e.g. zero for
> > > > an invalid page in a non-zero section, you're not going to end up
> > > > with the right pfn from page_to_pfn().
> > >
> > > Right, I emphasised "should" in an attempt to imply "in the absence
> > > of serious bugs that have further-reaching consequences anyway".
> > >
> > > > As I've said now a couple of times, trying to determine if a
> > > > struct page pointer is valid is the wrong question to be asking.
> > >
> > > And doing so in one single place, on the justification of avoiding
> > > an incredibly niche symptom, is even more so.
Not to mention that an address
> > > size fault is one of the best possible outcomes anyway, vs. the
> > > untold damage that may stem from accesses actually going through to
> > > random parts of the physical memory map.
> >
> > I don't see it as a "niche" symptom.
>
> The commit message specifically cites a Data Abort "at address
> translation later". Broadly speaking, a Data Abort due to an address
> size fault only occurs if you've been lucky enough that the bogus PA
> which got mapped is so spectacularly wrong that it's beyond the range
> configured in TCR.IPS. How many other architectures even have a
> mechanism like that?

I think we're misinterpreting each other.

> > If we start off with the struct page being invalid, then the result
> > of page_to_pfn() can not be relied upon to produce something that is
> > meaningful - which is exactly why the vmap() issue arises.
> >
> > With a pfn_valid() check, we at least know that the PFN points at
> > memory.
>
> No, we know it points to some PA space which has a struct page to
> represent it. pfn_valid() only says that pfn_to_page() will yield a
> valid result. That also includes things like reserved pages covering
> non-RAM areas, where a kernel VA mapping existing at all could
> potentially be fatal to the system even if it's never explicitly
> accessed - for all we know it might be a carveout belonging to
> overly-aggressive Secure software such that even a speculative
> prefetch might trigger an instant system reset.

So are you saying that the "address size fault" can happen because
we've mapped something for which pfn_valid() returns true?

> > However, that memory could be _anything_ in the system - it could be
> > the kernel image, and it could give userspace access to change
> > kernel code.
> >
> > So, while it is useful to do a pfn_valid() check in vmap(), as I
> > said to willy, this must _not_ be the primary check.
It should IMHO use
> > WARN_ON() to make it blatantly obvious that it is something we
> > expect _not_ to trigger under normal circumstances, but which is
> > there to catch programming errors elsewhere.
>
> Rather, "to partially catch unrelated programming errors elsewhere,
> provided the buggy code happens to call vmap() rather than any of the
> many other functions with a struct page * argument." That's where it
> stretches my definition of "useful" just a bit too far. It's not about
> perfect being the enemy of good, it's about why vmap() should be
> special, and death by a thousand "useful" cuts - if we don't trust the
> pointer, why not check its alignment for basic plausibility first? If
> it seems valid, why not check if the page flags look sensible to make
> sure? How many useful little checks is too many? Every bit of code
> footprint and execution overhead imposed unconditionally on all end
> users to theoretically save developers' debugging time still adds up.
> Although on that note, it looks like arch/arm's pfn_valid() is still a
> linear scan of the memblock array, so the overhead of adding that for
> every page in every vmap() might not even be so small...

Well, I think I've adequately explained why I believe:

	pfn_valid(page_to_pfn(page))

being used as the primary check is substandard, and will likely lead
to a future CVE.

When generating an array of struct page's, I believe that it is the
responsibility of the generator to ensure that the array only contains
valid pages.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!