Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
From: Robin Murphy <robin.murphy@arm.com>
To: Al Viro
Cc: Matthew Wilcox, Christoph Hellwig, Chen Huang, Mark Rutland,
 Andrew Morton, Stephen Rothwell, Randy Dunlap, Catalin Marinas,
 Will Deacon, Linux ARM, linux-mm, open list
Date: Thu, 24 Jun 2021 18:24:52 +0100
Message-ID: <1aa40be9-2a47-007a-990f-a7eea6721a23@arm.com>
References: <20210623132223.GA96264@C02TD0UTHF1T.local>
 <1c635945-fb25-8871-7b34-f475f75b2caf@huawei.com>
 <27fbb8c1-2a65-738f-6bec-13f450395ab7@arm.com>
 <7896a3c7-2e14-d0f4-dbb9-286b6f7181b5@arm.com>
On 2021-06-24 17:39, Al Viro wrote:
> On Thu, Jun 24, 2021 at 05:38:35PM +0100, Robin Murphy wrote:
>> On 2021-06-24 17:27, Al Viro wrote:
>>> On Thu, Jun 24, 2021 at 02:22:27PM +0100, Robin Murphy wrote:
>>>
>>>> FWIW I think the only way to make the kernel behaviour any more
>>>> robust here would be to make the whole uaccess API more expressive,
>>>> such that rather than simply saying "I only got this far" it could
>>>> actually differentiate between stopping due to a fault which may be
>>>> recoverable and worth retrying, and one which definitely isn't.
>>>
>>> ... and propagate that "more expressive" information through what, 3 or 4
>>> levels in the call chain?
>>>
>>> From include/linux/uaccess.h:
>>>
>>> * If raw_copy_{to,from}_user(to, from, size) returns N, size - N bytes
>>> * starting at to must become equal to the bytes fetched from the
>>> * corresponding area starting at from. All data past to + size - N must
>>> * be left unmodified.
>>> *
>>> * If copying succeeds, the return value must be 0. If some data cannot be
>>> * fetched, it is permitted to copy less than had been fetched; the only
>>> * hard requirement is that not storing anything at all (i.e. returning
>>> * size) should happen only when nothing could be copied. In other words,
>>> * you don't have to squeeze as much as possible - it is allowed, but not
>>> * necessary.
>>>
>>> arm64 instances violate the aforementioned hard requirement. Please, fix
>>> it there; it's not hard. All you need is an exception handler in .Ltiny15
>>> that would fall back to (short) byte-by-byte copy if the faulting address
>>> happened to be unaligned. Or just do one-byte copy, not that it had been
>>> considerably cheaper than a loop. Will be cheaper than propagating that
>>> extra information up the call chain, let alone paying for extra
>>> ->write_begin() and ->write_end() for a single byte in
>>> generic_perform_write().
>>
>> And what do we do if we then continue to fault with an external abort
>> because whatever it is that warranted being mapped as Device-type memory
>> in the first place doesn't support byte accesses?
>
> If it does not support byte access, it would've failed on fault-in.

OK, if I'm understanding the code correctly and fault-in touches the exact
byte that copy_to_user() is going to start on, and faulting anywhere *after*
that byte is still OK, then that seems mostly workable, although there are
still potential corner cases, like a device register accepting byte reads
but not byte writes.

Basically, if privileged userspace is going to do dumb things with mmap()ed
MMIO, the kernel can't *guarantee* to save it from itself without a hell of
a lot of invasive work for no other gain. Sure, we can add some extra
fallback paths in our arch code as a best-effort attempt to mitigate
alignment faults - revamping the usercopy routines is on my to-do list, so
I'll bear this in mind, and I think it's basically the same idea we mooted
some time ago for tag faults anyway - but I'm sure someone will inevitably
still find some new way to trip it up. Fortunately, on modern systems many
of the aforementioned dumb things won't actually fault synchronously, so
even if triggered by a usercopy access, the payback will come slightly later
via an asynchronous SError and be considerably more terminal.
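For the record, the livelock itself is easy to see in outline. Below is a
heavily stripped-down paraphrase of the generic_perform_write() retry loop
as of the 5.13-era mm/filemap.c - error paths and locals are elided, so
treat it as a sketch of the shape rather than the real code:

again:
	/*
	 * Fault in the user buffer; this only guarantees that the
	 * *first* byte of the range is actually accessible.
	 */
	if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
		status = -EFAULT;
		break;
	}

	status = a_ops->write_begin(file, mapping, pos, bytes, flags,
				    &page, &fsdata);
	if (unlikely(status < 0))
		break;

	/* Runs with page faults disabled; a fault just stops the copy */
	copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);

	status = a_ops->write_end(file, mapping, pos, bytes, copied,
				  page, fsdata);
	iov_iter_advance(i, copied);
	if (unlikely(copied == 0)) {
		/*
		 * Zero progress is taken to mean "the tail of the
		 * buffer faulted mid-copy": shorten the request and
		 * retry. But if the usercopy reports zero progress on
		 * *every* attempt - as arm64 does when an unaligned
		 * access faults partway through - then fault-in keeps
		 * succeeding, copied stays 0, and we spin here forever.
		 */
		bytes = min_t(unsigned long, PAGE_SIZE - offset,
			      iov_iter_single_seg_count(i));
		goto again;
	}
	pos += copied;
	written += copied;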
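And, conversely, the shape of the fix Al is suggesting, sketched in C for
clarity even though the real change belongs in the arm64 assembly fixup
for .Ltiny15 - the function name here is made up for illustration, not an
actual kernel symbol:

/*
 * Illustrative only: after the optimised copy faults, mop up
 * byte-by-byte from the point of the fault, so that the return
 * value honestly reflects progress. Returns the number of bytes
 * NOT copied, per the uaccess contract.
 */
static unsigned long bytewise_mopup(char __user *to, const char *from,
				    unsigned long n)
{
	unsigned long i;

	for (i = 0; i < n; i++) {
		if (__put_user(from[i], to + i))
			break;	/* genuine fault: stop, report progress */
	}
	return n - i;	/* == n only if not even the first byte stuck */
}

With something like that in place, generic_perform_write() either makes
forward progress or gets a hard failure from fault-in, so the loop
terminates either way.

Thanks,
Robin.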