From: Chen Huang
Subject: [BUG] arm64: an infinite loop in generic_perform_write()
To: Andrew Morton, Stephen Rothwell, Matthew Wilcox (Oracle), Al Viro, Randy Dunlap, Catalin Marinas, Will Deacon
CC: Linux ARM, linux-mm, open list
Date: Wed, 23 Jun 2021 10:39:31 +0800

When we access device memory in userspace and then perform an unaligned write to a file, an infinite loop can occur in generic_perform_write().
For example, we register a uio device and mmap the device, then perform a write to a file, like this:

	device_addr = mmap(device_fd);
	write(file_fd, device_addr + unaligned_num, size);

We found that an infinite loop happens in the generic_perform_write() function:

again:
	copied = copy_page_from_iter_atomic();	/* copied = 0 */
	status = ops->write_end();		/* status = 0 */
	if (status == 0)
		goto again;

In copy_page_from_iter_atomic(), the copyin() function finally calls __arch_copy_from_user(), which creates an exception table entry for each 'insn'. So when the kernel handles the alignment fault, it does not panic.

As the arm64 memory model specification says, when the address is not a multiple of the element size, the access is unaligned. Unaligned accesses are allowed to addresses marked as Normal, but not to Device regions. An unaligned access to a Device region triggers an exception (alignment fault):

do_alignment_fault
  do_bad_area
    __do_kernel_fault
      fixup_exception

But that fixup cannot handle the unaligned copy, so copy_page_from_iter_atomic() returns 0 and generic_perform_write() is stuck in the loop.

Reported-by: Chen Huang