From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
From: Xiaoming Ni <nixiaoming@huawei.com>
To: Al Viro, Chen Huang
CC: Andrew Morton, Stephen Rothwell, "Matthew Wilcox (Oracle)", Randy Dunlap,
 Catalin Marinas, Will Deacon, Linux ARM, linux-mm, open list
Message-ID: <92fa298d-9d88-0ca4-40d9-13690dcd42f9@huawei.com>
Date: Wed, 23 Jun 2021 11:24:54 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.0.1
MIME-Version: 1.0
Content-Type: text/plain; charset="gbk"; format=flowed
Content-Transfer-Encoding: 7bit
On 2021/6/23 10:50, Al Viro wrote:
> On Wed, Jun 23, 2021 at 10:39:31AM +0800, Chen Huang wrote:
>
>> Then, when the kernel handles the alignment fault, it will not panic.
>> As the arm64 memory model spec says, when the address is not a multiple
>> of the element size, the access is unaligned. Unaligned accesses are
>> allowed to addresses marked as Normal, but not to Device regions. An
>> unaligned access to a Device region triggers an exception (alignment
>> fault).
>>
>> do_alignment_fault
>> do_bad_area
>> __do_kernel_fault
>> fixup_exception
>>
>> But that fixup can't handle the unaligned copy, so
>> copy_page_from_iter_atomic() returns 0 and the write path gets trapped
>> in an infinite loop.
>
> Looks like you need to fix your raw_copy_from_user(), then...

Exiting the loop when iov_iter_copy_from_user_atomic() returns 0 should
also solve the problem, and it is easier; see the sketch at the end of
this mail.

Thanks.
Xiaoming Ni
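---
Below is a standalone, much-simplified model of the retry logic being
discussed, only to illustrate where the proposed bail-out would sit. It is
not the actual mm/filemap.c code: copy_chunk() is a made-up stand-in for
iov_iter_copy_from_user_atomic(), and the constant -14 stands for -EFAULT
so the snippet builds and runs in userspace.

#include <stdio.h>
#include <stddef.h>

/*
 * copy_chunk() stands in for iov_iter_copy_from_user_atomic(): it returns
 * the number of bytes copied.  Here it models the failure mode from the
 * report: the fault cannot be fixed up, so no progress is ever made.
 */
static size_t copy_chunk(size_t want)
{
	(void)want;
	return 0;
}

/*
 * Simplified shape of the write retry loop.  Without the "copied == 0"
 * check, the loop would spin forever once copy_chunk() stops making
 * progress; with it, the caller gets a short write or an error instead.
 */
static long perform_write(size_t total)
{
	size_t written = 0;

	while (written < total) {
		size_t copied = copy_chunk(total - written);

		if (copied == 0)
			return written ? (long)written : -14; /* -EFAULT */

		written += copied;
	}
	return (long)written;
}

int main(void)
{
	printf("result: %ld\n", perform_write(4096));
	return 0;
}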