Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error
From: Shijie Luo
To: Michal Hocko
Date: Sat, 17 Oct 2020 09:55:42 +0800
References: <20201015121534.50910-1-luoshijie1@huawei.com> <20201016123137.GH22589@dhcp22.suse.cz> <20201016131112.GJ22589@dhcp22.suse.cz> <20201016131531.GK22589@dhcp22.suse.cz> <20201016134215.GL22589@dhcp22.suse.cz> <8b1e52b7a07b9ff1be9badb73209abda@suse.de>
In-Reply-To: <8b1e52b7a07b9ff1be9badb73209abda@suse.de>

On 2020/10/16 22:05, osalvador@suse.de wrote:
> On 2020-10-16 15:42, Michal Hocko wrote:
>> OK, I finally managed to convince my friday brain to think and grasped
>> what the code is intended to do. The loop is hairy and we want to
>> prevent a spurious EIO when all the pages are on a proper node, so
>> the check has to be done inside the loop.
>> Anyway I would find the following fix less error prone and easier to follow
>>
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index eddbe4e56c73..8cc1fc9c4d13 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>>  	unsigned long flags = qp->flags;
>>  	int ret;
>>  	bool has_unmovable = false;
>> -	pte_t *pte;
>> +	pte_t *pte, *mapped_pte;
>>  	spinlock_t *ptl;
>>
>>  	ptl = pmd_trans_huge_lock(pmd, vma);
>> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>>  	if (pmd_trans_unstable(pmd))
>>  		return 0;
>>
>> -	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>> +	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>>  	for (; addr != end; pte++, addr += PAGE_SIZE) {
>>  		if (!pte_present(*pte))
>>  			continue;
>> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>>  		} else
>>  			break;
>>  	}
>> -	pte_unmap_unlock(pte - 1, ptl);
>> +	pte_unmap_unlock(mapped_pte, ptl);
>>  	cond_resched();
>>
>>  	if (has_unmovable)
>
> It is more clear to grasp, definitely.

Yeah, this one is more comprehensible, I'll send a v2 patch, thank you.