Date: Fri, 16 Oct 2020 16:05:37 +0200
From: osalvador@suse.de
To: Michal Hocko
Cc: Shijie Luo, akpm@linux-foundation.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linmiaohe@huawei.com, linfeilong@huawei.com
Subject: Re: [PATCH] mm: fix potential pte_unmap_unlock pte error
Message-ID: <8b1e52b7a07b9ff1be9badb73209abda@suse.de>
In-Reply-To: <20201016134215.GL22589@dhcp22.suse.cz>
References: <20201015121534.50910-1-luoshijie1@huawei.com>
 <20201016123137.GH22589@dhcp22.suse.cz>
 <20201016131112.GJ22589@dhcp22.suse.cz>
 <20201016131531.GK22589@dhcp22.suse.cz>
 <20201016134215.GL22589@dhcp22.suse.cz>

On 2020-10-16 15:42, Michal Hocko wrote:
> OK, I finally managed to convince my friday brain to think and grasped
> what the code is intended to do. The loop is hairy and we want to
> prevent from spurious EIO when all the pages are on a proper node.
> So the check has to be done inside the loop. Anyway I would find the
> following fix less error prone and easier to follow
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index eddbe4e56c73..8cc1fc9c4d13 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>  	unsigned long flags = qp->flags;
>  	int ret;
>  	bool has_unmovable = false;
> -	pte_t *pte;
> +	pte_t *pte, *mapped_pte;
>  	spinlock_t *ptl;
>
>  	ptl = pmd_trans_huge_lock(pmd, vma);
> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>  	if (pmd_trans_unstable(pmd))
>  		return 0;
>
> -	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> +	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>  	for (; addr != end; pte++, addr += PAGE_SIZE) {
>  		if (!pte_present(*pte))
>  			continue;
> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>  		} else
>  			break;
>  	}
> -	pte_unmap_unlock(pte - 1, ptl);
> +	pte_unmap_unlock(mapped_pte, ptl);
>  	cond_resched();
>
>  	if (has_unmovable)

It is definitely clearer to grasp.
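
For anyone following the thread, here is a minimal user-space sketch of the
pattern the diff applies: remember the pointer that the map/lock helper
returned and hand exactly that pointer back to the unmap/unlock helper,
rather than deriving it from the loop cursor. All names below (struct entry,
map_and_lock, unmap_and_unlock, walk_entries) are made up for illustration;
this is not the kernel code.

/*
 * Sketch of "save the mapped pointer" bookkeeping, assuming hypothetical
 * helpers that mimic pte_offset_map_lock()/pte_unmap_unlock().
 */
#include <stddef.h>
#include <stdio.h>

struct entry {
	int present;
	int movable;
};

/* stand-ins for pte_offset_map_lock() / pte_unmap_unlock() */
static struct entry *map_and_lock(struct entry *table)
{
	return table;
}

static void unmap_and_unlock(struct entry *mapped)
{
	(void)mapped;	/* nothing to release in this sketch */
}

static int walk_entries(struct entry *table, size_t nr)
{
	struct entry *mapped, *cursor;
	size_t i;
	int ret = 0;

	/* keep the mapped pointer separate from the walking cursor */
	mapped = cursor = map_and_lock(table);

	for (i = 0; i < nr; i++, cursor++) {
		if (!cursor->present)
			continue;
		if (!cursor->movable) {
			ret = -1;
			break;	/* early exit: cursor != table + nr here */
		}
	}

	/*
	 * Unlock with the saved pointer: "cursor - 1" is not in general
	 * the pointer that map_and_lock() returned, especially after an
	 * early break.
	 */
	unmap_and_unlock(mapped);
	return ret;
}

int main(void)
{
	struct entry table[4] = {
		{ 1, 1 },
		{ 0, 0 },
		{ 1, 0 },	/* not movable: triggers the early break */
		{ 1, 1 },
	};

	printf("walk_entries() returned %d\n", walk_entries(table, 4));
	return 0;
}

The patch above does the same bookkeeping with mapped_pte, so that
pte_unmap_unlock() always receives the value that pte_offset_map_lock()
produced, no matter where the loop exits.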