Subject: Re: [PATCH hmm 1/8] mm/hmm: add missing unmaps of the ptep during hmm_vma_handle_pte()
To: Jason Gunthorpe, Jerome Glisse
CC: John Hubbard, Christoph Hellwig, Philip Yang
From: Ralph Campbell
References: <20200311183506.3997-1-jgg@ziepe.ca> <20200311183506.3997-2-jgg@ziepe.ca>
Message-ID: <1cfdfdff-6d41-b73a-fe48-c7a10c221482@nvidia.com>
Date: Wed, 11 Mar 2020 18:28:30 -0700
In-Reply-To: <20200311183506.3997-2-jgg@ziepe.ca>
On 3/11/20 11:34 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe
> 
> Many of the direct returns of error skipped doing the pte_unmap(). All non
> zero exit paths must unmap the pte.
> 
> The pte_unmap() is split unnaturally like this because some of the error
> exit paths trigger a sleep and must release the lock before sleeping.
> 
> Fixes: 992de9a8b751 ("mm/hmm: allow to mirror vma of a file on a DAX backed filesystem")
> Fixes: 53f5c3f489ec ("mm/hmm: factor out pte and pmd handling to simplify hmm_vma_walk_pmd()")
> Signed-off-by: Jason Gunthorpe

The changes look OK to me, but there is one issue noted below.
In any case, you can add:
Reviewed-by: Ralph Campbell

> ---
>  mm/hmm.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 72e5a6d9a41756..35f85424176d14 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -325,6 +325,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  	}
>  
>  	/* Report error for everything else */
> +	pte_unmap(ptep);
>  	*pfn = range->values[HMM_PFN_ERROR];
>  	return -EFAULT;
>  } else {
> @@ -339,10 +340,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  	if (pte_devmap(pte)) {
>  		hmm_vma_walk->pgmap = get_dev_pagemap(pte_pfn(pte),
>  					      hmm_vma_walk->pgmap);
> -		if (unlikely(!hmm_vma_walk->pgmap))
> +		if (unlikely(!hmm_vma_walk->pgmap)) {
> +			pte_unmap(ptep);
>  			return -EBUSY;
> +		}
>  	} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
>  		if (!is_zero_pfn(pte_pfn(pte))) {
> +			pte_unmap(ptep);
>  			*pfn = range->values[HMM_PFN_SPECIAL];
>  			return -EFAULT;
>  		}
> @@ -437,7 +441,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
>  
>  		r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, &pfns[i]);
>  		if (r) {
> -			/* hmm_vma_handle_pte() did unmap pte directory */
> +			/* hmm_vma_handle_pte() did pte_unmap() */
>  			hmm_vma_walk->last = addr;
>  			return r;
>  		}

I think there is a case where hmm_vma_handle_pte() is called, a fault is
requested, pte_unmap() and hmm_vma_walk_hole_() are called, and the latter
returns zero (the fault was handled OK), but now the page table is unmapped
for the successive loop iterations, and pte_unmap(ptep - 1) is still called
at the end of the loop. Since this isn't an issue on x86_64 but is on x86_32,
I can see how it could be missed.