Subject: Re: [PATCH hmm 8/8] mm/hmm: add missing call to hmm_pte_need_fault in HMM_PFN_SPECIAL handling
To: Jason Gunthorpe, Jerome Glisse
CC: John Hubbard, Christoph Hellwig, Philip Yang, Jason Gunthorpe
References: <20200311183506.3997-1-jgg@ziepe.ca> <20200311183506.3997-9-jgg@ziepe.ca>
From: Ralph Campbell
Message-ID: <2feb1061-09b9-766d-3d0d-be17debedde8@nvidia.com>
Date: Wed, 11 Mar 2020 18:38:07 -0700
In-Reply-To: <20200311183506.3997-9-jgg@ziepe.ca>

On 3/11/20 11:35 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe
>
> Currently, if a special PTE is encountered, hmm_range_fault() immediately
> returns EFAULT and sets the HMM_PFN_SPECIAL error output (which nothing
> uses).
>
> EFAULT should only be returned after testing with hmm_pte_need_fault().
>
> Also, pte_devmap() and pte_special() are exclusive, and there is no need to
> check IS_ENABLED: pte_special() is stubbed out to return false on
> unsupported architectures.
>
> Fixes: 992de9a8b751 ("mm/hmm: allow to mirror vma of a file on a DAX backed filesystem")
> Signed-off-by: Jason Gunthorpe

Reviewed-by: Ralph Campbell

> ---
>  mm/hmm.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/mm/hmm.c b/mm/hmm.c
> index f61fddf2ef6505..ca33d086bdc190 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -335,16 +335,21 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  			pte_unmap(ptep);
>  			return -EBUSY;
>  		}
> -	} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
> -		if (!is_zero_pfn(pte_pfn(pte))) {
> +	}
> +
> +	/*
> +	 * Since each architecture defines a struct page for the zero page, just
> +	 * fall through and treat it like a normal page.
> +	 */
> +	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
> +		hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0, &fault,
> +				   &write_fault);
> +		if (fault || write_fault) {
>  			pte_unmap(ptep);
> -			*pfn = range->values[HMM_PFN_SPECIAL];
>  			return -EFAULT;
>  		}
> -		/*
> -		 * Since each architecture defines a struct page for the zero
> -		 * page, just fall through and treat it like a normal page.
> -		 */
> +		*pfn = range->values[HMM_PFN_SPECIAL];
> +		return 0;
>  	}
>
>  	*pfn = hmm_device_entry_from_pfn(range, pte_pfn(pte)) | cpu_flags;
>
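
For readability, here is a sketch of how the special-PTE path in hmm_vma_handle_pte() reads once this hunk is applied. It is reconstructed from the diff above rather than copied from the resulting file; the earlier pte_devmap() handling visible in the hunk's leading context and the local declarations of fault, write_fault, orig_pfn and cpu_flags are elided.

	/*
	 * Since each architecture defines a struct page for the zero page, just
	 * fall through and treat it like a normal page.
	 */
	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
		/* Only fault if the caller actually asked for this address. */
		hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0, &fault,
				   &write_fault);
		if (fault || write_fault) {
			pte_unmap(ptep);
			return -EFAULT;
		}
		/* Not requested: report the special marker instead of failing. */
		*pfn = range->values[HMM_PFN_SPECIAL];
		return 0;
	}

	*pfn = hmm_device_entry_from_pfn(range, pte_pfn(pte)) | cpu_flags;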