Subject: Re: [PATCH hmm 3/6] mm/hmm: remove unused code and tidy comments
From: Ralph Campbell
To: Jason Gunthorpe, Jerome Glisse
CC: John Hubbard, Christoph Hellwig, Philip Yang, Jason Gunthorpe
Date: Fri, 20 Mar 2020 14:46:09 -0700
Message-ID: <7d7e5bad-8956-775b-1a4c-c927b2464459@nvidia.com>
In-Reply-To: <20200320164905.21722-4-jgg@ziepe.ca>
References: <20200320164905.21722-1-jgg@ziepe.ca> <20200320164905.21722-4-jgg@ziepe.ca>

On 3/20/20 9:49 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe
>
> Delete several functions that are never called, fix some desync between
> comments and structure content, remove an unused ret, and move one
> function only used by hmm.c into hmm.c
>
> Signed-off-by: Jason Gunthorpe

Reviewed-by: Ralph Campbell

> ---
>  include/linux/hmm.h | 50 ---------------------------------------------
>  mm/hmm.c            | 12 +++++++++++
>  2 files changed, 12 insertions(+), 50 deletions(-)
>
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index bb6be4428633a8..184a8633260f9d 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -120,9 +120,6 @@ enum hmm_pfn_value_e {
>   *
>   * @notifier: a mmu_interval_notifier that includes the start/end
>   * @notifier_seq: result of mmu_interval_read_begin()
> - * @hmm: the core HMM structure this range is active against
> - * @vma: the vm area struct for the range
> - * @list: all range lock are on a list
>   * @start: range virtual start address (inclusive)
>   * @end: range virtual end address (exclusive)
>   * @pfns: array of pfns (big enough for the range)
> @@ -131,7 +128,6 @@ enum hmm_pfn_value_e {
>   * @default_flags: default flags for the range (write, read, ... see hmm doc)
>   * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter
>   * @pfn_shifts: pfn shift value (should be <= PAGE_SHIFT)

s/pfn_shifts/pfn_shift

> - * @valid: pfns array did not change since it has been fill by an HMM function
>   * @dev_private_owner: owner of device private pages
>   */
>  struct hmm_range {
> @@ -171,52 +167,6 @@ static inline struct page *hmm_device_entry_to_page(const struct hmm_range *rang
>  	return pfn_to_page(entry >> range->pfn_shift);
>  }
>
> -/*
> - * hmm_device_entry_to_pfn() - return pfn value store in a device entry
> - * @range: range use to decode device entry value
> - * @entry: device entry to extract pfn from
> - * Return: pfn value if device entry is valid, -1UL otherwise
> - */
> -static inline unsigned long
> -hmm_device_entry_to_pfn(const struct hmm_range *range, uint64_t pfn)
> -{
> -	if (pfn == range->values[HMM_PFN_NONE])
> -		return -1UL;
> -	if (pfn == range->values[HMM_PFN_ERROR])
> -		return -1UL;
> -	if (pfn == range->values[HMM_PFN_SPECIAL])
> -		return -1UL;
> -	if (!(pfn & range->flags[HMM_PFN_VALID]))
> -		return -1UL;
> -	return (pfn >> range->pfn_shift);
> -}
> -
> -/*
> - * hmm_device_entry_from_page() - create a valid device entry for a page
> - * @range: range use to encode HMM pfn value
> - * @page: page for which to create the device entry
> - * Return: valid device entry for the page
> - */
> -static inline uint64_t hmm_device_entry_from_page(const struct hmm_range *range,
> -						  struct page *page)
> -{
> -	return (page_to_pfn(page) << range->pfn_shift) |
> -		range->flags[HMM_PFN_VALID];
> -}
> -
> -/*
> - * hmm_device_entry_from_pfn() - create a valid device entry value from pfn
> - * @range: range use to encode HMM pfn value
> - * @pfn: pfn value for which to create the device entry
> - * Return: valid device entry for the pfn
> - */
> -static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
> -						 unsigned long pfn)
> -{
> -	return (pfn << range->pfn_shift) |
> -		range->flags[HMM_PFN_VALID];
> -}
> -
>  /* Don't fault in missing PTEs, just snapshot the current state. */
>  #define HMM_FAULT_SNAPSHOT		(1 << 1)
>
> diff --git a/mm/hmm.c b/mm/hmm.c
> index b4f662eadb7a7c..687d21c675ee60 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -37,6 +37,18 @@ enum {
>  	NEED_WRITE_FAULT = 1 << 1,
>  };
>
> +/*
> + * hmm_device_entry_from_pfn() - create a valid device entry value from pfn
> + * @range: range use to encode HMM pfn value
> + * @pfn: pfn value for which to create the device entry
> + * Return: valid device entry for the pfn
> + */
> +static uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
> +					  unsigned long pfn)
> +{
> +	return (pfn << range->pfn_shift) | range->flags[HMM_PFN_VALID];
> +}
> +
>  static int hmm_pfns_fill(unsigned long addr, unsigned long end,
>  			 struct hmm_range *range, enum hmm_pfn_value_e value)
>  {
>