Message-ID:
Date: Thu, 8 May 2025 11:51:28 +0530
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 1/2] mm: Call pointers to ptes as ptep
To: Dev Jain , akpm@linux-foundation.org
Cc: Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
 jannh@google.com, pfalcato@suse.de, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, david@redhat.com, peterx@redhat.com,
 ryan.roberts@arm.com, mingo@kernel.org, libang.li@antgroup.com,
 maobibo@loongson.cn, zhengqi.arch@bytedance.com, baohua@kernel.org,
 willy@infradead.org,
 ioworker0@gmail.com, yang@os.amperecomputing.com,
 baolin.wang@linux.alibaba.com, ziy@nvidia.com, hughd@google.com
References: <20250507060256.78278-1-dev.jain@arm.com>
 <20250507060256.78278-2-dev.jain@arm.com>
Content-Language: en-US
From: Anshuman Khandual
In-Reply-To: <20250507060256.78278-2-dev.jain@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 5/7/25 11:32, Dev Jain wrote:
> Avoid confusion between pte_t* and pte_t data types by suffixing pointer
> type variables with p. No functional change.
>
> Signed-off-by: Dev Jain
> ---
>  mm/mremap.c | 29 +++++++++++++++--------------
>  1 file changed, 15 insertions(+), 14 deletions(-)
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 7db9da609c84..0163e02e5aa8 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -176,7 +176,8 @@ static int move_ptes(struct pagetable_move_control *pmc,
>          struct vm_area_struct *vma = pmc->old;
>          bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
>          struct mm_struct *mm = vma->vm_mm;
> -        pte_t *old_pte, *new_pte, pte;
> +        pte_t *old_ptep, *new_ptep;
> +        pte_t pte;
>          pmd_t dummy_pmdval;
>          spinlock_t *old_ptl, *new_ptl;
>          bool force_flush = false;
> @@ -211,8 +212,8 @@ static int move_ptes(struct pagetable_move_control *pmc,
>           * We don't have to worry about the ordering of src and dst
>           * pte locks because exclusive mmap_lock prevents deadlock.
>           */
> -        old_pte = pte_offset_map_lock(mm, old_pmd, old_addr, &old_ptl);
> -        if (!old_pte) {
> +        old_ptep = pte_offset_map_lock(mm, old_pmd, old_addr, &old_ptl);
> +        if (!old_ptep) {
>                  err = -EAGAIN;
>                  goto out;
>          }
> @@ -223,10 +224,10 @@ static int move_ptes(struct pagetable_move_control *pmc,
>           * mmap_lock, so this new_pte page is stable, so there is no need to get
>           * pmdval and do pmd_same() check.
>           */
> -        new_pte = pte_offset_map_rw_nolock(mm, new_pmd, new_addr, &dummy_pmdval,
> +        new_ptep = pte_offset_map_rw_nolock(mm, new_pmd, new_addr, &dummy_pmdval,
>                                             &new_ptl);
> -        if (!new_pte) {
> -                pte_unmap_unlock(old_pte, old_ptl);
> +        if (!new_ptep) {
> +                pte_unmap_unlock(old_ptep, old_ptl);
>                  err = -EAGAIN;
>                  goto out;
>          }
> @@ -235,12 +236,12 @@ static int move_ptes(struct pagetable_move_control *pmc,
>          flush_tlb_batched_pending(vma->vm_mm);
>          arch_enter_lazy_mmu_mode();
>
> -        for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
> -                                   new_pte++, new_addr += PAGE_SIZE) {
> -                if (pte_none(ptep_get(old_pte)))
> +        for (; old_addr < old_end; old_ptep++, old_addr += PAGE_SIZE,
> +                                   new_ptep++, new_addr += PAGE_SIZE) {
> +                if (pte_none(ptep_get(old_ptep)))
>                          continue;
>
> -                pte = ptep_get_and_clear(mm, old_addr, old_pte);
> +                pte = ptep_get_and_clear(mm, old_addr, old_ptep);
>                  /*
>                   * If we are remapping a valid PTE, make sure
>                   * to flush TLB before we drop the PTL for the
> @@ -258,7 +259,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
>                  pte = move_soft_dirty_pte(pte);
>
>                  if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
> -                        pte_clear(mm, new_addr, new_pte);
> +                        pte_clear(mm, new_addr, new_ptep);
>                  else {
>                          if (need_clear_uffd_wp) {
>                                  if (pte_present(pte))
> @@ -266,7 +267,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
>                                  else if (is_swap_pte(pte))
>                                          pte = pte_swp_clear_uffd_wp(pte);
>                          }
> -                        set_pte_at(mm, new_addr, new_pte, pte);
> +                        set_pte_at(mm, new_addr, new_ptep, pte);
>                  }
>          }
>
> @@ -275,8 +276,8 @@ static int move_ptes(struct pagetable_move_control *pmc,
>                  flush_tlb_range(vma, old_end - len, old_end);
>          if (new_ptl != old_ptl)
>                  spin_unlock(new_ptl);
> -        pte_unmap(new_pte - 1);
> -        pte_unmap_unlock(old_pte - 1, old_ptl);
> +        pte_unmap(new_ptep - 1);
> +        pte_unmap_unlock(old_ptep - 1, old_ptl);
>  out:
>          if (pmc->need_rmap_locks)
>                  drop_rmap_locks(vma);

Reviewed-by: Anshuman Khandual
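
As an aside for anyone skimming the rename: below is a minimal userspace
sketch (not kernel code, and not part of this patch) of the convention being
adopted. The pte_t typedef and the ptep_get_demo() helper are hypothetical
stand-ins for the kernel's pte_t and ptep_get(), shown only to illustrate why
a pointer-to-entry variable reads better with a "p" suffix while a plain
entry value carries none.

/* Hypothetical stand-in for the kernel's pte_t: a page-table entry *value*. */
typedef struct { unsigned long val; } pte_t;

/* Stand-in for ptep_get(): read the entry value through the slot pointer. */
static pte_t ptep_get_demo(pte_t *ptep)
{
        return *ptep;
}

int main(void)
{
        pte_t table[4] = { { 0x1 }, { 0x2 }, { 0x3 }, { 0x4 } };
        pte_t *ptep = &table[2];          /* points at a slot -> "p" suffix */
        pte_t pte = ptep_get_demo(ptep);  /* holds a value    -> no suffix  */

        return (int)pte.val;              /* exits with status 3 */
}

With the suffix applied, a call such as set_pte_at(mm, new_addr, new_ptep, pte)
makes it obvious at a glance which argument is the destination slot and which
is the value being written.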