From: John Hubbard <jhubbard@nvidia.com>
To: Peter Xu <peterx@redhat.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	James Houghton <jthoughton@google.com>,
	Jann Horn <jannh@google.com>, Rik van Riel <riel@surriel.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Mike Kravetz" <mike.kravetz@oracle.com>,
	David Hildenbrand <david@redhat.com>,
	Nadav Amit <nadav.amit@gmail.com>
Subject: Re: [PATCH v2 01/10] mm/hugetlb: Let vma_offset_start() to return start
Date: Wed, 7 Dec 2022 13:21:12 -0800	[thread overview]
Message-ID: <e1f40f3f-162e-7d3c-00f1-8c71e3b5dc31@nvidia.com> (raw)
In-Reply-To: <20221207203034.650899-2-peterx@redhat.com>

On 12/7/22 12:30, Peter Xu wrote:
> Despite its name, vma_offset_start() does not return "the start address
> of the range"; it returns the offset that should be added to the
> vma->vm_start address.
> 
> Make it return the actual start vaddr instead. This also simplifies all
> the callers, since wherever the return value is used it is ultimately
> added to vma->vm_start anyway.
> 
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>   fs/hugetlbfs/inode.c | 24 ++++++++++++------------
>   1 file changed, 12 insertions(+), 12 deletions(-)

This is a nice refinement.
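
For anyone skimming the thread, here is a quick userspace sketch of the
pgoff-to-vaddr arithmetic the helper is folding in. Illustrative only:
the struct and PAGE_SHIFT value are simplified stand-ins for the kernel
definitions, and the numbers in main() are made up.

#include <stdio.h>

#define PAGE_SHIFT 12

/* Simplified stand-in for the two vma fields the helper consults. */
struct vma {
        unsigned long vm_start;   /* first vaddr mapped by the vma */
        unsigned long vm_pgoff;   /* file page offset mapped at vm_start */
};

/*
 * After this patch: return the first vaddr in the vma that maps file
 * page offset 'start' or later, rather than just the byte offset.
 */
static unsigned long vma_offset_start(struct vma *vma, unsigned long start)
{
        unsigned long offset = 0;

        if (vma->vm_pgoff < start)
                offset = (start - vma->vm_pgoff) << PAGE_SHIFT;

        return vma->vm_start + offset;
}

int main(void)
{
        /* Hypothetical vma mapping file pages 4.. at 0x7f0000000000. */
        struct vma vma = { .vm_start = 0x7f0000000000UL, .vm_pgoff = 4 };

        /* Range starts at file page 10: six pages into the vma. */
        printf("%#lx\n", vma_offset_start(&vma, 10)); /* 0x7f0000006000 */

        /* Range starts before the vma: clamps to vm_start. */
        printf("%#lx\n", vma_offset_start(&vma, 2));  /* 0x7f0000000000 */

        return 0;
}

With the old version every caller had to add vma->vm_start by hand, as
the removed "vma->vm_start + v_start" hunks in the quoted diff below
show; returning the vaddr directly drops that repetition.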

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,
-- 
John Hubbard
NVIDIA

> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 790d2727141a..fdb16246f46e 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -412,10 +412,12 @@ static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
>    */
>   static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
>   {
> +	unsigned long offset = 0;
> +
>   	if (vma->vm_pgoff < start)
> -		return (start - vma->vm_pgoff) << PAGE_SHIFT;
> -	else
> -		return 0;
> +		offset = (start - vma->vm_pgoff) << PAGE_SHIFT;
> +
> +	return vma->vm_start + offset;
>   }
>   
>   static unsigned long vma_offset_end(struct vm_area_struct *vma, pgoff_t end)
> @@ -457,7 +459,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
>   		v_start = vma_offset_start(vma, start);
>   		v_end = vma_offset_end(vma, end);
>   
> -		if (!hugetlb_vma_maps_page(vma, vma->vm_start + v_start, page))
> +		if (!hugetlb_vma_maps_page(vma, v_start, page))
>   			continue;
>   
>   		if (!hugetlb_vma_trylock_write(vma)) {
> @@ -473,8 +475,8 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
>   			break;
>   		}
>   
> -		unmap_hugepage_range(vma, vma->vm_start + v_start, v_end,
> -				NULL, ZAP_FLAG_DROP_MARKER);
> +		unmap_hugepage_range(vma, v_start, v_end, NULL,
> +				     ZAP_FLAG_DROP_MARKER);
>   		hugetlb_vma_unlock_write(vma);
>   	}
>   
> @@ -507,10 +509,9 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
>   		 */
>   		v_start = vma_offset_start(vma, start);
>   		v_end = vma_offset_end(vma, end);
> -		if (hugetlb_vma_maps_page(vma, vma->vm_start + v_start, page))
> -			unmap_hugepage_range(vma, vma->vm_start + v_start,
> -						v_end, NULL,
> -						ZAP_FLAG_DROP_MARKER);
> +		if (hugetlb_vma_maps_page(vma, v_start, page))
> +			unmap_hugepage_range(vma, v_start, v_end, NULL,
> +					     ZAP_FLAG_DROP_MARKER);
>   
>   		kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
>   		hugetlb_vma_unlock_write(vma);
> @@ -540,8 +541,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>   		v_start = vma_offset_start(vma, start);
>   		v_end = vma_offset_end(vma, end);
>   
> -		unmap_hugepage_range(vma, vma->vm_start + v_start, v_end,
> -				     NULL, zap_flags);
> +		unmap_hugepage_range(vma, v_start, v_end, NULL, zap_flags);
>   
>   		/*
>   		 * Note that vma lock only exists for shared/non-private


