From: Miaohe Lin <linmiaohe@huawei.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>,
	David Hildenbrand <david@redhat.com>,
	Michal Hocko <mhocko@suse.com>, Peter Xu <peterx@redhat.com>,
	Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	"Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Prakash Sangappa <prakash.sangappa@oracle.com>,
	James Houghton <jthoughton@google.com>,
	Mina Almasry <almasrymina@google.com>,
	Pasha Tatashin <pasha.tatashin@soleen.com>,
	Axel Rasmussen <axelrasmussen@google.com>,
	Ray Fucillo <Ray.Fucillo@intersystems.com>,
	Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 7/8] hugetlb: create hugetlb_unmap_file_folio to unmap single file folio
Date: Mon, 29 Aug 2022 10:44:28 +0800
Message-ID: <0e31f9da-5953-2f44-1a59-888e3313e919@huawei.com>
In-Reply-To: <20220824175757.20590-8-mike.kravetz@oracle.com>

On 2022/8/25 1:57, Mike Kravetz wrote:
> Create the new routine hugetlb_unmap_file_folio that will unmap a single
> file folio.  This is refactored code from hugetlb_vmdelete_list.  It is
> modified to do locking within the routine itself and check whether the
> page is mapped within a specific vma before unmapping.
> 
> This refactoring will be put to use and expanded upon in a subsequent
> patch adding vma specific locking.
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  fs/hugetlbfs/inode.c | 123 +++++++++++++++++++++++++++++++++----------
>  1 file changed, 94 insertions(+), 29 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index e83fd31671b3..b93d131b0cb5 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -371,6 +371,94 @@ static void hugetlb_delete_from_page_cache(struct page *page)
>  	delete_from_page_cache(page);
>  }
>  
> +/*
> + * Called with i_mmap_rwsem held for inode based vma maps.  This makes
> + * sure vma (and vm_mm) will not go away.  We also hold the hugetlb fault
> + * mutex for the page in the mapping.  So, we can not race with page being
> + * faulted into the vma.
> + */
> +static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
> +				unsigned long addr, struct page *page)
> +{
> +	pte_t *ptep, pte;
> +
> +	ptep = huge_pte_offset(vma->vm_mm, addr,
> +			huge_page_size(hstate_vma(vma)));
> +
> +	if (!ptep)
> +		return false;
> +
> +	pte = huge_ptep_get(ptep);
> +	if (huge_pte_none(pte) || !pte_present(pte))
> +		return false;
> +
> +	if (pte_page(pte) == page)
> +		return true;

I'm wondering whether the pte entry could change after we check it, since
huge_pte_lock is not held here. But I think holding i_mmap_rwsem in write
mode should give us that guarantee, since e.g. a migration entry is only
changed back to a huge pte entry while i_mmap_rwsem is held in read mode.
Or am I missing something?
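
For reference, the pattern I have in mind is the read-side locking done by
rmap_walk_file(). Roughly (a hand-written sketch, not actual kernel code,
and the helper name is made up):

	static void restore_migration_entries_sketch(struct folio *folio)
	{
		struct address_space *mapping = folio_mapping(folio);

		/*
		 * Sketch only: mirrors remove_migration_ptes() ->
		 * rmap_walk_file().  The walk takes i_mmap_rwsem in
		 * read mode, so it cannot rewrite the pte while the
		 * truncate path holds the rwsem in write mode.
		 */
		i_mmap_lock_read(mapping);
		/* ... rewrite ptes for each vma mapping the folio ... */
		i_mmap_unlock_read(mapping);
	}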

> +
> +	return false;
> +}
> +
> +/*
> + * Can vma_offset_start/vma_offset_end overflow on 32-bit arches?
> + * No, because the interval tree returns us only those vmas
> + * which overlap the truncated area starting at pgoff,
> + * and no vma on a 32-bit arch can span beyond the 4GB.
> + */
> +static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
> +{
> +	if (vma->vm_pgoff < start)
> +		return (start - vma->vm_pgoff) << PAGE_SHIFT;
> +	else
> +		return 0;
> +}
> +
> +static unsigned long vma_offset_end(struct vm_area_struct *vma, pgoff_t end)
> +{
> +	unsigned long t_end;
> +
> +	if (!end)
> +		return vma->vm_end;
> +
> +	t_end = ((end - vma->vm_pgoff) << PAGE_SHIFT) + vma->vm_start;
> +	if (t_end > vma->vm_end)
> +		t_end = vma->vm_end;
> +	return t_end;
> +}
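
A quick worked example of the offset math above (my own numbers, assuming
4K pages so PAGE_SHIFT == 12):

	vma->vm_pgoff == 10, truncate starts at pgoff == 16
	vma_offset_start() == (16 - 10) << PAGE_SHIFT == 0x6000	/* 6 pages */

so the shifted value fits comfortably in an unsigned long even on a
32-bit arch, matching the comment above the helpers.
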
> +
> +/*
> + * Called with hugetlb fault mutex held.  Therefore, no more mappings to
> + * this folio can be created while executing the routine.
> + */
> +static void hugetlb_unmap_file_folio(struct hstate *h,
> +					struct address_space *mapping,
> +					struct folio *folio, pgoff_t index)
> +{
> +	struct rb_root_cached *root = &mapping->i_mmap;
> +	struct page *page = &folio->page;
> +	struct vm_area_struct *vma;
> +	unsigned long v_start;
> +	unsigned long v_end;
> +	pgoff_t start, end;
> +
> +	start = index * pages_per_huge_page(h);
> +	end = ((index + 1) * pages_per_huge_page(h));

It seems the outer parentheses are unneeded?
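
i.e. (just sketching the suggested form):

	start = index * pages_per_huge_page(h);
	end = (index + 1) * pages_per_huge_page(h);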

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks,
Miaohe Lin




Thread overview: 43+ messages
2022-08-24 17:57 [PATCH 0/8] hugetlb: Use new vma mutex for huge pmd sharing synchronization Mike Kravetz
2022-08-24 17:57 ` [PATCH 1/8] hugetlbfs: revert use i_mmap_rwsem to address page fault/truncate race Mike Kravetz
2022-08-27  2:50   ` Miaohe Lin
2022-08-24 17:57 ` [PATCH 2/8] hugetlbfs: revert use i_mmap_rwsem for more pmd sharing synchronization Mike Kravetz
2022-08-27  2:59   ` Miaohe Lin
2022-08-24 17:57 ` [PATCH 3/8] hugetlb: rename remove_huge_page to hugetlb_delete_from_page_cache Mike Kravetz
2022-08-27  3:08   ` Miaohe Lin
2022-08-24 17:57 ` [PATCH 4/8] hugetlb: handle truncate racing with page faults Mike Kravetz
2022-08-25 17:00   ` Mike Kravetz
2022-08-27  8:02   ` Miaohe Lin
2022-08-29 21:53     ` Mike Kravetz
2022-09-06 13:57   ` Sven Schnelle
2022-09-06 16:48     ` Mike Kravetz
2022-09-06 18:05       ` Mike Kravetz
2022-09-06 23:08         ` Mike Kravetz
2022-09-07  2:11           ` Miaohe Lin
2022-09-07  2:37             ` Mike Kravetz
2022-09-07  3:07               ` Miaohe Lin
2022-09-07  3:30                 ` Mike Kravetz
2022-09-07  8:22                   ` Sven Schnelle
2022-09-07 14:50                     ` Mike Kravetz
2022-08-24 17:57 ` [PATCH 5/8] hugetlb: rename vma_shareable() and refactor code Mike Kravetz
2022-08-27  8:07   ` Miaohe Lin
2022-08-24 17:57 ` [PATCH 6/8] hugetlb: add vma based lock for pmd sharing Mike Kravetz
2022-08-27  9:30   ` Miaohe Lin
2022-08-29 22:24     ` Mike Kravetz
2022-08-30  2:34       ` Miaohe Lin
2022-09-07 20:50       ` Mike Kravetz
2022-09-08  2:04         ` Miaohe Lin
2022-08-24 17:57 ` [PATCH 7/8] hugetlb: create hugetlb_unmap_file_folio to unmap single file folio Mike Kravetz
2022-08-29  2:44   ` Miaohe Lin [this message]
2022-08-29 22:37     ` Mike Kravetz
2022-08-30  2:46       ` Miaohe Lin
2022-09-02 21:35         ` Mike Kravetz
2022-09-05  2:32           ` Miaohe Lin
2022-08-24 17:57 ` [PATCH 8/8] hugetlb: use new vma_lock for pmd sharing synchronization Mike Kravetz
2022-08-30  2:02   ` Miaohe Lin
2022-09-02 23:07     ` Mike Kravetz
2022-09-05  3:08       ` Miaohe Lin
2022-09-12 23:02         ` Mike Kravetz
2022-09-13  2:14           ` Miaohe Lin
2022-09-14  0:50             ` Mike Kravetz
2022-09-14  2:08               ` Miaohe Lin
