From: John Hubbard <jhubbard@nvidia.com>
To: Peter Xu <peterx@redhat.com>
Cc: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
Jann Horn <jannh@google.com>,
Andrea Arcangeli <aarcange@redhat.com>,
James Houghton <jthoughton@google.com>,
Rik van Riel <riel@surriel.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Nadav Amit <nadav.amit@gmail.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
David Hildenbrand <david@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
Muchun Song <songmuchun@bytedance.com>
Subject: Re: [PATCH v2 10/10] mm/hugetlb: Document why page_vma_mapped_walk() is safe to walk
Date: Thu, 8 Dec 2022 16:24:19 -0800
Message-ID: <7f3096c5-c3cc-4ead-7c5e-8bade6c930da@nvidia.com>
In-Reply-To: <Y5JjTPTxCWSklCan@x1n>

On 12/8/22 14:21, Peter Xu wrote:
>
> Firstly, this patch (to be squashed into the previous one) documents why
> page_vma_mapped_walk() does not need to take any further lock before
> calling hugetlb_walk().
>
> To call hugetlb_walk() we need to hold either of the locks listed below
> (in either read or write mode), according to the rules we set up for it
> in patch 3:
>
> (1) hugetlb vma lock
> (2) i_mmap_rwsem lock
>
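For reference, this rule is the sort of thing a lockdep-style assertion
inside hugetlb_walk() can enforce. A rough sketch of the idea (the
helper and field names here are illustrative assumptions, not quotes
from the series):

        static inline pte_t *
        hugetlb_walk(struct vm_area_struct *vma, unsigned long addr,
                     unsigned long sz)
        {
                struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
                struct address_space *mapping = vma->vm_file->f_mapping;

                /*
                 * Walking hugetlb pgtables is only safe against pmd
                 * unsharing if the caller holds either the hugetlb vma
                 * lock or i_mmap_rwsem (read or write mode).
                 */
                WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
                             !lockdep_is_held(&mapping->i_mmap_rwsem));

                return huge_pte_offset(vma->vm_mm, addr, sz);
        }
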
> page_vma_mapped_walk() is called from the following sites across the kernel:
>
> __replace_page[179] if (!page_vma_mapped_walk(&pvmw))
> __damon_pa_mkold[24] while (page_vma_mapped_walk(&pvmw)) {
> __damon_pa_young[97] while (page_vma_mapped_walk(&pvmw)) {
> write_protect_page[1065] if (!page_vma_mapped_walk(&pvmw))
> remove_migration_pte[179] while (page_vma_mapped_walk(&pvmw)) {
> page_idle_clear_pte_refs_one[56] while (page_vma_mapped_walk(&pvmw)) {
> page_mapped_in_vma[318] if (!page_vma_mapped_walk(&pvmw))
> folio_referenced_one[813] while (page_vma_mapped_walk(&pvmw)) {
> page_vma_mkclean_one[958] while (page_vma_mapped_walk(pvmw)) {
> try_to_unmap_one[1506] while (page_vma_mapped_walk(&pvmw)) {
> try_to_migrate_one[1881] while (page_vma_mapped_walk(&pvmw)) {
> page_make_device_exclusive_one[2205] while (page_vma_mapped_walk(&pvmw)) {
>
> If we group them, we can see that most of them occur during an rmap walk
> (i.e., they come from a higher rmap_walk() call stack); they are:
>
> __damon_pa_mkold[24] while (page_vma_mapped_walk(&pvmw)) {
> __damon_pa_young[97] while (page_vma_mapped_walk(&pvmw)) {
> remove_migration_pte[179] while (page_vma_mapped_walk(&pvmw)) {
> page_idle_clear_pte_refs_one[56] while (page_vma_mapped_walk(&pvmw)) {
> page_mapped_in_vma[318] if (!page_vma_mapped_walk(&pvmw))
> folio_referenced_one[813] while (page_vma_mapped_walk(&pvmw)) {
> page_vma_mkclean_one[958] while (page_vma_mapped_walk(pvmw)) {
> try_to_unmap_one[1506] while (page_vma_mapped_walk(&pvmw)) {
> try_to_migrate_one[1881] while (page_vma_mapped_walk(&pvmw)) {
> page_make_device_exclusive_one[2205] while (page_vma_mapped_walk(&pvmw)) {
>
> Let's call this group case (A).
>
> We have another two special cases that do not occur during an rmap walk;
> they are:
>
> write_protect_page[1065] if (!page_vma_mapped_walk(&pvmw))
> __replace_page[179] if (!page_vma_mapped_walk(&pvmw))
>
> Let's call this group case (B).
>
> Case (A) is always safe because it always takes the i_mmap_rwsem lock in
> read mode. That is done in rmap_walk_file(), where:
>
>         if (!locked) {
>                 if (i_mmap_trylock_read(mapping))
>                         goto lookup;
>
>                 if (rwc->try_lock) {
>                         rwc->contended = true;
>                         return;
>                 }
>
>                 i_mmap_lock_read(mapping);
>         }
>
> If locked==true it means the caller already holds the lock, so there is
> no need to take it. This justifies that every caller reaching
> page_vma_mapped_walk() from rmap_walk() on a hugetlb vma is already safe
> to call hugetlb_walk(), according to hugetlb_walk()'s locking rule.
>
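It may help to add that hugetlb folios are file-backed (hugetlbfs), so
every case (A) walk on a hugetlb vma funnels through rmap_walk_file()
rather than the anon or KSM variants. Roughly, the dispatch looks like
this (paraphrased from mm/rmap.c; details may differ between kernel
versions):

        void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc)
        {
                if (unlikely(folio_test_ksm(folio)))
                        rmap_walk_ksm(folio, rwc);
                else if (folio_test_anon(folio))
                        rmap_walk_anon(folio, rwc, false);
                else
                        rmap_walk_file(folio, rwc, false);
        }

        void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
        {
                /* KSM folios are never handled via the locked variant */
                VM_BUG_ON_FOLIO(folio_test_ksm(folio), folio);
                if (folio_test_anon(folio))
                        rmap_walk_anon(folio, rwc, true);
                else
                        rmap_walk_file(folio, rwc, true);
        }

Either way the walk reaches rmap_walk_file() with i_mmap_rwsem held,
whether the caller already took it (locked==true) or rmap_walk_file()
takes it itself as quoted above.
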
> Case (B) covers two call sites, one in the KSM path and one in the
> uprobe path, and neither path (AFAICT) can ever involve a hugetlb vma.
> IOW, the branch
>
>         if (unlikely(is_vm_hugetlb_page(vma))) {
>
> in page_vma_mapped_walk() should simply never trigger for them.
>
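As a concrete data point for the KSM side: ksm_madvise() refuses to
mark a hugetlb vma mergeable in the first place, so KSM never operates
on one. Quoting the flag check roughly from memory (mm/ksm.c; the exact
flag list may vary by version):

        case MADV_MERGEABLE:
                /*
                 * Be somewhat over-protective for now!
                 */
                if (*vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
                                 VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
                                 VM_HUGETLB   | VM_MIXEDMAP))
                        return 0;       /* just ignore the advice */

The uprobe path similarly only instruments regular file-backed
executable vmas, so __replace_page() should not see a hugetlb vma
either.
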
> Condensing the above into a shorter paragraph will give us the comment.
>
> Hope it explains. Thanks.
>
It does! And now for the comment, I think you'll find that this suffices:
        /*
         * All callers that get here will already hold the i_mmap_rwsem.
         * Therefore, no additional locks need to be taken before
         * calling hugetlb_walk().
         */
...which, considering all the data above, is probably the mother of
all summaries. :) But really, it's all that people need to know here, and
it's readily understandable without wondering what KSM has to do with this,
for example.
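For context, here is roughly how that comment would sit in
page_vma_mapped_walk()'s hugetlb branch (a sketch rather than the
literal patch; local variable setup abbreviated):

        if (unlikely(is_vm_hugetlb_page(vma))) {
                struct hstate *hstate = hstate_vma(vma);
                unsigned long size = huge_page_size(hstate);

                /* The only possible mapping was handled on last iteration */
                if (pvmw->pte)
                        return not_found(pvmw);

                /*
                 * All callers that get here will already hold the
                 * i_mmap_rwsem. Therefore, no additional locks need to
                 * be taken before calling hugetlb_walk().
                 */
                pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
                if (!pvmw->pte)
                        return false;

                pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
                if (!check_pte(pvmw))
                        return not_found(pvmw);
                return true;
        }
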
thanks,
--
John Hubbard
NVIDIA