linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Muchun Song <songmuchun@bytedance.com>,
	John Hubbard <jhubbard@nvidia.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	James Houghton <jthoughton@google.com>,
	Jann Horn <jannh@google.com>, Rik van Riel <riel@surriel.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Nadav Amit <nadav.amit@gmail.com>
Subject: Re: [PATCH v2 08/10] mm/hugetlb: Make walk_hugetlb_range() safe to pmd unshare
Date: Fri, 9 Dec 2022 16:18:11 +0100	[thread overview]
Message-ID: <56eecd5e-9f1a-0171-0e4f-934e3e6b495a@redhat.com> (raw)
In-Reply-To: <Y5NIoqXlAvrXkCOM@x1n>

On 09.12.22 15:39, Peter Xu wrote:
> On Fri, Dec 09, 2022 at 11:24:55AM +0100, David Hildenbrand wrote:
>> For such cases, it would be good to have any evidence that it really helps.
> 
> I don't know much on the s390 path, but if a process has a large hugetlb
> vma, even MADV_DONTNEED will be blocked for whatever long time if there's
> another process or thread scanning pagemap for this vma.
> 
> Would this justify a bit?

I get your point. But that raises the question of whether we should 
voluntarily drop the VMA lock in the caller every now and then on such 
large VMAs, and maybe even move the cond_resched() into the common page 
walker, if you get what I mean?

On a preemptible kernel you could be rescheduled just before you drop 
the lock and call cond_resched() ... hmm

No strong opinion here; it just looked a bit weird to optimize for a 
cond_resched() if we might get rescheduled either way, even without the 
cond_resched().

-- 
Thanks,

David / dhildenb



Thread overview: 45+ messages
2022-12-07 20:30 [PATCH v2 00/10] mm/hugetlb: Make huge_pte_offset() thread-safe for " Peter Xu
2022-12-07 20:30 ` [PATCH v2 01/10] mm/hugetlb: Let vma_offset_start() to return start Peter Xu
2022-12-07 21:21   ` John Hubbard
2022-12-07 20:30 ` [PATCH v2 02/10] mm/hugetlb: Don't wait for migration entry during follow page Peter Xu
2022-12-07 22:03   ` John Hubbard
2022-12-07 20:30 ` [PATCH v2 03/10] mm/hugetlb: Document huge_pte_offset usage Peter Xu
2022-12-07 20:49   ` John Hubbard
2022-12-08 13:05     ` David Hildenbrand
2022-12-07 20:30 ` [PATCH v2 04/10] mm/hugetlb: Move swap entry handling into vma lock when faulted Peter Xu
2022-12-07 22:36   ` John Hubbard
2022-12-07 22:43     ` Peter Xu
2022-12-07 23:05       ` John Hubbard
2022-12-08 20:28         ` Peter Xu
2022-12-08 20:31           ` John Hubbard
2022-12-07 20:30 ` [PATCH v2 05/10] mm/hugetlb: Make userfaultfd_huge_must_wait() safe to pmd unshare Peter Xu
2022-12-07 23:19   ` John Hubbard
2022-12-07 23:44     ` Peter Xu
2022-12-07 23:54       ` John Hubbard
2022-12-07 20:30 ` [PATCH v2 06/10] mm/hugetlb: Make hugetlb_follow_page_mask() " Peter Xu
2022-12-07 23:21   ` John Hubbard
2022-12-07 20:30 ` [PATCH v2 07/10] mm/hugetlb: Make follow_hugetlb_page() " Peter Xu
2022-12-07 23:25   ` John Hubbard
2022-12-07 20:30 ` [PATCH v2 08/10] mm/hugetlb: Make walk_hugetlb_range() " Peter Xu
2022-12-07 20:34   ` John Hubbard
2022-12-08 13:14   ` David Hildenbrand
2022-12-08 20:47     ` Peter Xu
2022-12-08 21:20       ` Peter Xu
2022-12-09 10:24       ` David Hildenbrand
2022-12-09 14:39         ` Peter Xu
2022-12-09 15:18           ` David Hildenbrand [this message]
2022-12-07 20:31 ` [PATCH v2 09/10] mm/hugetlb: Introduce hugetlb_walk() Peter Xu
2022-12-07 22:27   ` Mike Kravetz
2022-12-08  0:12   ` John Hubbard
2022-12-08 21:01     ` Peter Xu
2022-12-08 21:50       ` John Hubbard
2022-12-08 23:21         ` Peter Xu
2022-12-07 20:31 ` [PATCH v2 10/10] mm/hugetlb: Document why page_vma_mapped_walk() is safe to walk Peter Xu
2022-12-08  0:16   ` John Hubbard
2022-12-08 21:05     ` Peter Xu
2022-12-08 21:54       ` John Hubbard
2022-12-08 22:21         ` Peter Xu
2022-12-09  0:24           ` John Hubbard
2022-12-09  0:43             ` Peter Xu
2022-12-08 13:16   ` David Hildenbrand
2022-12-08 21:05     ` Peter Xu
