From: Yang Shi <shy828301@gmail.com>
To: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Mike Kravetz <mike.kravetz@oracle.com>,
Mike Rapoport <rppt@kernel.org>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Matthew Wilcox <willy@infradead.org>,
David Hildenbrand <david@redhat.com>,
Suren Baghdasaryan <surenb@google.com>,
Qi Zheng <zhengqi.arch@bytedance.com>,
Mel Gorman <mgorman@techsingularity.net>,
Peter Xu <peterx@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Will Deacon <will@kernel.org>, Yu Zhao <yuzhao@google.com>,
Alistair Popple <apopple@nvidia.com>,
Ralph Campbell <rcampbell@nvidia.com>,
Ira Weiny <ira.weiny@intel.com>,
Steven Price <steven.price@arm.com>,
SeongJae Park <sj@kernel.org>,
Naoya Horiguchi <naoya.horiguchi@nec.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
Zack Rusin <zackr@vmware.com>, Jason Gunthorpe <jgg@ziepe.ca>,
Axel Rasmussen <axelrasmussen@google.com>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Pasha Tatashin <pasha.tatashin@soleen.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Minchan Kim <minchan@kernel.org>,
Christoph Hellwig <hch@infradead.org>,
Song Liu <song@kernel.org>,
Thomas Hellstrom <thomas.hellstrom@linux.intel.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 27/31] mm/khugepaged: allow pte_offset_map[_lock]() to fail
Date: Mon, 22 May 2023 16:54:36 -0700
Message-ID: <CAHbLzkrf-Ft6geL0XKwGCY+Btn3cW=FMRjujQ48VJEnCfVki9g@mail.gmail.com>
In-Reply-To: <aef43be2-f877-b0f8-b41c-37f847d3a7b4@google.com>
On Sun, May 21, 2023 at 10:24 PM Hugh Dickins <hughd@google.com> wrote:
>
> __collapse_huge_page_swapin(): don't drop the map after every pte, it
> only has to be dropped by do_swap_page(); give up if pte_offset_map()
> fails; trace_mm_collapse_huge_page_swapin() at the end, with result;
> fix comment on returned result; fix vmf.pgoff, though it's not used.
>
> collapse_huge_page(): use pte_offset_map_lock() on the _pmd returned
> from clearing; allow failure, but it should be impossible there.
> hpage_collapse_scan_pmd() and collapse_pte_mapped_thp() allow for
> pte_offset_map_lock() failure.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
A nit below:
> ---
> mm/khugepaged.c | 72 +++++++++++++++++++++++++++++++++----------------
> 1 file changed, 49 insertions(+), 23 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 732f9ac393fc..49cfa7cdfe93 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -993,9 +993,8 @@ static int check_pmd_still_valid(struct mm_struct *mm,
>   * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
>   *
>   * Called and returns without pte mapped or spinlocks held.
> - * Note that if false is returned, mmap_lock will be released.
> + * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
>   */
> -
>  static int __collapse_huge_page_swapin(struct mm_struct *mm,
>  				       struct vm_area_struct *vma,
>  				       unsigned long haddr, pmd_t *pmd,
> @@ -1004,23 +1003,35 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
>  	int swapped_in = 0;
>  	vm_fault_t ret = 0;
>  	unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
> +	int result;
> +	pte_t *pte = NULL;
>
>  	for (address = haddr; address < end; address += PAGE_SIZE) {
>  		struct vm_fault vmf = {
>  			.vma = vma,
>  			.address = address,
> -			.pgoff = linear_page_index(vma, haddr),
> +			.pgoff = linear_page_index(vma, address),
>  			.flags = FAULT_FLAG_ALLOW_RETRY,
>  			.pmd = pmd,
>  		};
>
> -		vmf.pte = pte_offset_map(pmd, address);
> -		vmf.orig_pte = *vmf.pte;
> -		if (!is_swap_pte(vmf.orig_pte)) {
> -			pte_unmap(vmf.pte);
> -			continue;
> +		if (!pte++) {
> +			pte = pte_offset_map(pmd, address);
> +			if (!pte) {
> +				mmap_read_unlock(mm);
> +				result = SCAN_PMD_NULL;
> +				goto out;
> +			}
>  		}
> +
> +		vmf.orig_pte = *pte;
> +		if (!is_swap_pte(vmf.orig_pte))
> +			continue;
> +
> +		vmf.pte = pte;
>  		ret = do_swap_page(&vmf);
> +		/* Which unmaps pte (after perhaps re-checking the entry) */
> +		pte = NULL;
>
>  		/*
>  		 * do_swap_page returns VM_FAULT_RETRY with released mmap_lock.
> @@ -1029,24 +1040,29 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
>  		 * resulting in later failure.
>  		 */
>  		if (ret & VM_FAULT_RETRY) {
> -			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
>  			/* Likely, but not guaranteed, that page lock failed */
> -			return SCAN_PAGE_LOCK;
> +			result = SCAN_PAGE_LOCK;
With the per-VMA lock, this may no longer be true, at least not until
the per-VMA lock supports swap faults. It may be better to return a
more general failure code, for example SCAN_FAIL. But you don't have
to change it in this patch; I can send a follow-up once the series
lands in mm-unstable.
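
Something like this untested sketch is what I have in mind (the
comment wording is just illustrative):

		if (ret & VM_FAULT_RETRY) {
			/*
			 * Not necessarily because the page lock was
			 * missed: until the per-VMA lock supports swap
			 * faults, do_swap_page() may return
			 * VM_FAULT_RETRY for other reasons too, so
			 * report a generic failure rather than
			 * SCAN_PAGE_LOCK.
			 */
			result = SCAN_FAIL;
			goto out;
		}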
> +			goto out;
>  		}
>  		if (ret & VM_FAULT_ERROR) {
>  			mmap_read_unlock(mm);
> -			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
> -			return SCAN_FAIL;
> +			result = SCAN_FAIL;
> +			goto out;
>  		}
>  		swapped_in++;
>  	}
>
> +	if (pte)
> +		pte_unmap(pte);
> +
>  	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
>  	if (swapped_in)
>  		lru_add_drain();
>
> -	trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1);
> -	return SCAN_SUCCEED;
> +	result = SCAN_SUCCEED;
> +out:
> +	trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result);
> +	return result;
>  }
>
>  static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> @@ -1146,9 +1162,6 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  				address + HPAGE_PMD_SIZE);
>  	mmu_notifier_invalidate_range_start(&range);
>
> -	pte = pte_offset_map(pmd, address);
> -	pte_ptl = pte_lockptr(mm, pmd);
> -
>  	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
>  	/*
>  	 * This removes any huge TLB entry from the CPU so we won't allow
> @@ -1163,13 +1176,18 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	mmu_notifier_invalidate_range_end(&range);
>  	tlb_remove_table_sync_one();
>
> -	spin_lock(pte_ptl);
> -	result = __collapse_huge_page_isolate(vma, address, pte, cc,
> -					      &compound_pagelist);
> -	spin_unlock(pte_ptl);
> +	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> +	if (pte) {
> +		result = __collapse_huge_page_isolate(vma, address, pte, cc,
> +						      &compound_pagelist);
> +		spin_unlock(pte_ptl);
> +	} else {
> +		result = SCAN_PMD_NULL;
> +	}
>
>  	if (unlikely(result != SCAN_SUCCEED)) {
> -		pte_unmap(pte);
> +		if (pte)
> +			pte_unmap(pte);
>  		spin_lock(pmd_ptl);
>  		BUG_ON(!pmd_none(*pmd));
>  		/*
> @@ -1253,6 +1271,11 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
>  	memset(cc->node_load, 0, sizeof(cc->node_load));
>  	nodes_clear(cc->alloc_nmask);
>  	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> +	if (!pte) {
> +		result = SCAN_PMD_NULL;
> +		goto out;
> +	}
> +
>  	for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
>  	     _pte++, _address += PAGE_SIZE) {
>  		pte_t pteval = *_pte;
> @@ -1622,8 +1645,10 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  	 * lockless_pages_from_mm() and the hardware page walker can access page
>  	 * tables while all the high-level locks are held in write mode.
>  	 */
> -	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
>  	result = SCAN_FAIL;
> +	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
> +	if (!start_pte)
> +		goto drop_immap;
>
>  	/* step 1: check all mapped PTEs are to the right huge page */
>  	for (i = 0, addr = haddr, pte = start_pte;
> @@ -1697,6 +1722,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>
>  abort:
>  	pte_unmap_unlock(start_pte, ptl);
> +drop_immap:
>  	i_mmap_unlock_write(vma->vm_file->f_mapping);
>  	goto drop_hpage;
>  }
> --
> 2.35.3
>