Message-ID: <71a2f471-3082-4ca2-ac48-2f664977282f@arm.com>
Date: Fri, 10 Jan 2025 14:50:52 +0530
Subject: Re: [RFC 09/11] khugepaged: add mTHP support
To: Nico Pache, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com,
 cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
 jack@suse.cz, srivatsa@csail.mit.edu, haowenchao22@gmail.com,
 hughd@google.com,
 aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com,
 ioworker0@gmail.com, wangkefeng.wang@huawei.com, ziy@nvidia.com,
 jglisse@google.com, surenb@google.com, vishal.moola@gmail.com,
 zokeefe@google.com, zhengqi.arch@bytedance.com, jhubbard@nvidia.com,
 21cnbao@gmail.com, willy@infradead.org, kirill.shutemov@linux.intel.com,
 david@redhat.com, aarcange@redhat.com, raquini@redhat.com,
 sunnanyong@huawei.com, usamaarif642@gmail.com, audra@redhat.com,
 akpm@linux-foundation.org
References: <20250108233128.14484-1-npache@redhat.com>
 <20250108233128.14484-10-npache@redhat.com>
From: Dev Jain <dev.jain@arm.com>
In-Reply-To: <20250108233128.14484-10-npache@redhat.com>

On 09/01/25 5:01 am, Nico Pache wrote:
> Introduce the ability for khugepaged to collapse to different mTHP sizes.
> While scanning a PMD range for potential hugepage collapse, track pages
> in MIN_MTHP_ORDER chunks. Each bit represents a fully utilized region of
> order MIN_MTHP_ORDER ptes.
>
> With this bitmap we can determine which mTHP sizes would be the most
> efficient to collapse to if the PMD collapse is not suitable.
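
To make the bookkeeping concrete: the PMD range is scanned in MIN_MTHP_NR-page
chunks, and a bit is recorded for every chunk whose PTEs are all populated;
runs of set bits then indicate which mTHP orders remain viable. A minimal
standalone sketch of that idea (illustrative only: MIN_MTHP_ORDER = 2 is an
assumed value here, and the kernel side uses cc->mthp_bitmap with bitmap_set()
rather than a byte array):

#include <stdbool.h>
#include <string.h>

#define MIN_MTHP_ORDER	2			/* assumed value, for illustration */
#define MIN_MTHP_NR	(1 << MIN_MTHP_ORDER)	/* pages per chunk */
#define HPAGE_PMD_NR	512			/* 4K pages, 2M PMD */

/* One byte per chunk stands in for one bit of cc->mthp_bitmap. */
static void scan_chunks(const bool pte_present[HPAGE_PMD_NR],
			unsigned char bitmap[HPAGE_PMD_NR / MIN_MTHP_NR])
{
	bool all_valid = true;
	int i;

	memset(bitmap, 0, HPAGE_PMD_NR / MIN_MTHP_NR);
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		if (i % MIN_MTHP_NR == 0)	/* a new chunk begins */
			all_valid = true;
		if (!pte_present[i])		/* a none/zero PTE spoils the chunk */
			all_valid = false;
		/* chunk ends fully populated: mark it as a candidate */
		if (all_valid && (i + 1) % MIN_MTHP_NR == 0)
			bitmap[i / MIN_MTHP_NR] = 1;
	}
}
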
>
> Signed-off-by: Nico Pache
> ---
>  mm/khugepaged.c | 111 +++++++++++++++++++++++++++++++++---------------
>  1 file changed, 77 insertions(+), 34 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index de1dc6ea3c71..4d3c560f20b4 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1139,13 +1139,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  {
>  	LIST_HEAD(compound_pagelist);
>  	pmd_t *pmd, _pmd;
> -	pte_t *pte;
> +	pte_t *pte, mthp_pte;
>  	pgtable_t pgtable;
>  	struct folio *folio;
>  	spinlock_t *pmd_ptl, *pte_ptl;
>  	int result = SCAN_FAIL;
>  	struct vm_area_struct *vma;
>  	struct mmu_notifier_range range;
> +	unsigned long _address = address + offset * PAGE_SIZE;
>  	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>
>  	/* if collapsing mTHPs we may have already released the read_lock, and
> @@ -1162,12 +1163,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  		mmap_read_unlock(mm);
>  		*mmap_locked = false;
>
> -	result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> +	result = alloc_charge_folio(&folio, mm, cc, order);
>  	if (result != SCAN_SUCCEED)
>  		goto out_nolock;
>
>  	mmap_read_lock(mm);
> -	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> +	*mmap_locked = true;
> +	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
>  	if (result != SCAN_SUCCEED) {
>  		mmap_read_unlock(mm);
>  		goto out_nolock;
> @@ -1185,13 +1187,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	 * released when it fails. So we jump out_nolock directly in
>  	 * that case. Continuing to collapse causes inconsistency.
>  	 */
> -	result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> -					     referenced, HPAGE_PMD_ORDER);
> +	result = __collapse_huge_page_swapin(mm, vma, _address, pmd,
> +					     referenced, order);
>  	if (result != SCAN_SUCCEED)
>  		goto out_nolock;
>  	}
>
>  	mmap_read_unlock(mm);
> +	*mmap_locked = false;
>  	/*
>  	 * Prevent all access to pagetables with the exception of
>  	 * gup_fast later handled by the ptep_clear_flush and the VM
> @@ -1201,7 +1204,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	 * mmap_lock.
>  	 */
>  	mmap_write_lock(mm);
> -	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> +	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
>  	if (result != SCAN_SUCCEED)
>  		goto out_up_write;
>  	/* check if the pmd is still valid */
> @@ -1212,11 +1215,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	vma_start_write(vma);
>  	anon_vma_lock_write(vma->anon_vma);
>
> -	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
> -				address + HPAGE_PMD_SIZE);
> +	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address,
> +				_address + (PAGE_SIZE << order));

Since we are nuking the PMD in both cases, there is no need to scale this
by order; the range should remain address to address + HPAGE_PMD_SIZE.
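
In other words (a sketch of the suggested change, using the variables
already in scope in this function):

	/*
	 * The PMD is nuked in both the PMD and mTHP paths, so keep the
	 * notifier range PMD-wide instead of scaling it by order.
	 */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
				address + HPAGE_PMD_SIZE);
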
>  	mmu_notifier_invalidate_range_start(&range);
>
>  	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> +
>  	/*
>  	 * This removes any huge TLB entry from the CPU so we won't allow
>  	 * huge and small TLB entries for the same virtual address to
> @@ -1230,10 +1234,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	mmu_notifier_invalidate_range_end(&range);
>  	tlb_remove_table_sync_one();
>
> -	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> +	pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl);
>  	if (pte) {
> -		result = __collapse_huge_page_isolate(vma, address, pte, cc,
> -						      &compound_pagelist, HPAGE_PMD_ORDER);
> +		result = __collapse_huge_page_isolate(vma, _address, pte, cc,
> +						      &compound_pagelist, order);
>  		spin_unlock(pte_ptl);
>  	} else {
>  		result = SCAN_PMD_NULL;
> @@ -1262,8 +1266,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	anon_vma_unlock_write(vma->anon_vma);
>
>  	result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> -					   vma, address, pte_ptl,
> -					   &compound_pagelist, HPAGE_PMD_ORDER);
> +					   vma, _address, pte_ptl,
> +					   &compound_pagelist, order);
>  	pte_unmap(pte);
>  	if (unlikely(result != SCAN_SUCCEED))
>  		goto out_up_write;
> @@ -1274,20 +1278,37 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	 * write.
>  	 */
>  	__folio_mark_uptodate(folio);
> -	pgtable = pmd_pgtable(_pmd);
> -
> -	_pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
> -	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> -
> -	spin_lock(pmd_ptl);
> -	BUG_ON(!pmd_none(*pmd));
> -	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
> -	folio_add_lru_vma(folio, vma);
> -	pgtable_trans_huge_deposit(mm, pmd, pgtable);
> -	set_pmd_at(mm, address, pmd, _pmd);
> -	update_mmu_cache_pmd(vma, address, pmd);
> -	deferred_split_folio(folio, false);
> -	spin_unlock(pmd_ptl);
> +	if (order == HPAGE_PMD_ORDER) {
> +		pgtable = pmd_pgtable(_pmd);
> +		_pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
> +		_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> +
> +		spin_lock(pmd_ptl);
> +		BUG_ON(!pmd_none(*pmd));
> +		folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> +		folio_add_lru_vma(folio, vma);
> +		pgtable_trans_huge_deposit(mm, pmd, pgtable);
> +		set_pmd_at(mm, address, pmd, _pmd);
> +		update_mmu_cache_pmd(vma, address, pmd);
> +		deferred_split_folio(folio, false);
> +		spin_unlock(pmd_ptl);
> +	} else { //mTHP
> +		mthp_pte = mk_pte(&folio->page, vma->vm_page_prot);
> +		mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
> +
> +		spin_lock(pmd_ptl);
> +		folio_ref_add(folio, (1 << order) - 1);
> +		folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> +		folio_add_lru_vma(folio, vma);
> +		spin_lock(pte_ptl);
> +		set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
> +		update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
> +		spin_unlock(pte_ptl);
> +		smp_wmb(); /* make pte visible before pmd */
> +		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
> +		deferred_split_folio(folio, false);
> +		spin_unlock(pmd_ptl);
> +	}

You have nested the locks here: lock(pmd_ptl) -> lock(pte_ptl) ->
unlock(pte_ptl) -> unlock(pmd_ptl). In any case, you do not need to hold
pmd_ptl while setting the PTEs.
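
Schematically, what I would suggest instead of the nesting above: publish
the PTEs under pte_ptl alone, and only then repopulate the PMD under
pmd_ptl (a sketch in terms of the variables above; my full version follows
below):

	spin_lock(pte_ptl);
	set_ptes(vma->vm_mm, _address, pte, mthp_pte, 1 << order);
	spin_unlock(pte_ptl);

	spin_lock(pmd_ptl);
	smp_wmb();	/* as in pmd_install(): make the PTEs visible before the PMD */
	pmd_populate(mm, pmd, pmd_pgtable(_pmd));
	update_mmu_cache_pmd(vma, address, pmd);	/* address is PMD-aligned here */
	spin_unlock(pmd_ptl);
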
I am almost done with my v2, and in my view this function should look like
this:

/* Similar to the PMD case except we have to batch set the PTEs */
static int vma_collapse_anon_folio(struct mm_struct *mm, unsigned long address,
		struct vm_area_struct *vma, struct collapse_control *cc,
		pmd_t *pmd, struct folio *folio, int order)
{
	LIST_HEAD(compound_pagelist);
	spinlock_t *pmd_ptl, *pte_ptl;
	int result = SCAN_FAIL;
	struct mmu_notifier_range range;
	pmd_t _pmd;
	pte_t *pte;
	pte_t entry;
	int nr_pages = folio_nr_pages(folio);
	unsigned long haddress = address & HPAGE_PMD_MASK;

	VM_BUG_ON(address & ((1UL << order) - 1));

	mmap_read_unlock(mm);
	mmap_write_lock(mm);

	result = hugepage_vma_revalidate(mm, address, true, &vma, order, cc);
	if (result != SCAN_SUCCEED)
		goto out_up_write;
	result = check_pmd_still_valid(mm, address, pmd);
	if (result != SCAN_SUCCEED)
		goto out_up_write;

	vma_start_write(vma);
	anon_vma_lock_write(vma->anon_vma);

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddress,
				haddress + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	pmd_ptl = pmd_lock(mm, pmd);
	_pmd = pmdp_collapse_flush(vma, haddress, pmd);
	spin_unlock(pmd_ptl);
	mmu_notifier_invalidate_range_end(&range);
	tlb_remove_table_sync_one();

	pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
	if (pte) {
		result = __collapse_huge_page_isolate(vma, address, pte, cc,
						      &compound_pagelist, order);
		spin_unlock(pte_ptl);
	} else {
		result = SCAN_PMD_NULL;
	}

	if (unlikely(result != SCAN_SUCCEED)) {
		if (pte)
			pte_unmap(pte);
		spin_lock(pmd_ptl);
		BUG_ON(!pmd_none(*pmd));
		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
		spin_unlock(pmd_ptl);
		anon_vma_unlock_write(vma->anon_vma);
		goto out_up_write;
	}

	anon_vma_unlock_write(vma->anon_vma);

	__folio_mark_uptodate(folio);
	entry = mk_pte(&folio->page, vma->vm_page_prot);
	entry = maybe_mkwrite(pte_mkdirty(entry), vma);

	result = __collapse_huge_page_copy(pte, folio, pmd, *pmd,
					   vma, address, pte_ptl,
					   &compound_pagelist, order);
	pte_unmap(pte);
	if (unlikely(result != SCAN_SUCCEED))
		goto out_up_write;

	folio_ref_add(folio, nr_pages - 1);
	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
	folio_add_lru_vma(folio, vma);

	spin_lock(pte_ptl);
	set_ptes(mm, address, pte, entry, nr_pages);
	spin_unlock(pte_ptl);

	spin_lock(pmd_ptl);
	/* See pmd_install() */
	smp_wmb();
	pmd_populate(mm, pmd, pmd_pgtable(_pmd));
	update_mmu_cache_pmd(vma, haddress, pmd);
	spin_unlock(pmd_ptl);
	result = SCAN_SUCCEED;

out_up_write:
	mmap_write_unlock(mm);
	return result;
}

The difference is that I take pte_ptl, set the PTEs, drop pte_ptl, and only
then take pmd_ptl and do pmd_populate(). Also, instead of
update_mmu_cache_range() in the mTHP case, we still need
update_mmu_cache_pmd(), since we are repopulating the PMD; and IIUC
update_mmu_cache_pmd() is a superset of update_mmu_cache_range(), so we can
drop the latter altogether.

>
>  	folio = NULL;
>
> @@ -1367,21 +1388,26 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  {
>  	pmd_t *pmd;
>  	pte_t *pte, *_pte;
> +	int i;
>  	int result = SCAN_FAIL, referenced = 0;
>  	int none_or_zero = 0, shared = 0;
>  	struct page *page = NULL;
>  	struct folio *folio = NULL;
>  	unsigned long _address;
> +	unsigned long enabled_orders;
>  	spinlock_t *ptl;
>  	int node = NUMA_NO_NODE, unmapped = 0;
>  	bool writable = false;
> -
> +	bool all_valid = true;
> +	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
>  	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>
>  	result = find_pmd_or_thp_or_none(mm, address, &pmd);
>  	if (result != SCAN_SUCCEED)
>  		goto out;
>
> +	bitmap_zero(cc->mthp_bitmap, 1 << (HPAGE_PMD_ORDER - MIN_MTHP_ORDER));
> +	bitmap_zero(cc->mthp_bitmap_temp, 1 << (HPAGE_PMD_ORDER - MIN_MTHP_ORDER));
>  	memset(cc->node_load, 0, sizeof(cc->node_load));
>  	nodes_clear(cc->alloc_nmask);
>  	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> @@ -1390,8 +1416,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  		goto out;
>  	}
>
> -	for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> -	     _pte++, _address += PAGE_SIZE) {
> +	for (i = 0; i < HPAGE_PMD_NR; i++) {
> +		if (i % MIN_MTHP_NR == 0)
> +			all_valid = true;
> +
> +		_pte = pte + i;
> +		_address = address + i * PAGE_SIZE;
>  		pte_t pteval = ptep_get(_pte);
>  		if (is_swap_pte(pteval)) {
>  			++unmapped;
> @@ -1414,6 +1444,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			}
>  		}
>  		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> +			all_valid = false;
>  			++none_or_zero;
>  			if (!userfaultfd_armed(vma) &&
>  			    (!cc->is_khugepaged ||
> @@ -1514,7 +1545,15 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  		     folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
>  								     address)))
>  			referenced++;
> +
> +		/*
> +		 * we are reading in MIN_MTHP_NR page chunks. if there are no empty
> +		 * pages keep track of it in the bitmap for mTHP collapsing.
> +		 */
> +		if (all_valid && (i + 1) % MIN_MTHP_NR == 0)
> +			bitmap_set(cc->mthp_bitmap, i / MIN_MTHP_NR, 1);
>  	}
> +
>  	if (!writable) {
>  		result = SCAN_PAGE_RO;
>  	} else if (cc->is_khugepaged &&
> @@ -1527,10 +1566,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  out_unmap:
>  	pte_unmap_unlock(pte, ptl);
>  	if (result == SCAN_SUCCEED) {
> -		result = collapse_huge_page(mm, address, referenced,
> -					    unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> -		/* collapse_huge_page will return with the mmap_lock released */
> -		*mmap_locked = false;
> +		enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
> +							  tva_flags, THP_ORDERS_ALL_ANON);
> +		result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc,
> +						mmap_locked, enabled_orders);
> +		if (result > 0)
> +			result = SCAN_SUCCEED;
>  	}
> out:
>  	trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> @@ -2477,11 +2518,13 @@ static int khugepaged_collapse_single_pmd(unsigned long addr, struct mm_struct *
>  		fput(file);
>  		if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
>  			mmap_read_lock(mm);
> +			*mmap_locked = true;
>  			if (khugepaged_test_exit_or_disable(mm))
>  				goto end;
>  			result = collapse_pte_mapped_thp(mm, addr,
>  							 !cc->is_khugepaged);
>  			mmap_read_unlock(mm);
> +			*mmap_locked = false;
>  		}
>  	} else {
>  		result = khugepaged_scan_pmd(mm, vma, addr,