From mboxrd@z Thu Jan  1 00:00:00 1970
From: Qi Zheng <zhengqi.arch@bytedance.com>
Date: Thu, 3 Aug 2023 17:17:34 +0800
Message-ID: <0df84f9f-e9b0-80b1-4c9e-95abc1a73a96@bytedance.com>
Subject: Re: [PATCH v3 10/13] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()
To: Hugh Dickins, Andrew Morton, Pasha Tatashin
Cc: Mike Kravetz, Mike Rapoport, "Kirill A. Shutemov", Matthew Wilcox,
 David Hildenbrand, Suren Baghdasaryan, Yang Shi, Mel Gorman, Peter Xu,
 Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple, Ralph Campbell,
 Ira Weiny, Steven Price, SeongJae Park, Lorenzo Stoakes, Huang Ying,
 Naoya Horiguchi, Christophe Leroy, Zack Rusin, Jason Gunthorpe,
 Axel Rasmussen, Anshuman Khandual, Miaohe Lin, Minchan Kim,
 Christoph Hellwig, Song Liu, Thomas Hellstrom, Russell King,
 "David S. Miller", Michael Ellerman, "Aneesh Kumar K.V", Heiko Carstens,
 Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
 Gerald Schaefer, Vasily Gorbik, Jann Horn, Vishal Moola, Vlastimil Babka,
 Zi Yan, linux-arm-kernel@lists.infradead.org, sparclinux@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <7cd843a9-aa80-14f-5eb2-33427363c20@google.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Hi,

On 2023/7/12 12:42, Hugh Dickins wrote:
> Bring collapse_and_free_pmd() back into collapse_pte_mapped_thp().
> It does need mmap_read_lock(), but it does not need mmap_write_lock(),
> nor vma_start_write() nor i_mmap lock nor anon_vma lock. All racing
> paths are relying on pte_offset_map_lock() and pmd_lock(), so use those.
>
> Follow the pattern in retract_page_tables(); and using pte_free_defer()
> removes most of the need for tlb_remove_table_sync_one() here; but call
> pmdp_get_lockless_sync() to use it in the PAE case.
>
> First check the VMA, in case page tables are being torn down: from JannH.
> Confirm the preliminary find_pmd_or_thp_or_none() once page lock has been
> acquired and the page looks suitable: from then on its state is stable.
>
> However, collapse_pte_mapped_thp() was doing something others don't:
> freeing a page table still containing "valid" entries. i_mmap lock did
> stop a racing truncate from double-freeing those pages, but we prefer
> collapse_pte_mapped_thp() to clear the entries as usual. Their TLB
> flush can wait until the pmdp_collapse_flush() which follows, but the
> mmu_notifier_invalidate_range_start() has to be done earlier.
>
> Do the "step 1" checking loop without mmu_notifier: it wouldn't be good
> for khugepaged to keep on repeatedly invalidating a range which is then
> found unsuitable e.g. contains COWs. "step 2", which does the clearing,
> must then be more careful (after dropping ptl to do mmu_notifier), with
> abort prepared to correct the accounting like "step 3". But with those
> entries now cleared, "step 4" (after dropping ptl to do pmd_lock) is kept
> safe by the huge page lock, which stops new PTEs from being faulted in.
>
> Signed-off-by: Hugh Dickins
> ---
>  mm/khugepaged.c | 172 ++++++++++++++++++++++----------------------
>  1 file changed, 77 insertions(+), 95 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 3bb05147961b..46986eb4eebb 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1483,7 +1483,7 @@ static bool khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
>  	return ret;
>  }
>  
> -/* hpage must be locked, and mmap_lock must be held in write */
> +/* hpage must be locked, and mmap_lock must be held */
>  static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
>  			pmd_t *pmdp, struct page *hpage)
>  {
> @@ -1495,7 +1495,7 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
>  	};
>  
>  	VM_BUG_ON(!PageTransHuge(hpage));
> -	mmap_assert_write_locked(vma->vm_mm);
> +	mmap_assert_locked(vma->vm_mm);
>  
>  	if (do_set_pmd(&vmf, hpage))
>  		return SCAN_FAIL;
> @@ -1504,48 +1504,6 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
>  	return SCAN_SUCCEED;
>  }
>  
> -/*
> - * A note about locking:
> - * Trying to take the page table spinlocks would be useless here because those
> - * are only used to synchronize:
> - *
> - *  - modifying terminal entries (ones that point to a data page, not to another
> - *    page table)
> - *  - installing *new* non-terminal entries
> - *
> - * Instead, we need roughly the same kind of protection as free_pgtables() or
> - * mm_take_all_locks() (but only for a single VMA):
> - * The mmap lock together with this VMA's rmap locks covers all paths towards
> - * the page table entries we're messing with here, except for hardware page
> - * table walks and lockless_pages_from_mm().
> - */
> -static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
> -				  unsigned long addr, pmd_t *pmdp)
> -{
> -	pmd_t pmd;
> -	struct mmu_notifier_range range;
> -
> -	mmap_assert_write_locked(mm);
> -	if (vma->vm_file)
> -		lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
> -	/*
> -	 * All anon_vmas attached to the VMA have the same root and are
> -	 * therefore locked by the same lock.
> -	 */
> -	if (vma->anon_vma)
> -		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
> -
> -	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
> -				addr + HPAGE_PMD_SIZE);
> -	mmu_notifier_invalidate_range_start(&range);
> -	pmd = pmdp_collapse_flush(vma, addr, pmdp);
> -	tlb_remove_table_sync_one();
> -	mmu_notifier_invalidate_range_end(&range);
> -	mm_dec_nr_ptes(mm);
> -	page_table_check_pte_clear_range(mm, addr, pmd);
> -	pte_free(mm, pmd_pgtable(pmd));
> -}
> -
>  /**
>   * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at
>   * address haddr.
> @@ -1561,26 +1519,29 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
>  int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  			    bool install_pmd)
>  {
> +	struct mmu_notifier_range range;
> +	bool notified = false;
>  	unsigned long haddr = addr & HPAGE_PMD_MASK;
>  	struct vm_area_struct *vma = vma_lookup(mm, haddr);
>  	struct page *hpage;
>  	pte_t *start_pte, *pte;
> -	pmd_t *pmd;
> -	spinlock_t *ptl;
> -	int count = 0, result = SCAN_FAIL;
> +	pmd_t *pmd, pgt_pmd;
> +	spinlock_t *pml, *ptl;
> +	int nr_ptes = 0, result = SCAN_FAIL;
>  	int i;
>  
> -	mmap_assert_write_locked(mm);
> +	mmap_assert_locked(mm);
> +
> +	/* First check VMA found, in case page tables are being torn down */
> +	if (!vma || !vma->vm_file ||
> +	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
> +		return SCAN_VMA_CHECK;
>  
>  	/* Fast check before locking page if already PMD-mapped */
>  	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>  	if (result == SCAN_PMD_MAPPED)
>  		return result;
>  
> -	if (!vma || !vma->vm_file ||
> -	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
> -		return SCAN_VMA_CHECK;
> -
>  	/*
>  	 * If we are here, we've succeeded in replacing all the native pages
>  	 * in the page cache with a single hugepage. If a mm were to fault-in
> @@ -1610,6 +1571,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  		goto drop_hpage;
>  	}
>  
> +	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>  	switch (result) {
>  	case SCAN_SUCCEED:
>  		break;
> @@ -1623,27 +1585,10 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  		goto drop_hpage;
>  	}
>  
> -	/* Lock the vma before taking i_mmap and page table locks */
> -	vma_start_write(vma);
> -
> -	/*
> -	 * We need to lock the mapping so that from here on, only GUP-fast and
> -	 * hardware page walks can access the parts of the page tables that
> -	 * we're operating on.
> -	 * See collapse_and_free_pmd().
> -	 */
> -	i_mmap_lock_write(vma->vm_file->f_mapping);
> -
> -	/*
> -	 * This spinlock should be unnecessary: Nobody else should be accessing
> -	 * the page tables under spinlock protection here, only
> -	 * lockless_pages_from_mm() and the hardware page walker can access page
> -	 * tables while all the high-level locks are held in write mode.
> -	 */
>  	result = SCAN_FAIL;
>  	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
> -	if (!start_pte)
> -		goto drop_immap;
> +	if (!start_pte)		/* mmap_lock + page lock should prevent this */
> +		goto drop_hpage;
>  
>  	/* step 1: check all mapped PTEs are to the right huge page */
>  	for (i = 0, addr = haddr, pte = start_pte;
>  	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
> @@ -1670,10 +1615,18 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  		 */
>  		if (hpage + i != page)
>  			goto abort;
> -		count++;
>  	}
>  
> -	/* step 2: adjust rmap */
> +	pte_unmap_unlock(start_pte, ptl);
> +	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
> +				haddr, haddr + HPAGE_PMD_SIZE);
> +	mmu_notifier_invalidate_range_start(&range);
> +	notified = true;
> +	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
> +	if (!start_pte)		/* mmap_lock + page lock should prevent this */
> +		goto abort;
> +
> +	/* step 2: clear page table and adjust rmap */
>  	for (i = 0, addr = haddr, pte = start_pte;
>  	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
>  		struct page *page;
> @@ -1681,47 +1634,76 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  
>  		if (pte_none(ptent))
>  			continue;
> -		page = vm_normal_page(vma, addr, ptent);
> -		if (WARN_ON_ONCE(page && is_zone_device_page(page)))
> +		/*
> +		 * We dropped ptl after the first scan, to do the mmu_notifier:
> +		 * page lock stops more PTEs of the hpage being faulted in, but
> +		 * does not stop write faults COWing anon copies from existing
> +		 * PTEs; and does not stop those being swapped out or migrated.
> +		 */
> +		if (!pte_present(ptent)) {
> +			result = SCAN_PTE_NON_PRESENT;
>  			goto abort;
> +		}
> +		page = vm_normal_page(vma, addr, ptent);
> +		if (hpage + i != page)
> +			goto abort;
> +
> +		/*
> +		 * Must clear entry, or a racing truncate may re-remove it.
> +		 * TLB flush can be left until pmdp_collapse_flush() does it.
> +		 * PTE dirty? Shmem page is already dirty; file is read-only.
> +		 */
> +		pte_clear(mm, addr, pte);

This is not a non-present PTE entry, so we should call ptep_clear() here to
let page_table_check track the PTE clearing operation, right? Otherwise it
may lead to false positives.

Thanks,
Qi

>  		page_remove_rmap(page, vma, false);
> +		nr_ptes++;
>  	}
>  
>  	pte_unmap_unlock(start_pte, ptl);
>  
>  	/* step 3: set proper refcount and mm_counters. */
> -	if (count) {
> -		page_ref_sub(hpage, count);
> -		add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count);
> +	if (nr_ptes) {
> +		page_ref_sub(hpage, nr_ptes);
> +		add_mm_counter(mm, mm_counter_file(hpage), -nr_ptes);
>  	}
>  
> -	/* step 4: remove pte entries */
> -	/* we make no change to anon, but protect concurrent anon page lookup */
> -	if (vma->anon_vma)
> -		anon_vma_lock_write(vma->anon_vma);
> +	/* step 4: remove page table */
>  
> -	collapse_and_free_pmd(mm, vma, haddr, pmd);
> +	/* Huge page lock is still held, so page table must remain empty */
> +	pml = pmd_lock(mm, pmd);
> +	if (ptl != pml)
> +		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> +	pgt_pmd = pmdp_collapse_flush(vma, haddr, pmd);
> +	pmdp_get_lockless_sync();
> +	if (ptl != pml)
> +		spin_unlock(ptl);
> +	spin_unlock(pml);
>  
> -	if (vma->anon_vma)
> -		anon_vma_unlock_write(vma->anon_vma);
> -	i_mmap_unlock_write(vma->vm_file->f_mapping);
> +	mmu_notifier_invalidate_range_end(&range);
> +
> +	mm_dec_nr_ptes(mm);
> +	page_table_check_pte_clear_range(mm, haddr, pgt_pmd);
> +	pte_free_defer(mm, pmd_pgtable(pgt_pmd));
>  
>  maybe_install_pmd:
>  	/* step 5: install pmd entry */
>  	result = install_pmd
>  			? set_huge_pmd(vma, haddr, pmd, hpage)
>  			: SCAN_SUCCEED;
> -
> +	goto drop_hpage;
> +abort:
> +	if (nr_ptes) {
> +		flush_tlb_mm(mm);
> +		page_ref_sub(hpage, nr_ptes);
> +		add_mm_counter(mm, mm_counter_file(hpage), -nr_ptes);
> +	}
> +	if (start_pte)
> +		pte_unmap_unlock(start_pte, ptl);
> +	if (notified)
> +		mmu_notifier_invalidate_range_end(&range);
>  drop_hpage:
>  	unlock_page(hpage);
>  	put_page(hpage);
>  	return result;
> -
> -abort:
> -	pte_unmap_unlock(start_pte, ptl);
> -drop_immap:
> -	i_mmap_unlock_write(vma->vm_file->f_mapping);
> -	goto drop_hpage;
>  }
>  
>  static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_slot)
> @@ -2855,9 +2837,9 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
>  	case SCAN_PTE_MAPPED_HUGEPAGE:
>  		BUG_ON(mmap_locked);
>  		BUG_ON(*prev);
> -		mmap_write_lock(mm);
> +		mmap_read_lock(mm);
>  		result = collapse_pte_mapped_thp(mm, addr, true);
> -		mmap_write_unlock(mm);
> +		mmap_locked = true;
>  		goto handle_result;
>  	/* Whitelisted set of results where continuing OK */
>  	case SCAN_PMD_NULL: