Date: Thu, 8 Jun 2023 18:08:20 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
cc: Mike Kravetz, Mike Rapoport, "Kirill A. Shutemov", Matthew Wilcox,
    David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman,
    Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
    Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park, Lorenzo Stoakes,
    Huang Ying, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
    Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual, Pasha Tatashin,
    Miaohe Lin, Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
    Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 02/32] mm/migrate: remove cruft from migration_entry_wait()s
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

migration_entry_wait_on_locked() does not need to take a mapped pte
pointer, its callers can do the unmap first.  Annotate it with
__releases(ptl) to reduce sparse warnings.

Fold __migration_entry_wait_huge() into migration_entry_wait_huge().
Fold __migration_entry_wait() into migration_entry_wait(), preferring
the tighter pte_offset_map_lock() to pte_offset_map() and pte_lockptr().

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Alistair Popple
---
 include/linux/migrate.h |  4 ++--
 include/linux/swapops.h | 17 +++--------------
 mm/filemap.c            | 13 ++++---------
 mm/migrate.c            | 37 +++++++++++++------------------------
 4 files changed, 22 insertions(+), 49 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6241a1596a75..affea3063473 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -75,8 +75,8 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
-void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
-		spinlock_t *ptl);
+void migration_entry_wait_on_locked(swp_entry_t entry, spinlock_t *ptl)
+		__releases(ptl);
 void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
 void folio_migrate_copy(struct folio *newfolio, struct folio *folio);
 int folio_migrate_mapping(struct address_space *mapping,
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 3a451b7afcb3..4c932cb45e0b 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -332,15 +332,9 @@ static inline bool is_migration_entry_dirty(swp_entry_t entry)
 	return false;
 }
 
-extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
-					spinlock_t *ptl);
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					unsigned long address);
-#ifdef CONFIG_HUGETLB_PAGE
-extern void __migration_entry_wait_huge(struct vm_area_struct *vma,
-		pte_t *ptep, spinlock_t *ptl);
 extern void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte);
-#endif	/* CONFIG_HUGETLB_PAGE */
 #else  /* CONFIG_MIGRATION */
 static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 {
@@ -362,15 +356,10 @@ static inline int is_migration_entry(swp_entry_t swp)
 	return 0;
 }
 
-static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
-					spinlock_t *ptl) { }
 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
-					unsigned long address) { }
-#ifdef CONFIG_HUGETLB_PAGE
-static inline void __migration_entry_wait_huge(struct vm_area_struct *vma,
-		pte_t *ptep, spinlock_t *ptl) { }
-static inline void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte) { }
-#endif	/* CONFIG_HUGETLB_PAGE */
+					unsigned long address) { }
+static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
+					pte_t *pte) { }
 static inline int is_writable_migration_entry(swp_entry_t entry)
 {
 	return 0;
diff --git a/mm/filemap.c b/mm/filemap.c
index b4c9bd368b7e..28b42ee848a4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1359,8 +1359,6 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 /**
  * migration_entry_wait_on_locked - Wait for a migration entry to be removed
  * @entry: migration swap entry.
- * @ptep: mapped pte pointer. Will return with the ptep unmapped. Only required
- *        for pte entries, pass NULL for pmd entries.
  * @ptl: already locked ptl. This function will drop the lock.
  *
  * Wait for a migration entry referencing the given page to be removed. This is
@@ -1369,13 +1367,13 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
  * should be called while holding the ptl for the migration entry referencing
  * the page.
  *
- * Returns after unmapping and unlocking the pte/ptl with pte_unmap_unlock().
+ * Returns after unlocking the ptl.
 *
 * This follows the same logic as folio_wait_bit_common() so see the comments
 * there.
 */
-void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
-				spinlock_t *ptl)
+void migration_entry_wait_on_locked(swp_entry_t entry, spinlock_t *ptl)
+	__releases(ptl)
 {
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
@@ -1409,10 +1407,7 @@ void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
 	 * a valid reference to the page, and it must take the ptl to remove the
 	 * migration entry. So the page is valid until the ptl is dropped.
 	 */
-	if (ptep)
-		pte_unmap_unlock(ptep, ptl);
-	else
-		spin_unlock(ptl);
+	spin_unlock(ptl);
 
 	for (;;) {
 		unsigned int flags;
diff --git a/mm/migrate.c b/mm/migrate.c
index 01cac26a3127..3ecb7a40075f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -296,14 +296,18 @@ void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked)
  * get to the page and wait until migration is finished.
  * When we return from this function the fault will be retried.
  */
-void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
-				spinlock_t *ptl)
+void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+				unsigned long address)
 {
+	spinlock_t *ptl;
+	pte_t *ptep;
 	pte_t pte;
 	swp_entry_t entry;
 
-	spin_lock(ptl);
+	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	pte = *ptep;
+	pte_unmap(ptep);
+
 	if (!is_swap_pte(pte))
 		goto out;
 
@@ -311,18 +315,10 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	if (!is_migration_entry(entry))
 		goto out;
 
-	migration_entry_wait_on_locked(entry, ptep, ptl);
+	migration_entry_wait_on_locked(entry, ptl);
 	return;
 out:
-	pte_unmap_unlock(ptep, ptl);
-}
-
-void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long address)
-{
-	spinlock_t *ptl = pte_lockptr(mm, pmd);
-	pte_t *ptep = pte_offset_map(pmd, address);
-	__migration_entry_wait(mm, ptep, ptl);
+	spin_unlock(ptl);
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
@@ -332,9 +328,9 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
  *
  * This function will release the vma lock before returning.
  */
-void __migration_entry_wait_huge(struct vm_area_struct *vma,
-				pte_t *ptep, spinlock_t *ptl)
+void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *ptep)
 {
+	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, ptep);
 	pte_t pte;
 
 	hugetlb_vma_assert_locked(vma);
@@ -352,16 +348,9 @@ void __migration_entry_wait_huge(struct vm_area_struct *vma,
 		 * lock release in migration_entry_wait_on_locked().
 		 */
 		hugetlb_vma_unlock_read(vma);
-		migration_entry_wait_on_locked(pte_to_swp_entry(pte), NULL, ptl);
+		migration_entry_wait_on_locked(pte_to_swp_entry(pte), ptl);
 	}
 }
-
-void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte)
-{
-	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, pte);
-
-	__migration_entry_wait_huge(vma, pte, ptl);
-}
 #endif
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
@@ -372,7 +361,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 	ptl = pmd_lock(mm, pmd);
 	if (!is_pmd_migration_entry(*pmd))
 		goto unlock;
-	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl);
+	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), ptl);
 	return;
 unlock:
 	spin_unlock(ptl);
-- 
2.35.3
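
For readers who have not used sparse's lock-context tracking: the
__releases(ptl) annotation added above tells the checker that the function
exits with ptl no longer held.  Below is a minimal, self-contained sketch of
the mechanism; it is not part of the patch, the macro definition merely
mirrors the kernel's (include/linux/compiler_types.h style), a pthread mutex
stands in for the kernel spinlock so the file builds outside the kernel tree,
and finish_and_unlock() is a hypothetical name used only for illustration.

/* Illustrative only: build with `gcc -c sketch.c`, or run `sparse sketch.c`
 * to exercise the context checking that __releases() exists for.
 */
#include <pthread.h>

#ifdef __CHECKER__	/* defined when sparse is doing the checking */
# define __releases(x)	__attribute__((context(x, 1, 0)))
#else
# define __releases(x)	/* plain gcc/clang never see the attribute */
#endif

/*
 * The annotation declares that the function is entered with 'lock' held
 * and returns with it released; sparse can then track the lock context
 * across the call, rather than flagging the unlock below as an
 * unexpected "context imbalance".
 */
static void finish_and_unlock(pthread_mutex_t *lock)
	__releases(lock)
{
	pthread_mutex_unlock(lock);
}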