From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 9 May 2023 21:51:16 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
Cc: Mike Kravetz, Mike Rapoport, "Kirill A. Shutemov", Matthew Wilcox,
    David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Russell King,
    Catalin Marinas, Will Deacon, Geert Uytterhoeven, Greg Ungerer,
    Michal Simek, Thomas Bogendoerfer, Helge Deller, John David Anglin,
    "Aneesh Kumar K.V", Michael Ellerman, Alexandre Ghiti, Palmer Dabbelt,
    Heiko Carstens, Christian Borntraeger, Claudio Imbrenda,
    John Paul Adrian Glaubitz, "David S. Miller", Chris Zankel,
    Max Filippov, x86@kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 07/23] mips: update_mmu_cache() can replace __update_tlb()
In-Reply-To: <77a5d8c-406b-7068-4f17-23b7ac53bc83@google.com>
References: <77a5d8c-406b-7068-4f17-23b7ac53bc83@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Don't make update_mmu_cache() a wrapper around __update_tlb(): call it
directly, and use the ptep (or pmdp) provided by the caller, instead of
re-calling pte_offset_map() - which would raise a question of whether a
pte_unmap() is needed to balance it.

Check whether the "ptep" provided by the caller is actually the pmdp,
instead of testing pmd_huge(): or test pmd_huge() too and warn if it
disagrees? This is "hazardous" territory: needs review and testing.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
 arch/mips/include/asm/pgtable.h | 15 +++------------
 arch/mips/mm/tlb-r3k.c          |  5 +++--
 arch/mips/mm/tlb-r4k.c          |  9 +++------
 3 files changed, 9 insertions(+), 20 deletions(-)

diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 574fa14ac8b2..9175dfab08d5 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -565,15 +565,8 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 }
 #endif
 
-extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
-	pte_t pte);
-
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-	unsigned long address, pte_t *ptep)
-{
-	pte_t pte = *ptep;
-	__update_tlb(vma, address, pte);
-}
+extern void update_mmu_cache(struct vm_area_struct *vma,
+	unsigned long address, pte_t *ptep);
 
 #define	__HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
@@ -581,9 +574,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 	unsigned long address, pmd_t *pmdp)
 {
-	pte_t pte = *(pte_t *)pmdp;
-
-	__update_tlb(vma, address, pte);
+	update_mmu_cache(vma, address, (pte_t *)pmdp);
 }
 
 /*
diff --git a/arch/mips/mm/tlb-r3k.c b/arch/mips/mm/tlb-r3k.c
index 53dfa2b9316b..e5722cd8dd6d 100644
--- a/arch/mips/mm/tlb-r3k.c
+++ b/arch/mips/mm/tlb-r3k.c
@@ -176,7 +176,8 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 	}
 }
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
+void update_mmu_cache(struct vm_area_struct *vma,
+		      unsigned long address, pte_t *ptep)
 {
 	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
 	unsigned long flags;
@@ -203,7 +204,7 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
 	BARRIER;
 	tlb_probe();
 	idx = read_c0_index();
-	write_c0_entrylo0(pte_val(pte));
+	write_c0_entrylo0(pte_val(*ptep));
 	write_c0_entryhi(address | pid);
 	if (idx < 0) {					/* BARRIER */
 		tlb_write_random();
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index 1b939abbe4ca..c96725d17cab 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -290,14 +290,14 @@ void local_flush_tlb_one(unsigned long page)
  * updates the TLB with the new pte(s), and another which also checks
  * for the R4k "end of page" hardware bug and does the needy.
  */
-void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
+void update_mmu_cache(struct vm_area_struct *vma,
+		      unsigned long address, pte_t *ptep)
 {
 	unsigned long flags;
 	pgd_t *pgdp;
 	p4d_t *p4dp;
 	pud_t *pudp;
 	pmd_t *pmdp;
-	pte_t *ptep;
 	int idx, pid;
 
 	/*
@@ -326,10 +326,9 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 	idx = read_c0_index();
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 	/* this could be a huge page  */
-	if (pmd_huge(*pmdp)) {
+	if (ptep == (pte_t *)pmdp) {
 		unsigned long lo;
 		write_c0_pagemask(PM_HUGE_MASK);
-		ptep = (pte_t *)pmdp;
 		lo = pte_to_entrylo(pte_val(*ptep));
 		write_c0_entrylo0(lo);
 		write_c0_entrylo1(lo + (HPAGE_SIZE >> 7));
@@ -344,8 +343,6 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 	} else
#endif
 	{
-		ptep = pte_offset_map(pmdp, address);
-
#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
#ifdef CONFIG_XPA
 		write_c0_entrylo0(pte_to_entrylo(ptep->pte_high));
-- 
2.35.3