From: Steven Price <steven.price@arm.com>
To: Andrew Morton, linux-mm@kvack.org
Cc: Steven Price, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann, Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar, James Morse, Jérôme Glisse, Peter Zijlstra, Thomas Gleixner, Will Deacon, x86@kernel.org, "H. Peter Anvin", linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Mark Rutland, "Liang, Kan"
Subject: [PATCH v16 13/25] mm: pagewalk: Don't lock PTEs for walk_page_range_novma()
Date: Fri, 6 Dec 2019 13:53:04 +0000
Message-Id: <20191206135316.47703-14-steven.price@arm.com>
In-Reply-To: <20191206135316.47703-1-steven.price@arm.com>
References: <20191206135316.47703-1-steven.price@arm.com>

walk_page_range_novma() can be used to walk the page tables of the kernel
or of firmware. These page tables may contain entries that are not backed
by a struct page, so it isn't (in general) possible to take the PTE lock
for the pte_entry() callback. So update walk_pte_range() to only take the
lock when no_vma == false and add a comment explaining the difference to
walk_page_range_novma().
Signed-off-by: Steven Price
---
 mm/pagewalk.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index efa464cf079b..1b9a3ba24c51 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -10,9 +10,10 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte_t *pte;
 	int err = 0;
 	const struct mm_walk_ops *ops = walk->ops;
-	spinlock_t *ptl;
+	spinlock_t *uninitialized_var(ptl);
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	pte = walk->no_vma ? pte_offset_map(pmd, addr) :
+		pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (;;) {
 		err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
 		if (err)
@@ -23,7 +24,9 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		pte++;
 	}
 
-	pte_unmap_unlock(pte, ptl);
+	if (!walk->no_vma)
+		spin_unlock(ptl);
+	pte_unmap(pte);
 	return err;
 }
 
@@ -383,6 +386,12 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 	return err;
 }
 
+/*
+ * Similar to walk_page_range() but can walk any page tables even if they are
+ * not backed by VMAs. Because 'unusual' entries may be walked this function
+ * will also not lock the PTEs for the pte_entry() callback. This is useful for
+ * walking the kernel page tables or page tables for firmware.
+ */
 int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
 			  unsigned long end, const struct mm_walk_ops *ops,
 			  void *private)
-- 
2.20.1
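For context, a minimal sketch of how a caller might use the novma walker after this change. This is an illustrative, untested kernel-space fragment: the helper names (dump_pte_entry, dump_kernel_range) and the chosen address range are made up, and only the pagewalk API itself (struct mm_walk_ops, walk_page_range_novma()) comes from this series.

```c
/* Hypothetical example: dump kernel PTEs via walk_page_range_novma().
 * Helper names are invented for illustration; only the pagewalk API
 * (mm_walk_ops, walk_page_range_novma) is from this patch series.
 */
#include <linux/pagewalk.h>
#include <linux/printk.h>
#include <linux/sched/mm.h>

static int dump_pte_entry(pte_t *pte, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	/* With the novma walker, no PTE lock is held here: kernel and
	 * firmware page tables may have entries with no struct page,
	 * so pte_offset_map_lock() could not have been used.
	 */
	pr_info("0x%lx: pte 0x%llx\n", addr, (unsigned long long)pte_val(*pte));
	return 0;	/* non-zero would abort the walk */
}

static const struct mm_walk_ops dump_ops = {
	.pte_entry = dump_pte_entry,
};

static void dump_kernel_range(unsigned long start, unsigned long end)
{
	/* init_mm's page tables are not backed by VMAs, so the plain
	 * walk_page_range() would refuse them; use the novma variant.
	 * Caller is assumed to hold the appropriate mmap lock.
	 */
	walk_page_range_novma(&init_mm, start, end, &dump_ops, NULL);
}
```

The key point the patch makes visible here is that a pte_entry() callback used with walk_page_range_novma() must not assume the PTE lock is held, unlike the same callback invoked through walk_page_range().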