From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov,
    Catalin Marinas, Christian Borntraeger, Christoph Lameter,
    "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes,
    "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar,
    Joonsoo Kim, "Kirill A. Shutemov", "Kirill A. Shutemov", Len Brown,
    Michael Ellerman, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
    Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
    Peter Zijlstra, "Rafael J.
Wysocki" , Thomas Gleixner , Vasily Gorbik , Vlastimil Babka , Will Deacon , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v5 1/5] mm: introduce debug_pagealloc_{map,unmap}_pages() helpers Date: Sun, 8 Nov 2020 08:57:54 +0200 Message-Id: <20201108065758.1815-2-rppt@kernel.org> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201108065758.1815-1-rppt@kernel.org> References: <20201108065758.1815-1-rppt@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel direct mapping after free_pages(). The pages than need to be mapped back before they could be used. Theese mapping operations use __kernel_map_pages() guarded with with debug_pagealloc_enabled(). The only place that calls __kernel_map_pages() without checking whether DEBUG_PAGEALLOC is enabled is the hibernation code that presumes availability of this function when ARCH_HAS_SET_DIRECT_MAP is set. Still, on arm64, __kernel_map_pages() will bail out when DEBUG_PAGEALLOC = is not enabled but set_direct_map_invalid_noflush() may render some pages no= t present in the direct map and hibernation code won't be able to save such pages. To make page allocation debugging and hibernation interaction more robust= , the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP has to be ma= de more explicit. Start with combining the guard condition and the call to __kernel_map_pages() into debug_pagealloc_map_pages() and debug_pagealloc_unmap_pages() functions to emphasize that __kernel_map_pages() should not be called without DEBUG_PAGEALLOC and use these new functions to map/unmap pages when page allocation debugging is enabled. Signed-off-by: Mike Rapoport Reviewed-by: David Hildenbrand Acked-by: Kirill A. 
 include/linux/mm.h  | 15 +++++++++++++++
 mm/memory_hotplug.c |  3 +--
 mm/page_alloc.c     |  6 ++----
 mm/slab.c           | 16 +++++++---------
 4 files changed, 25 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe70aaf..bb8c70178f4e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2936,12 +2936,27 @@ kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	__kernel_map_pages(page, numpages, enable);
 }
+
+static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, 1);
+}
+
+static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, 0);
+}
+
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
 #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable) {}
+static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
+static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif /* CONFIG_HIBERNATION */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b44d4c7ba73b..f18f86ba2a68 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -614,8 +614,7 @@ void generic_online_page(struct page *page, unsigned int order)
 	 * so we should map it first. This is better than introducing a special
 	 * case in page freeing fast path.
 	 */
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order);
 	__free_pages_core(page, order);
 	totalram_pages_add(1UL << order);
 #ifdef CONFIG_HIGHMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..db1bf70458d0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1272,8 +1272,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 */
 	arch_free_page(page, order);
 
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 0);
+	debug_pagealloc_unmap_pages(page, 1 << order);
 
 	kasan_free_nondeferred_pages(page, order);
 
@@ -2270,8 +2269,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order);
 	kasan_alloc_pages(page, order);
 	kernel_poison_pages(page, 1 << order, 1);
 	set_page_owner(page, order, gfp_flags);
diff --git a/mm/slab.c b/mm/slab.c
index b1113561b98b..07317386e150 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
 	return false;
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
 static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
 {
 	if (!is_debug_pagealloc_cache(cachep))
 		return;
 
-	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
+	if (map)
+		debug_pagealloc_map_pages(virt_to_page(objp),
+					  cachep->size / PAGE_SIZE);
+	else
+		debug_pagealloc_unmap_pages(virt_to_page(objp),
+					    cachep->size / PAGE_SIZE);
 }
 
-#else
-static inline void slab_kernel_map(struct kmem_cache *cachep, void *objp,
-				   int map) {}
-
-#endif
-
 static void poison_obj(struct kmem_cache *cachep, void *addr, unsigned char val)
 {
 	int size = cachep->object_size;
@@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 
 #if DEBUG
 	/*
-	 * If we're going to use the generic kernel_map_pages()
+	 * If we're going to use the generic debug_pagealloc_map_pages()
 	 * poisoning, then it's going to smash the contents of
 	 * the redzone and userword anyhow, so switch them off.
 	 */
-- 
2.28.0