From mboxrd@z Thu Jan 1 00:00:00 1970
From: Joerg Roedel
To: x86@kernel.org
Cc: hpa@zytor.com, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Andrew Morton, Steven Rostedt, joro@8bytes.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Joerg Roedel
Subject: [PATCH v2 1/3] x86/mm/64: Pre-allocate p4d/pud pages for vmalloc area
Date: Wed, 1 Jul 2020 10:38:36 +0200
Message-Id: <20200701083839.19193-2-joro@8bytes.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200701083839.19193-1-joro@8bytes.org>
References: <20200701083839.19193-1-joro@8bytes.org>

From: Joerg Roedel

Pre-allocate the page-table pages for the vmalloc area at the level
which needs synchronization on x86. This is P4D for 5-level and PUD
for 4-level paging.

Doing this at boot makes sure that all page-tables in the system
already have these pages and do not need to be synchronized at
runtime. The runtime synchronization takes the pgd_lock and iterates
over all page-tables in the system, so it can take quite a long time
and is better avoided.
Signed-off-by: Joerg Roedel
---
 arch/x86/mm/init_64.c | 52 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index dbae185511cd..e76bdb001460 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1238,6 +1238,56 @@ static void __init register_page_bootmem_info(void)
 #endif
 }
 
+/*
+ * Pre-allocates page-table pages for the vmalloc area in the kernel page-table.
+ * Only the level which needs to be synchronized between all page-tables is
+ * allocated because the synchronization can be expensive.
+ */
+static void __init preallocate_vmalloc_pages(void)
+{
+	unsigned long addr;
+	const char *lvl;
+
+	for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
+		pgd_t *pgd = pgd_offset_k(addr);
+		p4d_t *p4d;
+		pud_t *pud;
+
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none(*p4d)) {
+			/* Can only happen with 5-level paging */
+			p4d = p4d_alloc(&init_mm, pgd, addr);
+			if (!p4d) {
+				lvl = "p4d";
+				goto failed;
+			}
+		}
+
+		if (pgtable_l5_enabled())
+			continue;
+
+		pud = pud_offset(p4d, addr);
+		if (pud_none(*pud)) {
+			/* Ends up here only with 4-level paging */
+			pud = pud_alloc(&init_mm, p4d, addr);
+			if (!pud) {
+				lvl = "pud";
+				goto failed;
+			}
+		}
+	}
+
+	return;
+
+failed:
+
+	/*
+	 * The pages have to be there now or they will be missing in
+	 * process page-tables later.
+	 */
+	panic("Failed to pre-allocate %s pages for vmalloc area\n", lvl);
+}
+
 void __init mem_init(void)
 {
 	pci_iommu_alloc();
@@ -1261,6 +1311,8 @@ void __init mem_init(void)
 	if (get_gate_vma(&init_mm))
 		kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
 				PAGE_SIZE, KCORE_USER);
 
+	preallocate_vmalloc_pages();
+
 	mem_init_print_info(NULL);
 }
 
-- 
2.17.1
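
[ Note for context: the snippet below is a simplified, illustrative sketch of
  the runtime synchronization the commit message refers to, loosely modelled
  on sync_global_pgds() in arch/x86/mm/init_64.c. The helper name and the
  details are illustrative only, not the actual kernel code. It shows why the
  sync path is costly: it takes the global pgd_lock and walks every
  page-table in the system. ]

/*
 * Illustrative sketch of the runtime sync path: take the global
 * pgd_lock and walk every PGD in the system to copy a newly created
 * top-level vmalloc entry from init_mm into it.
 */
static void sync_vmalloc_entry_sketch(unsigned long addr)
{
	pgd_t *pgd_ref = pgd_offset_k(addr);	/* reference entry in init_mm */
	struct page *page;

	if (pgd_none(*pgd_ref))
		return;

	spin_lock(&pgd_lock);
	list_for_each_entry(page, &pgd_list, lru) {
		pgd_t *pgd = (pgd_t *)page_address(page) + pgd_index(addr);

		/* Copy the missing entry into this page-table */
		if (pgd_none(*pgd))
			set_pgd(pgd, *pgd_ref);
	}
	spin_unlock(&pgd_lock);
}

With the P4D/PUD pages pre-allocated at boot, the entries at the level that
would otherwise need copying already exist in every page-table, so a walk
like this never has to run for vmalloc addresses.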