From: Nicholas Piggin
To: Paul Menzel
Cc: Nicholas Piggin, x86@kernel.org, Song Liu, "Edgecombe, Rick P",
	"Torvalds, Linus", akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/2] Revert "vmalloc: replace VM_NO_HUGE_VMAP with VM_ALLOW_HUGE_VMAP"
Date: Fri, 22 Apr 2022 16:01:06 +1000
Message-Id: <20220422060107.781512-3-npiggin@gmail.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220422060107.781512-1-npiggin@gmail.com>
References: <20220422060107.781512-1-npiggin@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This reverts commit 559089e0a93d44280ec3ab478830af319c56dbe3.

The previous commit fixes huge vmalloc for drivers that use
vmalloc_to_page() on the returned struct pages, so huge mappings can be
allowed by default again and callers that need PAGE_SIZE ptes opt out
with VM_NO_HUGE_VMAP.

Signed-off-by: Nicholas Piggin
---
 arch/Kconfig                 |  6 ++++--
 arch/powerpc/kernel/module.c |  2 +-
 arch/s390/kvm/pv.c           |  7 ++++++-
 include/linux/vmalloc.h      |  4 ++--
 mm/vmalloc.c                 | 17 +++++++----------
 5 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 31c4fdc4a4ba..29b0167c088b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -854,8 +854,10 @@ config HAVE_ARCH_HUGE_VMAP
 
 #
 #  Archs that select this would be capable of PMD-sized vmaps (i.e.,
-#  arch_vmap_pmd_supported() returns true). The VM_ALLOW_HUGE_VMAP flag
-#  must be used to enable allocations to use hugepages.
+#  arch_vmap_pmd_supported() returns true), and they must make no assumptions
+#  that vmalloc memory is mapped with PAGE_SIZE ptes. The VM_NO_HUGE_VMAP flag
+#  can be used to prohibit arch-specific allocations from using hugepages to
+#  help with this (e.g., modules may require it).
 #
 config HAVE_ARCH_HUGE_VMALLOC
 	depends on HAVE_ARCH_HUGE_VMAP
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index 97a76a8619fb..40a583e9d3c7 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -101,7 +101,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool
 	 * too.
 	 */
 	return __vmalloc_node_range(size, 1, start, end, gfp, prot,
-				    VM_FLUSH_RESET_PERMS,
+				    VM_FLUSH_RESET_PERMS | VM_NO_HUGE_VMAP,
 				    NUMA_NO_NODE, __builtin_return_address(0));
 }
 
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index cc7c9599f43e..7f7c0d6af2ce 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -137,7 +137,12 @@ static int kvm_s390_pv_alloc_vm(struct kvm *kvm)
 	/* Allocate variable storage */
 	vlen = ALIGN(virt * ((npages * PAGE_SIZE) / HPAGE_SIZE), PAGE_SIZE);
 	vlen += uv_info.guest_virt_base_stor_len;
-	kvm->arch.pv.stor_var = vzalloc(vlen);
+	/*
+	 * The Create Secure Configuration Ultravisor Call does not support
+	 * using large pages for the virtual memory area.
+	 * This is a hardware limitation.
+	 */
+	kvm->arch.pv.stor_var = vmalloc_no_huge(vlen);
 	if (!kvm->arch.pv.stor_var)
 		goto out_err;
 	return 0;
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b159c2789961..3b1df7da402d 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -26,7 +26,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
 #define VM_FLUSH_RESET_PERMS	0x00000100	/* reset direct map and flush TLB on unmap, can't be freed in atomic context */
 #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
-#define VM_ALLOW_HUGE_VMAP	0x00000400      /* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
+#define VM_NO_HUGE_VMAP		0x00000400	/* force PAGE_SIZE pte mapping */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
 	!defined(CONFIG_KASAN_VMALLOC)
@@ -153,7 +153,7 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			const void *caller) __alloc_size(1);
 void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
 		int node, const void *caller) __alloc_size(1);
-void *vmalloc_huge(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
+void *vmalloc_no_huge(unsigned long size) __alloc_size(1);
 
 extern void *__vmalloc_array(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
 extern void *vmalloc_array(size_t n, size_t size) __alloc_size(1, 2);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index cadfbb5155ea..09470361dc03 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3101,7 +3101,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		return NULL;
 	}
 
-	if (vmap_allow_huge && (vm_flags & VM_ALLOW_HUGE_VMAP)) {
+	if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
 		unsigned long size_per_node;
 
 		/*
@@ -3268,24 +3268,21 @@ void *vmalloc(unsigned long size)
 EXPORT_SYMBOL(vmalloc);
 
 /**
- * vmalloc_huge - allocate virtually contiguous memory, allow huge pages
- * @size:      allocation size
- * @gfp_mask:  flags for the page level allocator
+ * vmalloc_no_huge - allocate virtually contiguous memory using small pages
+ * @size:    allocation size
  *
- * Allocate enough pages to cover @size from the page level
+ * Allocate enough non-huge pages to cover @size from the page level
  * allocator and map them into contiguous kernel virtual space.
- * If @size is greater than or equal to PMD_SIZE, allow using
- * huge pages for the memory
  *
  * Return: pointer to the allocated memory or %NULL on error
  */
-void *vmalloc_huge(unsigned long size, gfp_t gfp_mask)
+void *vmalloc_no_huge(unsigned long size)
 {
 	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
-				    gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
+				    GFP_KERNEL, PAGE_KERNEL, VM_NO_HUGE_VMAP,
 				    NUMA_NO_NODE, __builtin_return_address(0));
 }
-EXPORT_SYMBOL_GPL(vmalloc_huge);
+EXPORT_SYMBOL(vmalloc_no_huge);
 
 /**
  * vzalloc - allocate virtually contiguous memory with zero fill
-- 
2.35.1
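
P.S. For readers who want the restored flag semantics spelled out: with
VM_NO_HUGE_VMAP, sufficiently large vmalloc areas may again be PMD-mapped
by default on HAVE_ARCH_HUGE_VMALLOC architectures, and callers that rely
on PAGE_SIZE ptes opt out, either via vmalloc_no_huge() or by passing
VM_NO_HUGE_VMAP to __vmalloc_node_range() as the powerpc hunk does. A
minimal caller-side sketch follows; it is illustrative only and not part
of the patch, and example_alloc_bufs() and its buffer sizes are made up:

#include <linux/vmalloc.h>

static void *example_alloc_bufs(unsigned long big_size,
				unsigned long small_size,
				void **pte_mapped)
{
	void *buf;

	/* Default path: may use huge (PMD) mappings if the arch and size allow. */
	buf = vmalloc(big_size);
	if (!buf)
		return NULL;

	/*
	 * Opt-out path: the consumer of this area assumes PAGE_SIZE ptes
	 * (e.g. firmware or hardware that cannot accept large pages, as in
	 * the s390 PV hunk above), so force small-page mappings.
	 */
	*pte_mapped = vmalloc_no_huge(small_size);
	if (!*pte_mapped) {
		vfree(buf);
		return NULL;
	}

	return buf;
}

Both areas are freed with plain vfree(); the huge-mapping decision stays
internal to vmalloc and does not change the caller-visible API.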