From: Roman Gushchin
Subject: [PATCH 2/3] mm: separate memory allocation and actual work in alloc_vmap_area()
Date: Wed, 19 Dec 2018 09:37:50 -0800
Message-Id: <20181219173751.28056-3-guro@fb.com>
In-Reply-To: <20181219173751.28056-1-guro@fb.com>
References: <20181219173751.28056-1-guro@fb.com>
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Johannes Weiner, Michal Hocko, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, Roman Gushchin

alloc_vmap_area() both allocates memory for the vmap_area and performs
the actual lookup of the vm area and the vmap_area initialization. This
prevents us from using pre-allocated memory for the vmap_area structure,
which in some cases could minimize the number of required memory
allocations.

Let's keep the memory allocation part in alloc_vmap_area() and separate
everything else into init_vmap_area().

Signed-off-by: Roman Gushchin
Acked-by: Johannes Weiner
Reviewed-by: Matthew Wilcox
---
 mm/vmalloc.c | 50 +++++++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7660e3ef4133..042175d7d95f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -395,16 +395,10 @@ static void purge_vmap_area_lazy(void);
 
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 
-/*
- * Allocate a region of KVA of the specified size and alignment, within the
- * vstart and vend.
- */
-static struct vmap_area *alloc_vmap_area(unsigned long size,
-				unsigned long align,
-				unsigned long vstart, unsigned long vend,
-				int node, gfp_t gfp_mask)
+static int init_vmap_area(struct vmap_area *va, unsigned long size,
+			  unsigned long align, unsigned long vstart,
+			  unsigned long vend, int node, gfp_t gfp_mask)
 {
-	struct vmap_area *va;
 	struct rb_node *n;
 	unsigned long addr;
 	int purged = 0;
@@ -416,11 +410,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 	might_sleep();
 
-	va = kmalloc_node(sizeof(struct vmap_area),
-			gfp_mask & GFP_RECLAIM_MASK, node);
-	if (unlikely(!va))
-		return ERR_PTR(-ENOMEM);
-
 	/*
 	 * Only scan the relevant parts containing pointers to other objects
 	 * to avoid false negatives.
@@ -512,7 +501,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	BUG_ON(va->va_start < vstart);
 	BUG_ON(va->va_end > vend);
 
-	return va;
+	return 0;
 
 overflow:
 	spin_unlock(&vmap_area_lock);
@@ -534,8 +523,35 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit())
 		pr_warn("vmap allocation for size %lu failed: use vmalloc=<size> to increase size\n",
 			size);
-	kfree(va);
-	return ERR_PTR(-EBUSY);
+
+	return -EBUSY;
+}
+
+/*
+ * Allocate a region of KVA of the specified size and alignment, within the
+ * vstart and vend.
+ */
+static struct vmap_area *alloc_vmap_area(unsigned long size,
+					 unsigned long align,
+					 unsigned long vstart,
+					 unsigned long vend,
+					 int node, gfp_t gfp_mask)
+{
+	struct vmap_area *va;
+	int ret;
+
+	va = kmalloc_node(sizeof(struct vmap_area),
+			  gfp_mask & GFP_RECLAIM_MASK, node);
+	if (unlikely(!va))
+		return ERR_PTR(-ENOMEM);
+
+	ret = init_vmap_area(va, size, align, vstart, vend, node, gfp_mask);
+	if (ret) {
+		kfree(va);
+		return ERR_PTR(ret);
+	}
+
+	return va;
 }
 
 int register_vmap_purge_notifier(struct notifier_block *nb)
-- 
2.19.2
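
To illustrate what the split enables: a caller can now pay for the
kmalloc() once and retry the range search with the same vmap_area
object. The sketch below is illustrative only and not part of the
patch; try_vmap_ranges() and struct kva_range are hypothetical names,
while init_vmap_area() is the helper introduced above.

/*
 * Illustration only, not part of the patch: one allocation covers any
 * number of placement attempts across candidate KVA ranges.
 */
struct kva_range {
	unsigned long start;
	unsigned long end;
};

static struct vmap_area *try_vmap_ranges(const struct kva_range *ranges,
					 int nr, unsigned long size,
					 unsigned long align, int node,
					 gfp_t gfp_mask)
{
	struct vmap_area *va;
	int i, ret = -EBUSY;

	/* Allocate the vmap_area once, up front. */
	va = kmalloc_node(sizeof(*va), gfp_mask & GFP_RECLAIM_MASK, node);
	if (unlikely(!va))
		return ERR_PTR(-ENOMEM);

	/* Reuse the same pre-allocated object for every attempt. */
	for (i = 0; i < nr; i++) {
		ret = init_vmap_area(va, size, align, ranges[i].start,
				     ranges[i].end, node, gfp_mask);
		if (!ret)
			return va;
	}

	kfree(va);
	return ERR_PTR(ret);
}

With the pre-split code this pattern would have cost one allocation and
one free per failed attempt; with init_vmap_area() the retry loop is
allocation-free.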