From: Uladzislau Rezki
Date: Thu, 7 Mar 2024 20:16:24 +0100
To: Baoquan He, rulinhuang
Cc: Uladzislau Rezki, rulinhuang, akpm@linux-foundation.org,
	colin.king@intel.com, hch@infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, lstoakes@gmail.com, tianyou.li@intel.com,
	tim.c.chen@intel.com, wangyang.guo@intel.com, zhiguo.zhou@intel.com
Subject: Re: [PATCH v7 1/2] mm/vmalloc: Moved macros with no functional change happened
References: <20240301155417.1852290-1-rulin.huang@intel.com>
	<20240301155417.1852290-2-rulin.huang@intel.com>
On Thu, Mar 07, 2024 at 09:23:10AM +0800, Baoquan He wrote:
> On 03/06/24 at 08:01pm, Uladzislau Rezki wrote:
> > On Fri, Mar 01, 2024 at 10:54:16AM -0500, rulinhuang wrote:
> ......
> >
> > Sorry for the late answer, i also just noticed this email. It was not in
> > my inbox...
> >
> > OK, now you move part of the per-cpu allocator on the top and leave
> > another part down making it split. This is just for the:
> >
> > BUG_ON(va_flags & VMAP_RAM);
> >
> > VMAP_RAM macro. Do we really need this BUG_ON()?
>
> Sorry, I suggested that when reviewing v5:
> https://lore.kernel.org/all/ZdiltpK5fUvwVWtD@MiWiFi-R3L-srv/T/#u
>
> About moving part of the per-cpu kva allocator and the split it makes,
> I would argue that we will have the vmap_nodes definition and basic
> helper functions like addr_to_node_id() etc at the top, and leave the
> other part, like size_to_va_pool(), node_pool_add_va() etc, down.
> These are similar.
>
> While about whether we should add 'BUG_ON(va_flags & VMAP_RAM);', I am
> not sure about it. When I suggested that, I was also hesitant. In the
> current code, alloc_vmap_area() is called in the three functions below;
> only __get_vm_area_node() will pass a non-NULL vm.
>
> new_vmap_block() -------|
> vm_map_ram() ----------> alloc_vmap_area()
> __get_vm_area_node() ---|
>
> Could it be wrongly passed in the future? Only checking whether vm is
> non-NULL makes me feel a little unsafe. Still, I am fine with removing
> the BUG_ON, because there is no worry in the current code. We can wait
> and see in the future.
>
> 	if (vm) {
> 		BUG_ON(va_flags & VMAP_RAM);
> 		setup_vmalloc_vm(vm, va, flags, caller);
> 	}
>
I would remove it, because it is really hard to get this wrong: there is
only one place that passes a non-NULL vm. Also, BUG_ON() is really a
show stopper.

I really appreciate what rulinhuang is doing, and I understand that it
might not be so easy. So if we can avoid moving the code, it looks to me
like we can; and if we can pass fewer arguments into alloc_vmap_area(),
since it is overloaded, that would be great.
Just an example:

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 25a8df497255..b6050e018539 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1841,6 +1841,30 @@ node_alloc(unsigned long size, unsigned long align,
 	return va;
 }
 
+static inline void
+__pre_setup_vmalloc_vm(struct vm_struct *vm,
+		unsigned long flags, const void *caller)
+{
+	vm->flags = flags;
+	vm->caller = caller;
+}
+
+static inline void
+__post_setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va)
+{
+	vm->addr = (void *)va->va_start;
+	vm->size = va->va_end - va->va_start;
+	va->vm = vm;
+}
+
+static inline void
+setup_vmalloc_vm_locked(struct vm_struct *vm, struct vmap_area *va,
+		unsigned long flags, const void *caller)
+{
+	__pre_setup_vmalloc_vm(vm, flags, caller);
+	__post_setup_vmalloc_vm(vm, va);
+}
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
@@ -1849,7 +1873,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 				unsigned long align,
 				unsigned long vstart, unsigned long vend,
 				int node, gfp_t gfp_mask,
-				unsigned long va_flags)
+				unsigned long va_flags, struct vm_struct *vm)
 {
 	struct vmap_node *vn;
 	struct vmap_area *va;
@@ -1912,6 +1936,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	va->vm = NULL;
 	va->flags = (va_flags | vn_id);
 
+	if (vm)
+		__post_setup_vmalloc_vm(vm, va);
+
 	vn = addr_to_node(va->va_start);
 
 	spin_lock(&vn->busy.lock);
@@ -2486,7 +2513,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
 					VMALLOC_START, VMALLOC_END,
 					node, gfp_mask,
-					VMAP_RAM|VMAP_BLOCK);
+					VMAP_RAM|VMAP_BLOCK, NULL);
 	if (IS_ERR(va)) {
 		kfree(vb);
 		return ERR_CAST(va);
@@ -2843,7 +2870,8 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
 	struct vmap_area *va;
 
 	va = alloc_vmap_area(size, PAGE_SIZE, VMALLOC_START, VMALLOC_END,
-				node, GFP_KERNEL, VMAP_RAM);
+				node, GFP_KERNEL, VMAP_RAM, NULL);
+
 	if (IS_ERR(va))
 		return NULL;
 
@@ -2946,26 +2974,6 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
 }
 
-static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
-	struct vmap_area *va, unsigned long flags, const void *caller)
-{
-	vm->flags = flags;
-	vm->addr = (void *)va->va_start;
-	vm->size = va->va_end - va->va_start;
-	vm->caller = caller;
-	va->vm = vm;
-}
-
-static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
-	unsigned long flags, const void *caller)
-{
-	struct vmap_node *vn = addr_to_node(va->va_start);
-
-	spin_lock(&vn->busy.lock);
-	setup_vmalloc_vm_locked(vm, va, flags, caller);
-	spin_unlock(&vn->busy.lock);
-}
-
 static void clear_vm_uninitialized_flag(struct vm_struct *vm)
 {
 	/*
@@ -3002,14 +3010,15 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (!(flags & VM_NO_GUARD))
 		size += PAGE_SIZE;
 
-	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0);
+	/* post-setup is done in the alloc_vmap_area(). */
+	__pre_setup_vmalloc_vm(area, flags, caller);
+
+	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area);
 	if (IS_ERR(va)) {
 		kfree(area);
 		return NULL;
 	}
 
-	setup_vmalloc_vm(area, va, flags, caller);
-
 	/*
 	 * Mark pages for non-VM_ALLOC mappings as accessible. Do it now as a
 	 * best-effort approach, as they can be mapped outside of vmalloc code.

--
Uladzislau Rezki
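P.S. For readers following the thread, the shape of the proposal can be
sketched as a tiny userspace toy. The structs below are simplified
stand-ins for the kernel's vm_struct/vmap_area (no locking, no real
allocation); only the pre/post setup split and the optional-vm argument
idea from the diff above are kept:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures (illustration only). */
struct vm_struct {
	void *addr;
	unsigned long size;
	unsigned long flags;
	const void *caller;
};

struct vmap_area {
	unsigned long va_start;
	unsigned long va_end;
	struct vm_struct *vm;
};

/* Fields that are known before any vmap_area exists. */
static void pre_setup_vm(struct vm_struct *vm, unsigned long flags,
			 const void *caller)
{
	vm->flags = flags;
	vm->caller = caller;
}

/* Fields that require the allocated vmap_area. */
static void post_setup_vm(struct vm_struct *vm, struct vmap_area *va)
{
	vm->addr = (void *)va->va_start;
	vm->size = va->va_end - va->va_start;
	va->vm = vm;
}

/*
 * Toy allocator: like the patched alloc_vmap_area(), it links the
 * new area to a vm_struct only when the caller passed one, so the
 * vm_map_ram()-style callers simply pass NULL.
 */
static void alloc_and_link(struct vmap_area *va, unsigned long start,
			   unsigned long end, struct vm_struct *vm)
{
	va->va_start = start;
	va->va_end = end;
	va->vm = 0;
	if (vm)
		post_setup_vm(vm, va);
}
```

With this split, the __get_vm_area_node()-style caller only runs
pre_setup_vm() before the allocation and passes the vm in, so the
separate setup_vmalloc_vm() call (and its extra busy-lock round trip)
after allocation disappears.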