From: Uladzislau Rezki <urezki@gmail.com>
Date: Thu, 29 Feb 2024 11:33:24 +0100
To: Baoquan He, rulinhuang
Cc: rulinhuang, urezki@gmail.com, akpm@linux-foundation.org,
	colin.king@intel.com, hch@infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, lstoakes@gmail.com, tianyou.li@intel.com,
	tim.c.chen@intel.com, wangyang.guo@intel.com, zhiguo.zhou@intel.com
Subject: Re: [PATCH v6] mm/vmalloc: lock contention optimization under multi-threading
References: <20240229082611.4104839-1-rulin.huang@intel.com>

On Thu, Feb 29, 2024 at 06:12:00PM +0800, Baoquan He wrote:
> Hi Rulin,
>
> Thanks for the great work and v6. Some concerns, please see the inline
> comments.
>
> On 02/29/24 at 12:26am, rulinhuang wrote:
> > When allocating a new memory area where the mapping address range is
> > known, it is observed that the vmap_node->busy.lock is acquired twice.
> >
> > The first acquisition occurs in the alloc_vmap_area() function when
> > inserting the vm area into the vm mapping red-black tree. The second
> > acquisition occurs in the setup_vmalloc_vm() function when updating
> > the properties of the vm, such as flags, address, etc.
> >
> > Combine these two operations in alloc_vmap_area(), which improves
> > scalability when the vmap_node->busy.lock is contended. By doing so,
> > the lock needs to be acquired only once instead of twice.
> >
> > With the above change, tested on an Intel Sapphire Rapids platform
> > (224 vCPUs), a 4% performance improvement is gained on stress-ng/pthread
> > (https://github.com/ColinIanKing/stress-ng), which stress-tests thread
> > creation.
> >
> > Reviewed-by: Uladzislau Rezki
> > Reviewed-by: Baoquan He
> > Reviewed-by: "Chen, Tim C"
> > Reviewed-by: "King, Colin"
>
> We possibly need to remove these reviewers' tags when a new code change
> is taken, so that people check again and add their Acked-by or
> Reviewed-by if they agree, or add new comments if they have any concern.
>
> > Signed-off-by: rulinhuang
> > ---
> > V1 -> V2: Avoided the partial initialization issue of vm and
> >           separated insert_vmap_area() from alloc_vmap_area()
> > V2 -> V3: Rebased on 6.8-rc5
> > V3 -> V4: Rebased on mm-unstable branch
> > V4 -> V5: Cancelled the split of alloc_vmap_area()
> >           and kept insert_vmap_area()
> > V5 -> V6: Added a BUG_ON()
> > ---
> >  mm/vmalloc.c | 132 +++++++++++++++++++++++++--------------------------
> >  1 file changed, 64 insertions(+), 68 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 25a8df497255..5ae028b0d58d 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1841,15 +1841,66 @@ node_alloc(unsigned long size, unsigned long align,
> >  	return va;
> >  }
> >
> > +/*** Per cpu kva allocator ***/
> > +
> > +/*
> > + * vmap space is limited especially on 32 bit architectures. Ensure there is
> > + * room for at least 16 percpu vmap blocks per CPU.
> > + */
> > +/*
> > + * If we had a constant VMALLOC_START and VMALLOC_END, we'd like to be able
> > + * to #define VMALLOC_SPACE	(VMALLOC_END-VMALLOC_START). Guess
> > + * instead (we just need a rough idea)
> > + */
> > +#if BITS_PER_LONG == 32
> > +#define VMALLOC_SPACE		(128UL*1024*1024)
> > +#else
> > +#define VMALLOC_SPACE		(128UL*1024*1024*1024)
> > +#endif
> > +
> > +#define VMALLOC_PAGES		(VMALLOC_SPACE / PAGE_SIZE)
> > +#define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
> > +#define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
> > +#define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
> > +#define VMAP_MIN(x, y)		((x) < (y) ? (x) : (y)) /* can't use min() */
> > +#define VMAP_MAX(x, y)		((x) > (y) ? (x) : (y)) /* can't use max() */
> > +#define VMAP_BBMAP_BITS	\
> > +		VMAP_MIN(VMAP_BBMAP_BITS_MAX,	\
> > +		VMAP_MAX(VMAP_BBMAP_BITS_MIN,	\
> > +			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
> > +
> > +#define VMAP_BLOCK_SIZE	(VMAP_BBMAP_BITS * PAGE_SIZE)
> > +
> > +/*
> > + * Purge threshold to prevent overeager purging of fragmented blocks for
> > + * regular operations: Purge if vb->free is less than 1/4 of the capacity.
> > + */
> > +#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
> > +
> > +#define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
> > +#define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
> > +#define VMAP_FLAGS_MASK	0x3
>
> This code movement is made because we need to check VMAP_RAM in advance.
> We may need to move all the data structures and basic helpers related to
> the per-CPU kva allocator up as well, along with these macros, just as
> was done for the newly introduced vmap_node. If that's agreed, it had
> better be done in a separate patch. That is my personal opinion; not
> sure if Uladzislau has different thoughts.
>
> Other than this, the overall change looks good to me.
>
I agree, the split should be done: one patch is a preparation move stating
that no functional change happens, and the final one is the actual change.

--
Uladzislau Rezki
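
To make the locking pattern under discussion concrete, here is a minimal
sketch of the before/after shapes described in the commit message. It is
illustrative only, not the actual mm/vmalloc.c code: alloc_path_before()
and alloc_path_after() are invented names, while addr_to_node(),
insert_vmap_area() and the per-node "busy" rb-tree/list mirror helpers
that do exist in mm/vmalloc.c.

/*
 * Illustrative sketch only -- NOT the actual mm/vmalloc.c code.
 * alloc_path_before()/alloc_path_after() are invented names.
 */

/* Before the patch: the per-node busy.lock is taken twice. */
static void alloc_path_before(struct vmap_area *va, struct vm_struct *vm)
{
	struct vmap_node *vn = addr_to_node(va->va_start);

	spin_lock(&vn->busy.lock);	/* 1st: alloc_vmap_area() insert */
	insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
	spin_unlock(&vn->busy.lock);

	spin_lock(&vn->busy.lock);	/* 2nd: setup_vmalloc_vm() */
	va->vm = vm;
	vm->addr = (void *)va->va_start;
	spin_unlock(&vn->busy.lock);
}

/* After the patch: set up the vm fields first, then insert once. */
static void alloc_path_after(struct vmap_area *va, struct vm_struct *vm)
{
	struct vmap_node *vn = addr_to_node(va->va_start);

	/* Safe without the lock: va is not yet visible to other CPUs. */
	va->vm = vm;
	vm->addr = (void *)va->va_start;

	spin_lock(&vn->busy.lock);	/* single acquisition */
	insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
	spin_unlock(&vn->busy.lock);
}

The key point is that va->vm and vm->addr can be written before the
vmap_area becomes visible to other CPUs, so the second lock round-trip
disappears.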
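
As a worked example of the VMAP_BBMAP_BITS clamp in the quoted hunk: on a
64-bit kernel with 4 KiB pages, VMALLOC_PAGES = 128G / 4K = 33554432;
assuming NR_CPUS = 64 (a hypothetical config value), the raw term
33554432 / 64 / 16 = 32768 exceeds VMAP_BBMAP_BITS_MAX, so the result
clamps to 1024 bits, i.e. a 4 MiB vmap block. The small userspace program
below reproduces the computation; PAGE_SIZE, NR_CPUS and the 128 GiB
guess are hard-coded assumptions.

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define NR_CPUS		64UL	/* assumed config value */
#define BITS_PER_LONG	64UL

static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

int main(void)
{
	unsigned long space = 128UL * 1024 * 1024 * 1024;	/* 64-bit guess */
	unsigned long pages = space / PAGE_SIZE;		/* 33554432 */
	unsigned long min_bits = BITS_PER_LONG * 2;		/* 128 */
	unsigned long max_bits = 1024;
	unsigned long bits = pages / roundup_pow_of_two(NR_CPUS) / 16;

	/* Clamp into [VMAP_BBMAP_BITS_MIN, VMAP_BBMAP_BITS_MAX]. */
	if (bits < min_bits)
		bits = min_bits;
	if (bits > max_bits)
		bits = max_bits;

	printf("VMAP_BBMAP_BITS = %lu, block = %lu KiB\n",
	       bits, bits * PAGE_SIZE / 1024);
	return 0;
}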