From mboxrd@z Thu Jan  1 00:00:00 1970
From: Uladzislau Rezki <urezki@gmail.com>
Date: Wed, 21 Feb 2024 09:36:44 +0100
To: rulinhuang
Cc: akpm@linux-foundation.org, colin.king@intel.com, hch@infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, lstoakes@gmail.com,
	tianyou.li@intel.com, tim.c.chen@intel.com, urezki@gmail.com,
	wangyang.guo@intel.com, zhiguo.zhou@intel.com
Subject: Re: [PATCH v3] mm/vmalloc: lock contention optimization under multi-threading
References: <20240207033059.1565623-1-rulin.huang@intel.com>
	<20240221032905.11392-1-rulin.huang@intel.com>
In-Reply-To: <20240221032905.11392-1-rulin.huang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Tue, Feb 20, 2024 at 10:29:05PM -0500, rulinhuang wrote:
> When allocating a new memory area where the mapping address range is
> known, it is observed that the vmap_area lock is acquired twice.
> The first acquisition occurs in the alloc_vmap_area() function when
> inserting the vm area into the vm mapping red-black tree. The second
> acquisition occurs in the setup_vmalloc_vm() function when updating the
> properties of the vm, such as flags and address, etc.
> Combine these two operations together in alloc_vmap_area(), which
> improves scalability when the vmap_area lock is contended. By doing so,
> the need to acquire the lock twice can also be eliminated.
> With the above change, tested on intel icelake platform(160 vcpu, kernel
> v6.7), a 6% performance improvement and a 7% reduction in overall
> spinlock hotspot are gained on
> stress-ng/pthread(https://github.com/ColinIanKing/stress-ng), which is
> the stress test of thread creations.
>
> Reviewed-by: Chen Tim C
> Reviewed-by: King Colin
> Signed-off-by: rulinhuang
> ---
> V1 -> V2: Avoided the partial initialization issue of vm and
> separated insert_vmap_area() from alloc_vmap_area()
> V2 -> V3: Rebased on 6.8-rc5
> ---
>  mm/vmalloc.c | 36 +++++++++++++++++++++---------------
>  1 file changed, 21 insertions(+), 15 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d12a17fc0c17..768e45f2ed94 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1630,17 +1630,18 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	va->vm = NULL;
>  	va->flags = va_flags;
>
> -	spin_lock(&vmap_area_lock);
> -	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
> -	spin_unlock(&vmap_area_lock);
> -
>  	BUG_ON(!IS_ALIGNED(va->va_start, align));
>  	BUG_ON(va->va_start < vstart);
>  	BUG_ON(va->va_end > vend);
>
>  	ret = kasan_populate_vmalloc(addr, size);
>  	if (ret) {
> -		free_vmap_area(va);
> +		/*
> +		 * Insert/Merge it back to the free tree/list.
> +		 */
> +		spin_lock(&free_vmap_area_lock);
> +		merge_or_add_vmap_area_augment(va, &free_vmap_area_root, &free_vmap_area_list);
> +		spin_unlock(&free_vmap_area_lock);
>  		return ERR_PTR(ret);
>  	}
>
> @@ -1669,6 +1670,13 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	return ERR_PTR(-EBUSY);
>  }
>
> +static inline void insert_vmap_area_with_lock(struct vmap_area *va)
> +{
> +	spin_lock(&vmap_area_lock);
> +	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
> +	spin_unlock(&vmap_area_lock);
> +}
> +
>  int register_vmap_purge_notifier(struct notifier_block *nb)
>  {
>  	return blocking_notifier_chain_register(&vmap_notify_list, nb);
> @@ -2045,6 +2053,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
>  		return ERR_CAST(va);
>  	}
>
> +	insert_vmap_area_with_lock(va);
> +
>  	vaddr = vmap_block_vaddr(va->va_start, 0);
>  	spin_lock_init(&vb->lock);
>  	vb->va = va;
> @@ -2398,6 +2408,8 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
>  	if (IS_ERR(va))
>  		return NULL;
>
> +	insert_vmap_area_with_lock(va);
> +
>  	addr = va->va_start;
>  	mem = (void *)addr;
>  }
> @@ -2538,7 +2550,7 @@ static void vmap_init_free_space(void)
>  	}
>  }
>
> -static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
> +static inline void setup_vmalloc_vm(struct vm_struct *vm,
>  	struct vmap_area *va, unsigned long flags, const void *caller)
>  {
>  	vm->flags = flags;
> @@ -2548,14 +2560,6 @@ static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
>  	va->vm = vm;
>  }
>
> -static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
> -	unsigned long flags, const void *caller)
> -{
> -	spin_lock(&vmap_area_lock);
> -	setup_vmalloc_vm_locked(vm, va, flags, caller);
> -	spin_unlock(&vmap_area_lock);
> -}
> -
>  static void clear_vm_uninitialized_flag(struct vm_struct *vm)
>  {
>  	/*
> @@ -2600,6 +2604,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
>
>  	setup_vmalloc_vm(area, va, flags, caller);
>
> +	insert_vmap_area_with_lock(va);
> +
>  	/*
>  	 * Mark pages for non-VM_ALLOC mappings as accessible. Do it now as a
>  	 * best-effort approach, as they can be mapped outside of vmalloc code.
> @@ -4166,7 +4172,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
>  	for (area = 0; area < nr_vms; area++) {
>  		insert_vmap_area(vas[area], &vmap_area_root, &vmap_area_list);
>
> -		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
> +		setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
>  				pcpu_get_vm_areas);
>  	}
>  	spin_unlock(&vmap_area_lock);
>
> base-commit: b401b621758e46812da61fa58a67c3fd8d91de0d
> --
> 2.43.0
>

Spreading insert_vmap_area_with_lock() across several callers, such as
__get_vm_area_node(), new_vmap_block(), vm_map_ram(), etc., is not a good
approach, simply because it changes the behaviour and people might miss
this point.

Could you please re-spin it on mm-unstable? The vmalloc code has changed
a lot there. From my side I can check and help you fix it in a better
way, because v3 should be improved anyway.

Apparently I have not seen your messages for some reason; I do not
understand why. I started to get emails with the subject below:

"Bounce probe for linux-kernel@vger.kernel.org (no action required)"

--
Uladzislau Rezki