From mboxrd@z Thu Jan 1 00:00:00 1970
From: Uladzislau Rezki <urezki@gmail.com>
Date: Mon, 2 Sep 2024 19:00:44 +0200
To: Adrian Huang
Cc: urezki@gmail.com, ahuang12@lenovo.com, akpm@linux-foundation.org,
	hch@infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 1/1] mm: vmalloc: Optimize vmap_lazy_nr arithmetic when
 purging each vmap_area
References: <20240902120046.26478-1-ahuang12@lenovo.com>
In-Reply-To: <20240902120046.26478-1-ahuang12@lenovo.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
On Mon, Sep 02, 2024 at 08:00:46PM +0800, Adrian Huang wrote:
> On Fri, Aug 30, 2024 at 3:00 AM Uladzislau Rezki wrote:
> > atomic_long_add_return() might also introduce a high contention. We can
> > optimize by splitting into more light atomics. Can you check it on your
> > 448-cores system?
>
> Interestingly, the following result shows the latency of
> free_vmap_area_noflush() is just 26 usecs (the worst case is 16ms-32ms).
>
> /home/git-repo/bcc/tools/funclatency.py -u free_vmap_area_noflush & pid1=$! && sleep 8 && modprobe test_vmalloc nr_threads=$(nproc) run_test_mask=0x7; kill -SIGINT $pid1
>
>      usecs               : count     distribution
>          0 -> 1          : 18166     |                                        |
>          2 -> 3          : 41929818  |**                                      |
>          4 -> 7          : 181203439 |***********                             |
>          8 -> 15         : 464242836 |*****************************           |
>         16 -> 31         : 620077545 |****************************************|
>         32 -> 63         : 442133041 |****************************            |
>         64 -> 127        : 111432597 |*******                                 |
>        128 -> 255        : 3441649   |                                        |
>        256 -> 511        : 302655    |                                        |
>        512 -> 1023       : 738       |                                        |
>       1024 -> 2047       : 73        |                                        |
>       2048 -> 4095       : 0         |                                        |
>       4096 -> 8191       : 0         |                                        |
>       8192 -> 16383      : 0         |                                        |
>      16384 -> 32767      : 196       |                                        |
>
> avg = 26 usecs, total: 49415657269 usecs, count: 1864782753
>
> free_vmap_area_noflush() executes the lock prefix only once, so its
> worst case might be just about a hundred clock cycles.
>
> The problem of purge_vmap_node() is that some cores are busy purging
> each vmap_area of the *long* purge_list and executing atomic_long_sub()
> for each vmap_area, while other cores free vmalloc allocations and execute
> atomic_long_add_return() in free_vmap_area_noflush(). The following crash
> log shows that 22 cores are busy purging vmap_area structs [1]:
>
> crash> bt -a | grep "purge_vmap_node+291" | wc -l
> 22
>
> So, the latency of purge_vmap_node() dramatically increases because it
> executes the lock prefix over 6,000,000 times. The issue is easier to
> reproduce if more cores execute purge_vmap_node() simultaneously.
>
Right. This is clear to me. Under heavy stressing in a tight loop we
invoke atomic_long_sub() once per freed VA. With 448 cores and one
stress job per CPU, we end up with a high-contention spot, because each
access to the atomic requires a cache-line lock.

> Tested the following patch with the light atomics.
However, nothing improved (but the worst case is better):
>
>      usecs               : count     distribution
>          0 -> 1          : 7146      |                                        |
>          2 -> 3          : 31734187  |**                                      |
>          4 -> 7          : 161408609 |***********                             |
>          8 -> 15         : 461411377 |*********************************       |
>         16 -> 31         : 557005293 |****************************************|
>         32 -> 63         : 435518485 |*******************************         |
>         64 -> 127        : 175033097 |************                            |
>        128 -> 255        : 42265379  |***                                     |
>        256 -> 511        : 399112    |                                        |
>        512 -> 1023       : 734       |                                        |
>       1024 -> 2047       : 72        |                                        |
>
> avg = 32 usecs, total: 59952713176 usecs, count: 1864783491
>
Thank you for checking this! So there is no difference. As for the worst
case, it might be a measurement error: we measure a time which includes
a context switch, because the context that triggers the
free_vmap_area_noflush() function can easily be preempted.

--
Uladzislau Rezki