From: Uladzislau Rezki
Date: Fri, 17 Oct 2025 18:15:21 +0200
To: "Vishal Moola (Oracle)"
Cc: Uladzislau Rezki, Matthew Wilcox, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrew Morton
Subject: Re: [RFC PATCH] mm/vmalloc: request large order pages from buddy allocator
References: <20251014182754.4329-1-vishal.moola@gmail.com>
Content-Type: text/plain; charset=us-ascii
On Thu, Oct 16, 2025 at 12:02:59PM -0700, Vishal Moola (Oracle) wrote:
> On Thu, Oct 16, 2025 at 10:42:04AM -0700, Vishal Moola (Oracle) wrote:
> > On Thu, Oct 16, 2025 at 06:12:36PM +0200, Uladzislau Rezki wrote:
> > > On Wed, Oct 15, 2025 at 02:28:49AM -0700, Vishal Moola (Oracle) wrote:
> > > > On Wed, Oct 15, 2025 at 04:56:42AM +0100, Matthew Wilcox wrote:
> > > > > On Tue, Oct 14, 2025 at 11:27:54AM -0700, Vishal Moola (Oracle) wrote:
> > > > > > Running 1000 iterations of allocations on a small 4GB system finds:
> > > > > >
> > > > > > 1000 2mb allocations:
> > > > > > [Baseline]              [This patch]
> > > > > > real    46.310s         real    34.380s
> > > > > > user    0.001s          user    0.008s
> > > > > > sys     46.058s         sys     34.152s
> > > > > >
> > > > > > 10000 200kb allocations:
> > > > > > [Baseline]              [This patch]
> > > > > > real    56.104s         real    43.946s
> > > > > > user    0.001s          user    0.003s
> > > > > > sys     55.375s         sys     43.259s
> > > > > >
> > > > > > 10000 20kb allocations:
> > > > > > [Baseline]              [This patch]
> > > > > > real    0m8.438s        real    0m9.160s
> > > > > > user    0m0.001s        user    0m0.002s
> > > > > > sys     0m7.936s        sys     0m8.671s
> > > > >
> > > > > I'd be more confident in the 20kB numbers if you'd done 10x more
> > > > > iterations.
> > > >
> > > > I actually ran mine a number of times to mitigate the effects of
> > > > possibly too-small sample sizes, so I do have that number for you too:
> > > >
> > > > [Baseline]              [This patch]
> > > > real    1m28.119s       real    1m32.630s
> > > > user    0m0.012s        user    0m0.011s
> > > > sys     1m23.270s       sys     1m28.529s
> > >
> > > I have just had a look at the performance figures of this patch. The
> > > test case is a 16K allocation by one single thread, 1 000 000 loops,
> > > 10 runs:
> > >
> > > sudo ./test_vmalloc.sh run_test_mask=1 nr_threads=1 nr_pages=4
> >
> > The reason I didn't use this test module is the same concern Matthew
> > brought up earlier about testing the PCP list rather than the buddy
> > allocator. The test module allocates, then frees over and over again,
> > making it incredibly prone to reuse the pages over and over again.
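For reference, the loop in question boils down to the following pattern
(a simplified sketch of fix_size_alloc_test() from lib/test_vmalloc.c
with the relevant module parameters inlined, not the verbatim code):

	static int fix_size_alloc_test(void)
	{
		void *ptr;
		int i;

		/*
		 * Allocate and immediately free the same size on every
		 * iteration. The just-freed pages sit on the per-CPU
		 * (PCP) lists, so later iterations are largely served
		 * from there instead of from the buddy allocator.
		 */
		for (i = 0; i < test_loop_count; i++) {
			ptr = vmalloc(nr_pages * PAGE_SIZE);
			if (!ptr)
				return -1;

			*((__u8 *)ptr) = 0;
			vfree(ptr);
		}

		return 0;
	}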
> > > BOX: AMD Milan, 256 CPUs, 512GB of memory
> > >
> > > # default 16K alloc
> > > [   15.823704] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 955334 usec
> > > [   17.751685] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 1158739 usec
> > > [   19.443759] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 1016522 usec
> > > [   21.035701] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 911381 usec
> > > [   22.727688] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 987286 usec
> > > [   24.199694] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 955112 usec
> > > [   25.755675] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 926393 usec
> > > [   27.355670] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 937875 usec
> > > [   28.979671] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 1006985 usec
> > > [   30.531674] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 941088 usec
> > >
> > > # the patch 16K alloc
> > > [   44.343380] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2296849 usec
> > > [   47.171290] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2014678 usec
> > > [   50.007258] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2094184 usec
> > > [   52.651141] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 1953046 usec
> > > [   55.455089] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2209423 usec
> > > [   57.943153] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 1941747 usec
> > > [   60.799043] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2038504 usec
> > > [   63.299007] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 1788588 usec
> > > [   65.843011] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2137055 usec
> > > [   68.647031] Summary: fix_size_alloc_test passed: 1 failed: 0 xfailed: 0 repeat: 1 loops: 1000000 avg: 2193022 usec
> > >
> > > 2X slower.
> > >
> > > perf-cycles, same test but on 64 CPUs:
> > >
> > > +   97.02%     0.13%  [test_vmalloc]  [k] fix_size_alloc_test
> > > -   82.11%    82.10%  [kernel]        [k] native_queued_spin_lock_slowpath
> > >      26.19% ret_from_fork_asm
> > >         ret_from_fork
> > >       - kthread
> > >          - 25.96% test_func
> > >             - fix_size_alloc_test
> > >                - 23.49% __vmalloc_node_noprof
> > >                   - __vmalloc_node_range_noprof
> > >                      - 54.70% alloc_pages_noprof
> > >                           alloc_pages_mpol
> > >                           __alloc_frozen_pages_noprof
> > >                           get_page_from_freelist
> > >                           __rmqueue_pcplist
> > >                      - 5.58% __get_vm_area_node
> > >                           alloc_vmap_area
> > >                - 20.54% vfree.part.0
> > >                   - 20.43% __free_frozen_pages
> > >                        free_frozen_page_commit
> > >                        free_pcppages_bulk
> > >                        _raw_spin_lock_irqsave
> > >                        native_queued_spin_lock_slowpath
> > >       - 0.77% worker_thread
> > >          - process_one_work
> > >             - 0.76% vmstat_update
> > >                  refresh_cpu_vm_stats
> > >                  decay_pcp_high
> > >                  free_pcppages_bulk
> > >                  _raw_spin_lock_irqsave
> > >                  native_queued_spin_lock_slowpath
> > > +   76.57%     0.16%  [kernel]        [k] _raw_spin_lock_irqsave
> > > +   71.62%     0.00%  [kernel]        [k] __vmalloc_node_noprof
> > > +   71.61%     0.58%  [kernel]        [k] __vmalloc_node_range_noprof
> > > +   62.35%     0.06%  [kernel]        [k] alloc_pages_mpol
> > > +   62.27%     0.17%  [kernel]        [k] __alloc_frozen_pages_noprof
> > > +   62.20%     0.02%  [kernel]        [k] alloc_pages_noprof
> > > +   62.10%     0.05%  [kernel]        [k] get_page_from_freelist
> > > +   55.63%     0.19%  [kernel]        [k] __rmqueue_pcplist
> > > +   32.11%     0.00%  [kernel]        [k] ret_from_fork_asm
> > > +   32.11%     0.00%  [kernel]        [k] ret_from_fork
> > > +   32.11%     0.00%  [kernel]        [k] kthread
> > >
> > > I would say the bottleneck is the page allocator. It seems high-order
> > > allocations are not good for it.
>
> Ah, also just took a closer look at this. I realize that you also did
> 16k allocations (which is at most order-2), so it may not be a good
> representation of high-order allocations either.
>
I agree. But then we should not optimize the "small" orders but rather
focus on the highest ones, because of the double degradation. I assume a
stress-ng fork test would also notice this.
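For the record, the kind of allocation loop we are discussing looks
roughly like this (my simplified sketch of the idea, not the patch
verbatim; the helper name is made up):

	/*
	 * Fill @pages with @nr_pages order-0 pages, requesting the
	 * largest order the buddy allocator can deliver and splitting
	 * the result, instead of issuing one bulk order-0 request.
	 */
	static unsigned int high_order_alloc_sketch(gfp_t gfp, int nid,
			unsigned int nr_pages, struct page **pages)
	{
		unsigned int nr_allocated = 0;

		while (nr_allocated < nr_pages) {
			struct page *page;
			unsigned int i, order;

			/* Largest order that does not overshoot the request. */
			order = min_t(unsigned int,
				      ilog2(nr_pages - nr_allocated),
				      MAX_PAGE_ORDER);

			/* Walk down the orders until the buddy allocator delivers. */
			while (!(page = alloc_pages_node(nid, gfp | __GFP_NOWARN, order))) {
				if (!order)
					break;
				order--;
			}

			if (!page)
				break;

			/* vmalloc tracks individual order-0 pages. */
			split_page(page, order);
			for (i = 0; i < (1U << order); i++)
				pages[nr_allocated++] = page + i;
		}

		return nr_allocated;
	}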
> Plus that falls into the regression range I found that I detailed in
> response to Matthew elsewhere (I've copy-pasted it here for reference):
>
> I ended up finding that allocating sizes <= 20k had noticeable
> regressions, while [20k, 90k] was approximately the same, and >= 90k had
> improvements (getting more and more noticeable as size grows in
> magnitude).
>
Yes, I did order-2 allocations:

# default
+   35.87%     4.24%  [kernel]  [k] alloc_pages_bulk_noprof
+   31.94%     0.88%  [kernel]  [k] vfree.part.0
-   27.38%    27.36%  [kernel]  [k] clear_page_rep
     27.36% ret_from_fork_asm
        ret_from_fork
        kthread
        test_func
        fix_size_alloc_test
        __vmalloc_node_noprof
        __vmalloc_node_range_noprof
        alloc_pages_bulk_noprof
        clear_page_rep

# patch
+   53.32%     1.12%  [kernel]  [k] get_page_from_freelist
+   49.41%     0.71%  [kernel]  [k] prep_new_page
-   48.70%    48.64%  [kernel]  [k] clear_page_rep
     48.64% ret_from_fork_asm
        ret_from_fork
        kthread
        test_func
        fix_size_alloc_test
        __vmalloc_node_noprof
        __vmalloc_node_range_noprof
        alloc_pages_noprof
        alloc_pages_mpol
        __alloc_frozen_pages_noprof
        get_page_from_freelist
        prep_new_page
        clear_page_rep

I noticed it is because of clear_page_rep(), which with the patch consumes
double the cycles. Both versions should mostly go over the pcp-cache; as
far as I remember, order-2 is allowed to be cached. I wonder why the patch
gives 2x the cycles to clear_page_rep()...

--
Uladzislau Rezki