From: Yafang Shao <laoar.shao@gmail.com>
Date: Mon, 29 Jul 2024 13:45:44 +0800
Subject: Re: [PATCH v2 3/3] mm/page_alloc: Introduce a new sysctl knob vm.pcp_batch_scale_max
To: "Huang, Ying"
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, linux-mm@kvack.org, Matthew Wilcox, David Rientjes

On Mon, Jul 29, 2024 at 1:16 PM Huang, Ying wrote:
>
> Yafang Shao writes:
>
> > On Mon, Jul 29, 2024 at 11:22 AM Huang, Ying wrote:
> >>
> >> Hi, Yafang,
> >>
> >> Yafang Shao writes:
> >>
> >> > During my recent work to resolve latency spikes caused by zone->lock
> >> > contention[0], I found that CONFIG_PCP_BATCH_SCALE_MAX is difficult to use
> >> > in practice.
> >>
> >> As we discussed before [1], I still find the description of the zone->lock
> >> contention confusing. How about changing it to something like:
> >
> > Sure, I will change it.
> >
> >>
> >> A larger page allocation/freeing batch number may cause the code holding
> >> zone->lock to run longer. If zone->lock is heavily contended at the same
> >> time, latency spikes may occur even for casual page allocation/freeing.
> >> Although reducing the batch number cannot make zone->lock contention
> >> lighter, it can reduce the latency spikes effectively.
> >>
> >> [1] https://lore.kernel.org/linux-mm/87ttgv8hlz.fsf@yhuang6-desk2.ccr.corp.intel.com/
> >>
> >> > To demonstrate this, I wrote a Python script:
> >> >
> >> >     import mmap
> >> >
> >> >     size = 6 * 1024**3
> >> >
> >> >     while True:
> >> >         mm = mmap.mmap(-1, size)
> >> >         mm[:] = b'\xff' * size
> >> >         mm.close()
> >> >
> >> > Run this script 10 times in parallel and measure the allocation latency by
> >> > tracing the duration of rmqueue_bulk() with the BCC tool funclatency[1]:
> >> >
> >> >     funclatency -T -i 600 rmqueue_bulk
> >> >
> >> > Here are the results for both AMD and Intel CPUs.
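
(Before the numbers, a note for anyone reproducing this: the patch description
quotes only the allocation loop. A minimal sketch of a driver that launches the
ten parallel instances could look like the following; the worker body is the
quoted loop, while the process count and the names used here are illustrative
rather than part of the original script.)

    import mmap
    import multiprocessing

    SIZE = 6 * 1024**3      # 6 GiB per worker, same as the quoted script
    NUM_WORKERS = 10        # "run this script 10 times in parallel"

    def worker():
        # Same loop as above: map anonymous memory, touch every page, unmap.
        while True:
            mm = mmap.mmap(-1, SIZE)
            mm[:] = b'\xff' * SIZE
            mm.close()

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=worker) for _ in range(NUM_WORKERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
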
> >> >
> >> > AMD EPYC 7W83 64-Core Processor, single NUMA node, KVM virtual server
> >> > ======================================================================
> >> >
> >> > - Default value of 5
> >> >
> >> >      nsecs               : count     distribution
> >> >          0 -> 1          : 0        |                                        |
> >> >          2 -> 3          : 0        |                                        |
> >> >          4 -> 7          : 0        |                                        |
> >> >          8 -> 15         : 0        |                                        |
> >> >         16 -> 31         : 0        |                                        |
> >> >         32 -> 63         : 0        |                                        |
> >> >         64 -> 127        : 0        |                                        |
> >> >        128 -> 255        : 0        |                                        |
> >> >        256 -> 511        : 0        |                                        |
> >> >        512 -> 1023       : 12       |                                        |
> >> >       1024 -> 2047       : 9116     |                                        |
> >> >       2048 -> 4095       : 2004     |                                        |
> >> >       4096 -> 8191       : 2497     |                                        |
> >> >       8192 -> 16383      : 2127     |                                        |
> >> >      16384 -> 32767      : 2483     |                                        |
> >> >      32768 -> 65535      : 10102    |                                        |
> >> >      65536 -> 131071     : 212730   |*******************                     |
> >> >     131072 -> 262143     : 314692   |*****************************           |
> >> >     262144 -> 524287     : 430058   |****************************************|
> >> >     524288 -> 1048575    : 224032   |********************                    |
> >> >    1048576 -> 2097151    : 73567    |******                                  |
> >> >    2097152 -> 4194303    : 17079    |*                                       |
> >> >    4194304 -> 8388607    : 3900     |                                        |
> >> >    8388608 -> 16777215   : 750      |                                        |
> >> >   16777216 -> 33554431   : 88       |                                        |
> >> >   33554432 -> 67108863   : 2        |                                        |
> >> >
> >> > avg = 449775 nsecs, total: 587066511229 nsecs, count: 1305242
> >> >
> >> > The avg alloc latency can be 449us, and the max latency can be higher
> >> > than 30ms.
> >> >
> >> > - Value set to 0
> >> >
> >> >      nsecs               : count     distribution
> >> >          0 -> 1          : 0        |                                        |
> >> >          2 -> 3          : 0        |                                        |
> >> >          4 -> 7          : 0        |                                        |
> >> >          8 -> 15         : 0        |                                        |
> >> >         16 -> 31         : 0        |                                        |
> >> >         32 -> 63         : 0        |                                        |
> >> >         64 -> 127        : 0        |                                        |
> >> >        128 -> 255        : 0        |                                        |
> >> >        256 -> 511        : 0        |                                        |
> >> >        512 -> 1023       : 92       |                                        |
> >> >       1024 -> 2047       : 8594     |                                        |
> >> >       2048 -> 4095       : 2042818  |******                                  |
> >> >       4096 -> 8191       : 8737624  |**************************              |
> >> >       8192 -> 16383      : 13147872 |****************************************|
> >> >      16384 -> 32767      : 8799951  |**************************              |
> >> >      32768 -> 65535      : 2879715  |********                                |
> >> >      65536 -> 131071     : 659600   |**                                      |
> >> >     131072 -> 262143     : 204004   |                                        |
> >> >     262144 -> 524287     : 78246    |                                        |
> >> >     524288 -> 1048575    : 30800    |                                        |
> >> >    1048576 -> 2097151    : 12251    |                                        |
> >> >    2097152 -> 4194303    : 2950     |                                        |
> >> >    4194304 -> 8388607    : 78       |                                        |
> >> >
> >> > avg = 19359 nsecs, total: 708638369918 nsecs, count: 36604636
> >> >
> >> > The avg was reduced significantly to 19us, and the max latency is reduced
> >> > to less than 8ms.
> >> >
> >> > - Conclusion
> >> >
> >> > On this AMD CPU, reducing vm.pcp_batch_scale_max significantly helps reduce
> >> > latency. Latency-sensitive applications will benefit from this tuning.
> >> >
> >> > However, I don't have access to other types of AMD CPUs, so I was unable to
> >> > test it on different AMD models.
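
(A note on applying the tuning above: it amounts to writing the new scale
factor to the proposed knob. The sketch below assumes a kernel with this patch
applied, exposing the sysctl as vm.pcp_batch_scale_max; the helper name is
illustrative.)

    # Hypothetical helper: requires a kernel carrying this patch, so that the
    # knob is exposed at /proc/sys/vm/pcp_batch_scale_max (write needs root).
    KNOB = "/proc/sys/vm/pcp_batch_scale_max"

    def set_pcp_batch_scale_max(value: int) -> None:
        with open(KNOB, "w") as f:
            f.write(str(value))

    set_pcp_batch_scale_max(0)   # favor low latency over batching throughput
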
> >> >
> >> > Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz, two NUMA nodes
> >> > ============================================================
> >> >
> >> > - Default value of 5
> >> >
> >> >      nsecs               : count     distribution
> >> >          0 -> 1          : 0        |                                        |
> >> >          2 -> 3          : 0        |                                        |
> >> >          4 -> 7          : 0        |                                        |
> >> >          8 -> 15         : 0        |                                        |
> >> >         16 -> 31         : 0        |                                        |
> >> >         32 -> 63         : 0        |                                        |
> >> >         64 -> 127        : 0        |                                        |
> >> >        128 -> 255        : 0        |                                        |
> >> >        256 -> 511        : 0        |                                        |
> >> >        512 -> 1023       : 2419     |                                        |
> >> >       1024 -> 2047       : 34499    |*                                       |
> >> >       2048 -> 4095       : 4272     |                                        |
> >> >       4096 -> 8191       : 9035     |                                        |
> >> >       8192 -> 16383      : 4374     |                                        |
> >> >      16384 -> 32767      : 2963     |                                        |
> >> >      32768 -> 65535      : 6407     |                                        |
> >> >      65536 -> 131071     : 884806   |****************************************|
> >> >     131072 -> 262143     : 145931   |******                                  |
> >> >     262144 -> 524287     : 13406    |                                        |
> >> >     524288 -> 1048575    : 1874     |                                        |
> >> >    1048576 -> 2097151    : 249      |                                        |
> >> >    2097152 -> 4194303    : 28       |                                        |
> >> >
> >> > avg = 96173 nsecs, total: 106778157925 nsecs, count: 1110263
> >> >
> >> > - Conclusion
> >> >
> >> > This Intel CPU works fine with the default setting.
> >> >
> >> > Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz, single NUMA node
> >> > ==============================================================
> >> >
> >> > Using the cpuset cgroup, we can restrict the test script to run on NUMA
> >> > node 0 only.
> >> >
> >> > - Default value of 5
> >> >
> >> >      nsecs               : count     distribution
> >> >          0 -> 1          : 0        |                                        |
> >> >          2 -> 3          : 0        |                                        |
> >> >          4 -> 7          : 0        |                                        |
> >> >          8 -> 15         : 0        |                                        |
> >> >         16 -> 31         : 0        |                                        |
> >> >         32 -> 63         : 0        |                                        |
> >> >         64 -> 127        : 0        |                                        |
> >> >        128 -> 255        : 0        |                                        |
> >> >        256 -> 511        : 46       |                                        |
> >> >        512 -> 1023       : 695      |                                        |
> >> >       1024 -> 2047       : 19950    |*                                       |
> >> >       2048 -> 4095       : 1788     |                                        |
> >> >       4096 -> 8191       : 3392     |                                        |
> >> >       8192 -> 16383      : 2569     |                                        |
> >> >      16384 -> 32767      : 2619     |                                        |
> >> >      32768 -> 65535      : 3809     |                                        |
> >> >      65536 -> 131071     : 616182   |****************************************|
> >> >     131072 -> 262143     : 295587   |*******************                     |
> >> >     262144 -> 524287     : 75357    |****                                    |
> >> >     524288 -> 1048575    : 15471    |*                                       |
> >> >    1048576 -> 2097151    : 2939     |                                        |
> >> >    2097152 -> 4194303    : 243      |                                        |
> >> >    4194304 -> 8388607    : 3        |                                        |
> >> >
> >> > avg = 144410 nsecs, total: 150281196195 nsecs, count: 1040651
> >> >
> >> > The zone->lock contention becomes severe when there is only a single NUMA
> >> > node. The average latency is approximately 144us, with the maximum
> >> > latency exceeding 4ms.
> >> >
> >> > - Value set to 0
> >> >
> >> >      nsecs               : count     distribution
> >> >          0 -> 1          : 0        |                                        |
> >> >          2 -> 3          : 0        |                                        |
> >> >          4 -> 7          : 0        |                                        |
> >> >          8 -> 15         : 0        |                                        |
> >> >         16 -> 31         : 0        |                                        |
> >> >         32 -> 63         : 0        |                                        |
> >> >         64 -> 127        : 0        |                                        |
> >> >        128 -> 255        : 0        |                                        |
> >> >        256 -> 511        : 24       |                                        |
> >> >        512 -> 1023       : 2686     |                                        |
> >> >       1024 -> 2047       : 10246    |                                        |
> >> >       2048 -> 4095       : 4061529  |*********                               |
> >> >       4096 -> 8191       : 16894971 |****************************************|
> >> >       8192 -> 16383      : 6279310  |**************                          |
> >> >      16384 -> 32767      : 1658240  |***                                     |
> >> >      32768 -> 65535      : 445760   |*                                       |
> >> >      65536 -> 131071     : 110817   |                                        |
> >> >     131072 -> 262143     : 20279    |                                        |
> >> >     262144 -> 524287     : 4176     |                                        |
> >> >     524288 -> 1048575    : 436      |                                        |
> >> >    1048576 -> 2097151    : 8        |                                        |
> >> >    2097152 -> 4194303    : 2        |                                        |
> >> >
> >> > avg = 8401 nsecs, total: 247739809022 nsecs, count: 29488508
> >> >
> >> > After setting it to 0, the avg latency is reduced to around 8us, and the
> >> > max latency is less than 4ms.
> >> >
> >> > - Conclusion
> >> >
> >> > On this Intel CPU, this tuning doesn't help much. Latency-sensitive
> >> > applications work well with the default setting.
> >> >
> >> > It is worth noting that all the above data were collected on the upstream
> >> > kernel.
> >> >
> >> > Why introduce a sysctl knob?
> >> > ============================
> >> >
> >> > From the above data, it's clear that different CPU types have varying
> >> > allocation latencies with respect to zone->lock contention. Typically,
> >> > people don't release individual kernel packages for each type of x86_64 CPU.
> >> >
> >> > Furthermore, for latency-insensitive applications, we can keep the default
> >> > setting for better throughput. In our production environment, we set this
> >> > value to 0 for applications running on Kubernetes servers while keeping it
> >> > at the default value of 5 for other applications like big data. It's not
> >> > common to release individual kernel packages for each application.
> >>
> >> Thanks for the detailed performance data!
> >>
> >> Is there any downside observed in your environment from setting
> >> CONFIG_PCP_BATCH_SCALE_MAX to 0? If not, I suggest using 0 as the default
> >> for CONFIG_PCP_BATCH_SCALE_MAX, because we have clear evidence that
> >> CONFIG_PCP_BATCH_SCALE_MAX hurts latency for some workloads. After that,
> >> if someone finds that some other workload needs a larger
> >> CONFIG_PCP_BATCH_SCALE_MAX, we can make it tunable dynamically.
> >>
> >
> > The decision doesn't rest with us, the kernel team at our company.
> > It's made by the system administrators who manage a large number of
> > servers. The latency spikes only occur on the Kubernetes (k8s)
> > servers, not in other environments like big data servers. We have
> > informed other system administrators, such as those managing the big
> > data servers, about the latency spike issues, but they are unwilling
> > to make the change.
> >
> > No one wants to make changes unless there is evidence showing that the
> > old settings will negatively impact them. However, as you know,
> > latency is not a critical concern for big data; throughput is more
> > important. If we keep the current settings, we will have to release
> > different kernel packages for different environments, which is a
> > significant burden for us.
>
> Totally understand your requirements.
> And I think this is better resolved in your downstream kernel. If there is
> clear evidence that a small batch number hurts throughput for some
> workloads, we can make the change in the upstream kernel.
>

Please don't make this more complicated. We are at an impasse. The key issue
here is that the upstream kernel has a default value of 5, not 0.

If you can change it to 0, we can persuade our users to follow the upstream
change. They currently set it to 5 not because you, the author, chose this
value, but because it is the default in Linus's tree. Since it's in Linus's
tree, kernel developers worldwide support it. It's not just your decision as
the author; the entire community stands behind this default.

If, in the future, we find that the value of 0 is not suitable, you'll tell
us, "It is an issue in your downstream kernel, not in the upstream kernel,
so we won't accept it." PANIC.

--
Regards
Yafang