Date: Mon, 4 Dec 2023 08:07:07 -1000
From: Tejun Heo
To: Naohiro Aota
Cc: Lai Jiangshan, linux-kernel@vger.kernel.org, linux-btrfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, cgroups@vger.kernel.org, coreteam@netfilter.org,
    dm-devel@lists.linux.dev, dri-devel@lists.freedesktop.org,
    gfs2@lists.linux.dev, intel-gfx@lists.freedesktop.org, iommu@lists.linux.dev,
    linux-arm-kernel@lists.infradead.org, linux-bcachefs@vger.kernel.org,
    linux-block@vger.kernel.org, linux-cachefs@redhat.com,
    linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
    linux-erofs@lists.ozlabs.org, linux-f2fs-devel@lists.sourceforge.net,
    linux-fscrypt@vger.kernel.org, linux-media@vger.kernel.org,
    linux-mediatek@lists.infradead.org, linux-mm@kvack.org,
    linux-mmc@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-raid@vger.kernel.org,
    linux-rdma@vger.kernel.org, linux-remoteproc@vger.kernel.org,
    linux-scsi@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-usb@vger.kernel.org, linux-wireless@vger.kernel.org,
    linux-xfs@vger.kernel.org, nbd@other.debian.org, netdev@vger.kernel.org,
    ntb@lists.linux.dev, open-iscsi@googlegroups.com, oss-drivers@corigine.com,
    platform-driver-x86@vger.kernel.org, samba-technical@lists.samba.org,
    target-devel@vger.kernel.org, virtualization@lists.linux.dev,
    wireguard@lists.zx2c4.com
Subject: Re: Performance drop due to alloc_workqueue() misuse and recent change
Hello,

On Mon, Dec 04, 2023 at 04:03:47PM +0000, Naohiro Aota wrote:
> Recently, commit 636b927eba5b ("workqueue: Make unbound workqueues to use
> per-cpu pool_workqueues") changed WQ_UNBOUND workqueues' behavior. It
> changed the meaning of alloc_workqueue()'s max_active from an upper limit
> imposed per NUMA node to a limit per CPU. As a result, a massive number of
> workers can be running at the same time, especially if the workqueue user
> thinks max_active is a global limit.
>
> Actually, the documentation already described max_active as a per-CPU
> limit before the commit. However, several callers seem to misuse
> max_active, perhaps thinking it is a global limit. The commit is an
> unexpected behavior change for them.

Right, and the behavior had been like that for a very long time, and there
was no other way to achieve a reasonable level of concurrency, so the
current situation is expected.

> For example, these callers set max_active = num_online_cpus(), which is a
> suspicious value for a per-CPU limit. This config means we can have
> nr_cpu * nr_cpu active tasks running at the same time.

Yeah, that sounds like a good indicator.
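For concreteness, here is a minimal sketch of the pattern in question (the
"example_wq" name is hypothetical, not taken from any of the callers quoted
below):

    /*
     * The caller presumably intends "at most one work item per CPU,
     * system-wide". Since commit 636b927eba5b, max_active is enforced
     * per CPU, so this actually permits num_online_cpus() work items
     * on each CPU, i.e. up to num_online_cpus()^2 in total.
     */
    struct workqueue_struct *example_wq =
            alloc_workqueue("example_wq", WQ_UNBOUND | WQ_HIGHPRI,
                            num_online_cpus());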
> fs/f2fs/data.c: sbi->post_read_wq = alloc_workqueue("f2fs_post_read_wq",
> fs/f2fs/data.c-                         WQ_UNBOUND | WQ_HIGHPRI,
> fs/f2fs/data.c-                         num_online_cpus());
>
> fs/crypto/crypto.c: fscrypt_read_workqueue = alloc_workqueue("fscrypt_read_queue",
> fs/crypto/crypto.c-                         WQ_UNBOUND | WQ_HIGHPRI,
> fs/crypto/crypto.c-                         num_online_cpus());
>
> fs/verity/verify.c: fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
> fs/verity/verify.c-                         WQ_HIGHPRI,
> fs/verity/verify.c-                         num_online_cpus());
>
> drivers/crypto/hisilicon/qm.c: qm->wq = alloc_workqueue("%s", WQ_HIGHPRI | WQ_MEM_RECLAIM |
> drivers/crypto/hisilicon/qm.c-          WQ_UNBOUND, num_online_cpus(),
> drivers/crypto/hisilicon/qm.c-          pci_name(qm->pdev));
>
> block/blk-crypto-fallback.c: blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
> block/blk-crypto-fallback.c-            WQ_UNBOUND | WQ_HIGHPRI |
> block/blk-crypto-fallback.c-            WQ_MEM_RECLAIM, num_online_cpus());
>
> drivers/md/dm-crypt.c: cc->crypt_queue = alloc_workqueue("kcryptd/%s",
> drivers/md/dm-crypt.c-                  WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
> drivers/md/dm-crypt.c-                  num_online_cpus(), devname);

Most of these work items are CPU bound but not completely so. e.g.
kcryptd_crypt_write_continue() does wait_for_completion(), so setting
max_active to 1 likely isn't what they want either. They mostly want some
reasonable system-wide concurrency limit w.r.t. the CPU count, while keeping
some level of flexibility in terms of task placement.

The previous max_active wasn't great for this because its meaning changed
depending on the number of nodes. Now, the meaning doesn't change, but it's
not really useful for the above purpose. It's only useful for avoiding
melting the system completely.

One way to go about it is to declare that concurrency level management for
unbound workqueues is on the users, but that seems not ideal given that many
use cases would want it anyway. Let me think it over, but I think the right
way to go about it is going the other direction - i.e. making max_active
apply to the whole system regardless of the number of nodes / CCXs /
whatever.
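To put numbers on those three interpretations, here is a small userspace
sketch (plain C, not kernel code; the CPU and node counts mirror the 96-CPU
machine in the benchmark quoted below) of the effective system-wide
concurrency a single max_active value yields under each reading:

    #include <stdio.h>

    int main(void)
    {
            /* btrfs default: max_active = min(nr_cpus + 2, 8) */
            int nr_cpus = 96, nr_nodes = 2;
            int max_active = nr_cpus + 2 < 8 ? nr_cpus + 2 : 8;

            printf("per NUMA node (old):          %3d\n", max_active * nr_nodes); /* 16  */
            printf("per CPU (current):            %3d\n", max_active * nr_cpus);  /* 768 */
            printf("global (what callers expect): %3d\n", max_active);            /* 8   */
            return 0;
    }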
> Furthermore, the change affects performance in a certain case.
>
> Btrfs creates several WQ_UNBOUND workqueues with a default max_active =
> min(NRCPUS + 2, 8). As my machine has 96 CPUs with NUMA disabled, this
> max_active config allows running over 700 active works. Before the commit,
> it was limited to 8 if NUMA is disabled, or to 16 with 2 NUMA nodes.
>
> I reverted the workqueue code back to before the commit, and I ran the
> following fio command on RAID0 btrfs on 6 SSDs.
>
> fio --group_reporting --eta=always --eta-interval=30s --eta-newline=30s \
>     --rw=write --fallocate=none \
>     --direct=1 --ioengine=libaio --iodepth=32 \
>     --filesize=100G \
>     --blocksize=64k \
>     --time_based --runtime=300s \
>     --end_fsync=1 \
>     --directory=${MNT} \
>     --name=writer --numjobs=32
>
> By changing workqueue's max_active, the result varies.
>
> - wq max_active=8 (intended limit by btrfs?)
>   WRITE: bw=2495MiB/s (2616MB/s), 2495MiB/s-2495MiB/s (2616MB/s-2616MB/s), io=753GiB (808GB), run=308953-308953msec
> - wq max_active=16 (actual limit on a 2 NUMA node setup)
>   WRITE: bw=1736MiB/s (1820MB/s), 1736MiB/s-1736MiB/s (1820MB/s-1820MB/s), io=670GiB (720GB), run=395532-395532msec
> - wq max_active=768 (simulating the current limit)
>   WRITE: bw=1276MiB/s (1338MB/s), 1276MiB/s-1276MiB/s (1338MB/s-1338MB/s), io=375GiB (403GB), run=300984-300984msec
>
> The current behavior is 27% slower than the previous limit (max_active=16),
> and 50% slower than the intended limit (max_active=8). The performance drop
> might be due to contention among the btrfs-endio-write works: over 700
> kworker instances were created, and about 100 works were in the 'D' state
> competing for a lock.
>
> More specifically, I tested the same workload at the commit and at its
> parent.
>
> - At commit 636b927eba5b ("workqueue: Make unbound workqueues to use per-cpu pool_workqueues")
>   WRITE: bw=1191MiB/s (1249MB/s), 1191MiB/s-1191MiB/s (1249MB/s-1249MB/s), io=350GiB (376GB), run=300714-300714msec
> - At the previous commit 4cbfd3de73 ("workqueue: Call wq_update_unbound_numa() on all CPUs in NUMA node on CPU hotplug")
>   WRITE: bw=1747MiB/s (1832MB/s), 1747MiB/s-1747MiB/s (1832MB/s-1832MB/s), io=748GiB (803GB), run=438134-438134msec
>
> So, the commit causes a 31.8% performance drop.
>
> In summary, several callers misuse max_active, assuming it is a global
> limit, and the recent commit introduced a huge performance drop in some
> cases. We need to review alloc_workqueue() usage to check whether each
> max_active setting is proper.

Thanks a lot for the report and analysis. Much appreciated. I think it's a
lot more reasonable to assume that max_active is global for unbound
workqueues. The current workqueue behavior is neither intuitive nor very
useful, so I'll try to find something more reasonable.

Thanks.

--
tejun