Date: Thu, 24 Apr 2025 16:12:49 +0200
From: Christoph Hellwig <hch@lst.de>
To: Caleb Sander Mateos
Cc: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Andrew Morton, Kanchan Joshi, linux-nvme@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 3/3] nvme/pci: make PRP list DMA pools per-NUMA-node
Message-ID: <20250424141249.GA18970@lst.de>
References: <20250422220952.2111584-1-csander@purestorage.com>
	<20250422220952.2111584-4-csander@purestorage.com>
In-Reply-To: <20250422220952.2111584-4-csander@purestorage.com>

On Tue, Apr 22, 2025 at 04:09:52PM -0600, Caleb Sander Mateos wrote:
> NVMe commands with more than 4 KB of data allocate PRP list pages from
> the per-nvme_device dma_pool prp_page_pool or prp_small_pool.

That's not actually true.  We can transfer all of the MDTS without a
single pool allocation when using SGLs.

> Each call
> to dma_pool_alloc() and dma_pool_free() takes the per-dma_pool spinlock.
> These device-global spinlocks are a significant source of contention
> when many CPUs are submitting to the same NVMe devices. On a workload
> issuing 32 KB reads from 16 CPUs (8 hypertwin pairs) across 2 NUMA nodes
> to 23 NVMe devices, we observed 2.4% of CPU time spent in
> _raw_spin_lock_irqsave called from dma_pool_alloc and dma_pool_free.
>
> Ideally, the dma_pools would be per-hctx to minimize
> contention. But that could impose considerable resource costs in a
> system with many NVMe devices and CPUs.

Should we try to simply do a slab allocation first and only allocate
from the dmapool when that fails?  That should give you all the
scalability of the slab allocator with very little downside.
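Rough, untested sketch of what I mean below.  The helper names and the
from_pool bookkeeping are made up for illustration only, this is not
the actual nvme-pci code:

#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/slab.h>

struct nvme_prp_list {
	void		*vaddr;
	dma_addr_t	dma;
	bool		from_pool;	/* pick the matching free path */
};

static int nvme_prp_list_alloc(struct device *dev, struct dma_pool *pool,
			       size_t size, gfp_t gfp,
			       struct nvme_prp_list *prp)
{
	/*
	 * Fast path: kmalloc scales through the per-CPU slab caches and
	 * takes no device-global lock.  A power-of-two size is naturally
	 * aligned, which should satisfy the PRP list alignment rules.
	 */
	prp->vaddr = kmalloc(size, gfp);
	if (prp->vaddr) {
		prp->dma = dma_map_single(dev, prp->vaddr, size,
					  DMA_TO_DEVICE);
		if (!dma_mapping_error(dev, prp->dma)) {
			prp->from_pool = false;
			return 0;
		}
		kfree(prp->vaddr);
	}

	/* Slow path: fall back to the existing coherent dma_pool. */
	prp->vaddr = dma_pool_alloc(pool, gfp, &prp->dma);
	if (!prp->vaddr)
		return -ENOMEM;
	prp->from_pool = true;
	return 0;
}

static void nvme_prp_list_free(struct device *dev, struct dma_pool *pool,
			       size_t size, struct nvme_prp_list *prp)
{
	if (prp->from_pool) {
		dma_pool_free(pool, prp->vaddr, prp->dma);
	} else {
		dma_unmap_single(dev, prp->dma, size, DMA_TO_DEVICE);
		kfree(prp->vaddr);
	}
}

The slab case is a streaming mapping, so after filling in the PRP
entries the submission path would also need a
dma_sync_single_for_device() before ringing the doorbell (a no-op on
cache-coherent systems), and the free path has to remember which
allocator was used.  That bookkeeping is the main cost of the approach.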