Date: Wed, 15 Jul 2015 05:07:08 -0700
From: Christoph Hellwig
To: ksummit-discuss@lists.linuxfoundation.org
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [Ksummit-discuss] [TECH TOPIC] IRQ affinity
Message-ID: <20150715120708.GA24534@infradead.org>

Many years ago we decided to move the setting of IRQ-to-core affinities to
userspace with the irqbalance daemon.

These days we have systems with lots of MSI-X vectors, and we have hardware
and subsystem support for per-CPU I/O queues in the block layer, the RDMA
subsystem and probably the networking stack (I'm not too familiar with the
recent developments there).

It would really help out-of-the-box performance and the user experience if
we could allow such subsystems to bind their interrupt vectors to the node
that the queue is configured on.

I'd like to discuss whether the rationale for moving the IRQ affinity
setting fully to userspace is still correct in today's world, and what
pitfalls we'll have to learn from in irqbalance and the old in-kernel
affinity code.
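For reference, a minimal sketch of what a driver can already do today with
the existing affinity-hint interface; the helper name and the queue/node
arguments are illustrative, not something from this mail, and the point of
the proposal is to go beyond a mere hint:

	#include <linux/interrupt.h>
	#include <linux/topology.h>

	/*
	 * Illustrative only: publish a node-local affinity hint for one
	 * per-queue MSI-X vector.  The caller is assumed to know the
	 * Linux IRQ number of the vector and the NUMA node the queue
	 * lives on.
	 */
	static void example_set_queue_irq_hint(unsigned int irq, int node)
	{
		/*
		 * irq_set_affinity_hint() only exposes the mask via
		 * /proc/irq/<irq>/affinity_hint; irqbalance may or may
		 * not follow it.  Binding the vector in the kernel would
		 * apply such a node-local mask directly.
		 */
		irq_set_affinity_hint(irq, cpumask_of_node(node));
	}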