Subject: Re: [PATCH v2 3/5] mm: enlarge NUMA counters threshold size
From: kemi
Date: Fri, 22 Dec 2017 10:06:42 +0800
To: Christopher Lameter
Cc: Michal Hocko, Greg Kroah-Hartman, Andrew Morton, Vlastimil Babka,
 Mel Gorman, Johannes Weiner, YASUAKI ISHIMATSU, Andrey Ryabinin,
 Nikolay Borisov, Pavel Tatashin, David Rientjes, Sebastian Andrzej Siewior,
 Dave, Andi Kleen, Tim Chen, Jesper Dangaard Brouer, Ying Huang, Aaron Lu,
 Aubrey Li, Linux MM, Linux Kernel

On 2017/12/22 01:10, Christopher Lameter wrote:
> On Thu, 21 Dec 2017, kemi wrote:
>
>> Some thinking about that:
>> a) the overhead due to cache bouncing caused by NUMA counter updates in
>> the fast path increases severely as the number of CPU cores grows
>> b) AFAIK, the typical usage scenario (or at least a similar one) that this
>> optimization benefits is a 10/40G NIC used in the high-speed data center
>> networks of cloud service providers.
>
> I think you are fighting a losing battle there. As evident from the timing
> constraints on packet processing at 10/40G, you will have a hard time
> processing the data if the packets are of regular Ethernet size. And we
> already have 100G NICs in operation here.

Not really. For 10/40G NICs, or even 100G, I admit that in production data
center networks DPDK is widely used rather than the kernel driver. That is
due to the slow page allocator and the long processing pipeline in the
network protocol stack. This state is not easy to change in a short time,
but if we can do something here to improve it a little, why not.

> We can try to get the performance as high as possible, but full-rate
> high-speed networking invariably must use offload mechanisms, and thus the
> statistics would only be available from the hardware devices that can do
> wire-speed processing.

I think you may be talking about SmartNICs (e.g. Open vSwitch offload plus
VF pass-through). Those are usually used in virtualization environments to
eliminate the overhead of device emulation and of packet processing in a
software virtual switch (OVS or a Linux bridge). What I have done in this
patch series is improve page allocator performance; that is also helpful in
an offload environment (at least for the guest kernel), IMHO.
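
To make the cache-bouncing point concrete, here is a minimal user-space
sketch of the threshold-batching idea this series is about. It is not the
kernel implementation: the names, the thread-local variable standing in for
per-CPU data, and the threshold value are all illustrative assumptions.

    #include <stdatomic.h>

    /* Assumed batch size; the real kernel threshold differs. */
    #define NUMA_STAT_THRESHOLD 32768

    /* Shared counter: its cache line is what bounces between cores. */
    static atomic_long global_numa_hit;

    /* Thread-local delta; stands in for the kernel's per-CPU diff. */
    static _Thread_local long local_numa_hit;

    static inline void inc_numa_hit(void)
    {
        /* Fast path: a plain store to CPU-local data, no bouncing. */
        if (++local_numa_hit >= NUMA_STAT_THRESHOLD) {
            /* Slow path: fold the batched delta into the shared counter. */
            atomic_fetch_add_explicit(&global_numa_hit, local_numa_hit,
                                      memory_order_relaxed);
            local_numa_hit = 0;
        }
    }

    /* Readers see a slightly stale value; unfolded deltas are the error. */
    static inline long read_numa_hit(void)
    {
        return atomic_load_explicit(&global_numa_hit, memory_order_relaxed);
    }

The larger the threshold, the less often the shared cache line is touched in
the allocation fast path, at the cost of a larger error bound in the reported
statistics; enlarging that error bound for the NUMA counters is exactly the
trade-off this patch makes.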