Subject: Re: [RFC V2 03/12] mm: Change generic FALLBACK zonelist creation process
From: David Nellans
Date: Tue, 31 Jan 2017 13:14:59 -0600
Message-ID: <9c951c50-3d75-2356-3f21-434ddca63f1b@nvidia.com>
In-Reply-To: <217e817e-2f91-91a5-1bef-16fb0cbacb63@intel.com>
References: <20170130033602.12275-1-khandual@linux.vnet.ibm.com> <20170130033602.12275-4-khandual@linux.vnet.ibm.com> <07bd439c-6270-b219-227b-4079d36a2788@intel.com> <434aa74c-e917-490e-85ab-8c67b1a82d95@linux.vnet.ibm.com> <79bfd849-8e6c-2f6d-0acf-4256a4137526@nvidia.com> <217e817e-2f91-91a5-1bef-16fb0cbacb63@intel.com>
To: Dave Hansen, John Hubbard, Anshuman Khandual, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: mhocko@suse.com, vbabka@suse.cz, mgorman@suse.de, minchan@kernel.org, aneesh.kumar@linux.vnet.ibm.com, bsingharora@gmail.com, srikar@linux.vnet.ibm.com, haren@linux.vnet.ibm.com, jglisse@redhat.com, dan.j.williams@intel.com

On 01/31/2017 12:04 PM, Dave Hansen wrote:
> On 01/30/2017 11:25 PM, John Hubbard wrote:
>> I also don't like having these policies hard-coded, and your 100x
>> example above helps clarify what can go wrong with it. It would be
>> nicer if, instead, we could better express the "distance" between
>> nodes (bandwidth, latency, relative to sysmem, perhaps), and let the
>> NUMA system figure out the Right Thing To Do.
>>
>> I realize that this is not quite possible with NUMA just yet, but I
>> wonder if that's a reasonable direction to go with this?
>
> In the end, I don't think the kernel can make the "right" decision very
> widely here.
>
> Intel's Xeon Phis have some high-bandwidth memory (MCDRAM) that
> evidently has a higher latency than DRAM. Given a plain malloc(), how
> is the kernel to know whether the memory will be used for AVX-512
> instructions that need lots of bandwidth, or for some random data
> structure that's latency-sensitive?
>
> In the end, I think all we can do is keep the kernel's existing default
> of "low latency to the CPU that allocated it", and let apps override
> when that policy doesn't fit them.

I think John's point is that latency may no longer be the predominant
factor for certain parts of the CPU and GPU world. What if a Phi has
MCDRAM physically attached, but DDR4 connected via QPI still has lower
total latency (perhaps a stretch for Phi, but not for GPUs with deep
request-sorting memory controllers)? Lowest latency is probably the
wrong choice there. Latency has really been a numeric proxy for physical
proximity, under the assumption that the most closely coupled memory is
the right placement, but HBM/MCDRAM is causing that relationship to
break down in all sorts of interesting ways.
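
[Editor's illustration] As a minimal sketch of the "let apps override"
path Dave describes above, an application can replace the default
nearest-node placement by binding a range to an explicit node with
mbind(2). The node id (1) and buffer size here are assumptions for
illustration only; real code would discover which node exposes the
MCDRAM/HBM (e.g. via libnuma or sysfs) rather than hard-coding it.
Build with: gcc example.c -lnuma

#include <numaif.h>      /* mbind(), MPOL_BIND (libnuma) */
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t len = 64UL << 20;                 /* 64 MiB buffer */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Bind the range to node 1 (assumed to be the MCDRAM/HBM node),
	 * overriding the default "allocate near the faulting CPU" policy. */
	unsigned long nodemask = 1UL << 1;
	if (mbind(buf, len, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, 0) != 0) {
		perror("mbind");
		return 1;
	}

	/* First touch faults the pages in, now constrained to node 1. */
	memset(buf, 0, len);
	munmap(buf, len);
	return 0;
}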