From: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
To: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Tony Luck <tony.luck@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@suse.de>,
David Rientjes <rientjes@google.com>,
Mike Galbraith <umgwanakikbuti@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
linux-hotplug@vger.kernel.org,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [RFC Patch V1 00/30] Enable memoryless node on x86 platforms
Date: Mon, 18 Aug 2014 16:30:41 -0700 [thread overview]
Message-ID: <20140818233041.GA15310@linux.vnet.ibm.com> (raw)
In-Reply-To: <53D1B7C9.9040907@linux.intel.com>
Hi Gerry,
On 25.07.2014 [09:50:01 +0800], Jiang Liu wrote:
>
>
> On 2014/7/25 7:32, Nishanth Aravamudan wrote:
> > On 23.07.2014 [16:20:24 +0800], Jiang Liu wrote:
> >>
> >>
> >> On 2014/7/22 1:57, Nishanth Aravamudan wrote:
> >>> On 21.07.2014 [10:41:59 -0700], Tony Luck wrote:
> >>>> On Mon, Jul 21, 2014 at 10:23 AM, Nishanth Aravamudan
> >>>> <nacc@linux.vnet.ibm.com> wrote:
> >>>>> It seems like the issue is the order of onlining of resources on a
> >>>>> specific x86 platform?
> >>>>
> >>>> Yes. When we online a node the BIOS hits us with some ACPI hotplug events:
> >>>>
> >>>> First: Here are some new cpus
> >>>
> >>> Ok, so during this period, you might get some remote allocations. Do you
> >>> know the topology of these CPUs? That is, do they belong to a
> >>> (soon-to-exist) NUMA node? Can you online that currently offline NUMA
> >>> node at this point (so that NODE_DATA() resolves, etc.)?
> >> Hi Nishanth,
> >> We have a method to get the NUMA information about the CPU, and
> >> patch "[RFC Patch V1 30/30] x86, NUMA: Online node earlier when doing
> >> CPU hot-addition" tries to solve this issue by onlining the NUMA node
> >> as early as possible. Actually we are trying to enable memoryless
> >> nodes as you have suggested.
> >
> > Ok, it seems like you have two sets of patches then? One is to fix the
> > NUMA information timing (30/30 only). The rest of the patches are
> > general discussions about where cpu_to_mem() might be used instead of
> > cpu_to_node(). However, based upon Tejun's feedback, it seems like
> > rather than forcing all callers to use cpu_to_mem(), we should be looking
> > at the core VM to ensure fallback is occurring appropriately when
> > memoryless nodes are present.
> >
> > Do you have a specific situation, once you've applied 30/30, where
> > kmalloc_node() leads to an Oops?
> Hi Nishanth,
> After following the two threads related to memoryless node support and
> digging further into the code, I realized my first version patch set is
> overkill. As Tejun has pointed out, we shouldn't expose the details of
> memoryless nodes to normal users, but there are still some special
> users who need those details. So I have tried to summarize it as:
> 1) Arch code should online the corresponding NUMA node before onlining
> any CPU or memory, otherwise accessing NODE_DATA(nid) may cause an
> invalid memory access.
I think that's reasonable.
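Just to be concrete about the hazard in 1), here's a rough, untested
sketch; the function is made up purely for illustration, with
try_online_node(), NODE_DATA() and cpu_up() standing in for the existing
interfaces:

#include <linux/cpu.h>
#include <linux/memory_hotplug.h>
#include <linux/mmzone.h>
#include <linux/printk.h>

/* Illustrative only -- not code from the patch set. */
static int hypothetical_hot_add_cpu(unsigned int cpu, int nid)
{
        int ret;

        /* Arch code onlines the node first (allocating pg_data_t if needed)... */
        ret = try_online_node(nid);
        if (ret)
                return ret;

        /* ...so NODE_DATA(nid) is a valid pointer from here on... */
        pr_debug("node %d spans %lu pages\n", nid,
                 NODE_DATA(nid)->node_spanned_pages);

        /* ...and only then is the CPU itself brought up. */
        return cpu_up(cpu);
}

If the CPU (or memory) were onlined first, any NODE_DATA(nid)
dereference on that path would hit an unallocated pg_data_t, which I
take to be exactly the invalid access you describe.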
A related caveat is that NUMA topology information should be stored as
early as possible in boot for *all* CPUs [I think only cpu_to_* is used,
at least for now], not just the boot CPU, etc. This is because (at least
from my examination) pre-SMP initcalls are not prevented from using
cpu_to_node, which will falsely return 0 for all CPUs until
set_cpu_numa_node() is called.
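As a contrived illustration (untested, not from the series), an early
initcall like this would cheerfully print node 0 for every CPU unless
set_cpu_numa_node() has already been called for all possible CPUs by
that point:

#include <linux/init.h>
#include <linux/printk.h>
#include <linux/topology.h>

/* Contrived example: runs pre-SMP, before the secondary CPUs are up. */
static int __init report_cpu_nodes(void)
{
        int cpu;

        /*
         * Until the arch has done set_cpu_numa_node(cpu, nid) for every
         * possible CPU, cpu_to_node() reports 0 here, whatever the real
         * topology is.
         */
        for_each_possible_cpu(cpu)
                pr_info("cpu %d -> node %d\n", cpu, cpu_to_node(cpu));

        return 0;
}
early_initcall(report_cpu_nodes);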
> 2) For normal memory allocations without __GFP_THISNODE set in
> gfp_flags, we should prefer numa_node_id()/cpu_to_node() over
> numa_mem_id()/cpu_to_mem(), because the latter loses hardware topology
> information, as pointed out by Tejun:
> A - B - X - C - D
> Where X is the memoryless node. numa_mem_id() on X would return
> either B or C, right? If B or C can't satisfy the allocation,
> the allocator would fall back to A from B and to D from C, both of
> which aren't optimal. It should first fall back to C or B
> respectively, which the allocator can't do anymore because the
> information is lost when the caller side performs numa_mem_id().
Yes, this seems like a very good description of the reasoning.
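In code terms, and purely as an illustration on a CPU that belongs to
the memoryless node X above:

#include <linux/slab.h>
#include <linux/topology.h>

/* Illustrative only: allocate "near" a CPU sitting on memoryless node X. */
static void *alloc_near_cpu(size_t size, int cpu)
{
        /*
         * Passing X itself keeps X's zonelist, so the page allocator can
         * fall back B -> C -> A -> D, i.e. in true distance order from X.
         *
         * Passing cpu_to_mem(cpu) (say it resolves to B) would hand the
         * allocator B's zonelist instead; once B is exhausted the next
         * choice is A rather than C, because the fact that we started on
         * X has already been thrown away.
         */
        return kmalloc_node(size, GFP_KERNEL, cpu_to_node(cpu));
}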
> 3) For memory allocations with __GFP_THISNODE set in gfp_flags,
> numa_node_id()/cpu_to_node() should be used if the caller only wants to
> allocate from local memory, while numa_mem_id()/cpu_to_mem() should be
> used if the caller wants to allocate from the nearest node with memory.
>
> 4) numa_mem_id()/cpu_to_mem() should be used if the caller wants to
> check whether a page is allocated from the nearest node.
I'm less clear on what you mean here; I'll look at your v2 patches. I
mean, numa_node_id()/cpu_to_node() should be used to indicate node-local
preference with appropriate failure handling. But I don't know why one
would prefer numa_node_id() over numa_mem_id() in such a path? The only
time they differ is when memoryless nodes are present, and in that case
the nearest node with memory is what a "local" allocation for those
nodes would ideally resolve to anyway?
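For what it's worth, my reading of the distinction in 3), as a
hand-written sketch rather than code from your series:

#include <linux/gfp.h>
#include <linux/topology.h>

/* Rough sketch of 3): with __GFP_THISNODE there is no fallback at all. */
static struct page *thisnode_example(int cpu)
{
        struct page *page;

        /* "Local memory or nothing" -- always fails on a memoryless node. */
        page = alloc_pages_node(cpu_to_node(cpu),
                                GFP_KERNEL | __GFP_THISNODE, 0);
        if (!page)
                /* "Nearest node with memory, and only that node". */
                page = alloc_pages_node(cpu_to_mem(cpu),
                                        GFP_KERNEL | __GFP_THISNODE, 0);
        return page;
}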
Thanks,
Nish
Thread overview: 90+ messages
2014-07-11 7:37 Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 01/30] mm, kernel: Use cpu_to_mem()/numa_mem_id() to support memoryless node Jiang Liu
2014-07-11 15:14 ` Paul E. McKenney
2014-07-21 17:15 ` Nishanth Aravamudan
2014-07-21 17:33 ` Paul E. McKenney
2014-07-12 12:32 ` Jens Axboe
2014-07-11 7:37 ` [RFC Patch V1 02/30] mm, sched: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 03/30] mm, net: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 04/30] mm, netfilter: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 05/30] mm, perf: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 06/30] mm, tracing: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 07/30] mm: " Jiang Liu
2014-07-11 13:51 ` Christoph Lameter
2014-07-11 14:42 ` Tejun Heo
2014-07-11 15:13 ` Christoph Lameter
2014-07-11 15:21 ` Tejun Heo
2014-07-11 15:33 ` Tejun Heo
2014-07-11 15:55 ` Christoph Lameter
2014-07-11 15:58 ` Tejun Heo
2014-07-11 16:04 ` Christoph Lameter
2014-07-11 15:58 ` Christoph Lameter
2014-07-11 16:01 ` Tejun Heo
2014-07-11 16:19 ` Christoph Lameter
2014-07-11 16:24 ` Tejun Heo
2014-07-11 17:29 ` Christoph Lameter
2014-07-11 18:28 ` Tejun Heo
2014-07-11 19:11 ` Christoph Lameter
2014-07-23 3:16 ` Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 08/30] mm, thp: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 09/30] mm, memcg: " Jiang Liu
2014-07-18 7:36 ` Michal Hocko
2014-07-23 3:18 ` Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 10/30] mm, xfrm: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 11/30] mm, char/mspec.c: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 12/30] mm, IB/qib: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 13/30] mm, i40e: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 14/30] mm, i40evf: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 15/30] mm, igb: " Jiang Liu
2014-07-21 17:42 ` Nishanth Aravamudan
2014-07-21 19:53 ` Alexander Duyck
2014-07-21 21:09 ` Nishanth Aravamudan
2014-07-23 3:20 ` Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 16/30] mm, ixgbe: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 17/30] mm, intel_powerclamp: " Jiang Liu
2014-07-21 17:38 ` Nishanth Aravamudan
2014-07-11 7:37 ` [RFC Patch V1 18/30] mm, bnx2fc: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 19/30] mm, bnx2i: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 20/30] mm, fcoe: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 21/30] mm, irqchip: " Jiang Liu
2014-07-18 12:40 ` Jason Cooper
2014-07-23 3:47 ` Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 22/30] mm, of: " Jiang Liu
2014-07-21 17:52 ` Nishanth Aravamudan
2014-07-28 13:30 ` Grant Likely
2014-07-28 19:26 ` Nishanth Aravamudan
2014-07-11 7:37 ` [RFC Patch V1 23/30] mm, x86: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 24/30] mm, x86/platform/uv: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 25/30] mm, x86, kvm: " Jiang Liu
2014-07-11 7:44 ` Paolo Bonzini
2014-07-11 7:37 ` [RFC Patch V1 26/30] mm, x86, perf: " Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 27/30] x86, numa: Kill useless code to improve code readability Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 28/30] mm: Update _mem_id_[] for every possible CPU when memory configuration changes Jiang Liu
2014-07-21 17:47 ` Nishanth Aravamudan
2014-07-23 8:16 ` Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 29/30] mm, x86: Enable memoryless node support to better support CPU/memory hotplug Jiang Liu
2014-07-24 23:26 ` Nishanth Aravamudan
2014-07-25 1:41 ` Jiang Liu
2014-07-11 7:37 ` [RFC Patch V1 30/30] x86, NUMA: Online node earlier when doing CPU hot-addition Jiang Liu
2014-07-24 23:30 ` Nishanth Aravamudan
2014-07-25 1:43 ` Jiang Liu
2014-07-25 1:44 ` Jiang Liu
2014-07-11 8:29 ` [RFC Patch V1 00/30] Enable memoryless node on x86 platforms Peter Zijlstra
2014-07-11 15:33 ` Greg KH
2014-07-11 20:02 ` Dave Hansen
2014-07-11 20:20 ` Andi Kleen
2014-07-11 20:51 ` Peter Zijlstra
2014-07-11 21:58 ` Andi Kleen
2014-07-15 1:18 ` David Rientjes
2014-07-11 23:51 ` H. Peter Anvin
2014-07-11 22:40 ` Jiri Kosina
2014-07-15 1:19 ` David Rientjes
2014-07-18 17:48 ` Nish Aravamudan
2014-07-21 17:23 ` Nishanth Aravamudan
2014-07-21 17:41 ` Tony Luck
2014-07-21 17:57 ` Nishanth Aravamudan
2014-07-23 8:20 ` Jiang Liu
2014-07-24 23:32 ` Nishanth Aravamudan
2014-07-25 1:50 ` Jiang Liu
2014-08-18 23:30 ` Nishanth Aravamudan [this message]
2014-07-21 20:06 ` Peter Zijlstra