From: "Takayoshi Kochi" <takayoshi.kochi@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter <clameter@sgi.com>,
linux-kernel@vger.kernel.org, Arnd Bergmann <arnd@arndb.de>
Subject: Re: NUMA BOF @OLS
Date: Mon, 25 Jun 2007 11:45:12 -0700
Message-ID: <43c301fe0706251145q3249ddcar3e723ae7db8d6ebc@mail.gmail.com>
In-Reply-To: <200706221214.58823.arnd@arndb.de>
Hi all,
I'll host another mm-related BOF at OLS:
Discussion for the Future of Linux Memory Management
Saturday Jun 30th, 2007 14:45-15:30
I'll share some experiences with MM-related real-world issues there.
Anyone who has something to pitch in is welcome.
Please contact me or grab me at OLS.
Any topics that spill over from the NUMA BOF are welcome!
2007/6/22, Arnd Bergmann <arnd@arndb.de>:
> On Friday 22 June 2007, Christoph Lameter wrote:
> >
> > On Fri, 22 Jun 2007, Arnd Bergmann wrote:
> >
> > > - Interface for preallocating hugetlbfs pages per node instead of system wide
> >
> > We may want to get a bit higher level than that: a general way of
> > controlling subsystem use on nodes. One wants to restrict the slab
> > allocator, the kernel, etc. on particular nodes too.
> >
> > How will this interact with the other NUMA policy specifications?
>
> I guess that's what I'd like to discuss at the BOF. I frequently
> get requests from users who need such an interface: applications
> currently break if they try to use /proc/sys/vm/nr_hugepages
> in combination with numactl --membind.
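
(To make that failure mode concrete, here is a minimal C sketch -- not from
the original mail -- of what such an application does: the pool is sized
system-wide through /proc/sys/vm/nr_hugepages, but a task bound to one node
only finds out at fault time whether that node actually got any of the
reserved pages. The /mnt/huge mount point, the 2MB huge page size and node 0
are assumptions for illustration; mbind() comes from libnuma's numaif.h.)

/*
 * Minimal sketch: map one huge page from hugetlbfs while bound to node 0.
 * Assumptions: hugetlbfs mounted at /mnt/huge, 2MB huge pages.
 * Build with: cc -o hugedemo hugedemo.c -lnuma
 */
#include <fcntl.h>
#include <numaif.h>             /* mbind(), MPOL_BIND */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

int main(void)
{
        unsigned long nodemask = 1UL << 0;      /* node 0 only */
        int fd;
        void *p;

        fd = open("/mnt/huge/demo", O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* same effect as running under numactl --membind=0 */
        if (mbind(p, HPAGE_SIZE, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0) < 0)
                perror("mbind");

        /*
         * The huge page is only instantiated on first touch; if none of
         * the pages reserved via nr_hugepages sit on node 0, this is
         * where the application blows up.
         */
        memset(p, 0, HPAGE_SIZE);
        puts("got a huge page on node 0");

        munmap(p, HPAGE_SIZE);
        close(fd);
        unlink("/mnt/huge/demo");
        return 0;
}

If the system-wide pool happened to be filled from other nodes, that first
touch typically ends in SIGBUS rather than a huge page, which is why a
per-node preallocation interface keeps coming up.
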
>
> > > - architecture-independent in-kernel API for enumerating CPU sockets with
> > > multicore processors (not sure if that's the same as your existing subject).
> >
> > Not sure what you mean by this. We already have a topology interface and
> > the scheduler knows about these things.
>
> I'm not referring to user interfaces or scheduling. It's probably not really
> a NUMA topic, but we currently use the topology interfaces for enumerating
> sockets on systems that are not really NUMA. This includes per-socket
> stuff like:
> * cpufreq settings (these have their own logic currently)
> * IOMMU
> * performance counters
> * thermal management
> * local interrupt controller
> * PCI/HT host bridge
>
> If you have a system with multiple CPUs in one socket and either multiple
> sockets in one NUMA node or no NUMA at all, you have no way of properly
> enumerating the sockets. I'd like to discuss what such an interface
> would need to look like to be useful for all architectures.
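
(For comparison, here is a small user-space sketch -- again not from the
thread -- of how sockets get enumerated today through the sysfs topology
files, by counting distinct physical_package_id values. That works per CPU
from user space, but there is no first-class socket object that the in-kernel
users listed above could attach state to, which is the gap being described.)

/*
 * Count sockets by collecting distinct physical_package_id values.
 * Assumes fewer than 1024 packages; CPUs without a topology directory
 * (e.g. offline ones) are simply skipped.
 */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
        int seen[1024] = { 0 };
        int sockets = 0;
        struct dirent *de;
        DIR *d = opendir("/sys/devices/system/cpu");

        if (!d) {
                perror("opendir");
                return 1;
        }

        while ((de = readdir(d)) != NULL) {
                unsigned int cpu, pkg;
                char path[128];
                FILE *f;

                if (sscanf(de->d_name, "cpu%u", &cpu) != 1)
                        continue;
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%u/topology/physical_package_id",
                         cpu);
                f = fopen(path, "r");
                if (!f)
                        continue;
                if (fscanf(f, "%u", &pkg) == 1 && pkg < 1024 && !seen[pkg]) {
                        seen[pkg] = 1;
                        sockets++;
                }
                fclose(f);
        }
        closedir(d);
        printf("%d socket(s)\n", sockets);
        return 0;
}
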
>
> Arnd <><
--
Takayoshi Kochi