From: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
To: Andrew Dickinson <andrew@whydna.net>
Cc: linux-mm@kvack.org
Subject: Re: Memory/CPU affinity and Nehalem/QPI
Date: Tue, 28 Apr 2009 12:52:57 -0400	[thread overview]
Message-ID: <1240937577.6998.78.camel@lts-notebook> (raw)
In-Reply-To: <606676310904280915i3161fc90h367218482b19bbd6@mail.gmail.com>

On Tue, 2009-04-28 at 09:15 -0700, Andrew Dickinson wrote:
> Howdy linux-mm,
> 
> <background>
> I'm working on a kernel module which does some packet mangling based
> on the results of a memory lookup;  a packet comes in, I do a table
> lookup and if there's a match, I mangle the packet.  This process is
> 2-way; I encode in one direction and decode in the other.  I've found
> that I get better performance if I pin the interrupts of the 2 NICs in
> my system to different cores; I match the rx IRQs on one NIC and the
> tx IRQs on the other NIC to one set of cores and the other rx/tx pairs
> to another set of cores.  The reason for the IRQ pinning is that I
> spend less time passing table locks across cpu packages (at least,
> that's my theory).  My "current" system is a dual Xeon 5160
> (woodcrest).  It has a relatively low-speed FSB and passing memory
> from core-to-core seems to suck at high packet rates.
> </background>
> 
> I'm now testing a dual-package Nehalem system.  If I understand this
> architecture correctly, each package's memory controller is driving
> its own bank of RAM.  In my ideal world, I'd be able to provide a hint
> to kmalloc (or friends) such that my encode-table is stored close to
> one package and my decode-table is stored close to the other package.
> Is this something that I can control?  If so, how?  

You can use kmalloc_node() to allocate on a specific node--if your encode
table is, indeed, dynamically allocated.
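
Something along these lines would do it--just a minimal sketch; the table
names, sizes, and the hard-coded node numbers below are placeholders for
whatever your module actually uses:

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/slab.h>		/* kmalloc_node(), kfree(), GFP_KERNEL */

/* Placeholder sizes -- substitute your real table sizes. */
#define ENCODE_TABLE_SIZE	(256 * 1024)
#define DECODE_TABLE_SIZE	(256 * 1024)

static void *encode_table;
static void *decode_table;

static int __init alloc_tables(void)
{
	/*
	 * Put the encode table on node 0 and the decode table on node 1,
	 * to match wherever you've pinned the corresponding rx/tx IRQs.
	 */
	encode_table = kmalloc_node(ENCODE_TABLE_SIZE, GFP_KERNEL, 0);
	decode_table = kmalloc_node(DECODE_TABLE_SIZE, GFP_KERNEL, 1);

	if (!encode_table || !decode_table) {
		kfree(encode_table);	/* kfree(NULL) is a no-op */
		kfree(decode_table);
		return -ENOMEM;
	}
	return 0;
}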

> Does this matter
> with Intel's QPI or am I wasting my time?

I recently ran a numademo [from the numactl source package] test on a
2-node [== 2 socket], 2.93GHz Nehalem and saw this:

nelly:~ # taskset -c 0 numademo 256m    # run on cpu/node 0, 256M test buffer
2 nodes available
memory on node 0 memset                   Avg 6744.42 MB/s Max 6751.22 MB/s Min 6734.46 MB/s
memory on node 1 memset                   Avg 3900.29 MB/s Max 3904.86 MB/s Min 3890.31 MB/s
# remote = ~0.58 of local bandwidth

nelly:~ # taskset -c 1 numademo 256m    # run on cpu/node 1, 256M test buffer
2 nodes available
memory on node 0 memset                   Avg 3909.94 MB/s Max 3912.54 MB/s Min 3906.96 MB/s
memory on node 1 memset                   Avg 6668.64 MB/s Max 6677.33 MB/s Min 6657.96 MB/s
# remote = ~0.59 of local bandwidth

So, it's probably worth trying and measuring your results.
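If you do experiment with it, you don't have to hard-code node numbers,
either--the kernel can tell you which node a device sits on.  A rough,
untested sketch ('nic_pdev' is just a stand-in for your NIC's pci_dev):

#include <linux/device.h>	/* dev_to_node() */
#include <linux/numa.h>		/* NUMA_NO_NODE */
#include <linux/pci.h>
#include <linux/slab.h>		/* kmalloc_node(), GFP_KERNEL */
#include <linux/topology.h>	/* numa_node_id() */

/* Allocate a lookup table on the node closest to the given NIC. */
static void *alloc_table_near_nic(struct pci_dev *nic_pdev, size_t size)
{
	int node = dev_to_node(&nic_pdev->dev);

	/* Fall back to the current CPU's node if the firmware didn't
	 * report a locality for the device. */
	if (node == NUMA_NO_NODE)
		node = numa_node_id();

	return kmalloc_node(size, GFP_KERNEL, node);
}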

Lee


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
