From: Andrew Morton <akpm@osdl.org>
To: Christoph Lameter <clameter@sgi.com>
Cc: linux-mm@kvack.org, pj@sgi.com, jes@sgi.com,
	Andy Whitcroft <apw@shadowen.org>
Subject: Re: [1/3] Add __GFP_THISNODE to avoid fallback to other nodes and ignore cpuset/memory policy restrictions.
Date: Fri, 11 Aug 2006 12:15:40 -0700
Message-ID: <20060811121540.2253cae7.akpm@osdl.org>
In-Reply-To: <Pine.LNX.4.64.0608111150550.18495@schroedinger.engr.sgi.com>

On Fri, 11 Aug 2006 11:51:59 -0700 (PDT)
Christoph Lameter <clameter@sgi.com> wrote:

> On Fri, 11 Aug 2006, Andrew Morton wrote:
> 
> > How about we do
> > 
> > /*
> >  * We do this to avoid lots of ifdefs and their consequential conditional
> >  * compilation
> >  */
> > #ifdef CONFIG_NUMA
> > #define NUMA_BUILD 1
> > #else
> > #define NUMA_BUILD 0
> > #endif
> 
> Put this in kernel.h?

spose so.

> Sounds good but this sets a new precedent on how to avoid #ifdefs.

It does, a bit.  I'm not aware of any downside to it, really.  I got dinged
by Linus maybe five years back for this sort of thing.  He muttered something
about it defeating checkconfig or configcheck or some similar tool, which
probably doesn't exist any more.

Perhaps there is a downside.  But one could argue that NUMA is a
special case.  Let's try it in a couple of places and see how it goes?
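
For illustration, a minimal sketch of how a call site might then read
(alloc_local_page is a made-up helper, and __GFP_THISNODE is the flag this
series adds).  The point is that both branches get compiled and type-checked
under every config, and the compiler throws the dead one away when
NUMA_BUILD is 0:

static struct page *alloc_local_page(int nid, gfp_t gfp_mask)
{
	if (NUMA_BUILD)
		/* NUMA kernel: insist on memory from the requested node */
		return alloc_pages_node(nid, gfp_mask | __GFP_THISNODE, 0);
	/* !NUMA kernel: there is only one node anyway */
	return alloc_pages(gfp_mask, 0);
}

That's the advantage over an #ifdef'd block: the NUMA path can't silently
bit-rot in !NUMA builds.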


Thread overview: 26+ messages
2006-08-08 16:33 Christoph Lameter
2006-08-08 16:34 ` [2/3] sys_move_pages: Do not fall back to other nodes Christoph Lameter
2006-08-08 16:37   ` [3/3] Guarantee that the uncached allocator gets pages on the correct node Christoph Lameter
2006-08-08 16:56 ` [1/3] Add __GFP_THISNODE to avoid fallback to other nodes and ignore cpuset/memory policy restrictions Andy Whitcroft
2006-08-08 17:01   ` Christoph Lameter
2006-08-08 16:59 ` Mel Gorman
2006-08-08 17:03   ` Christoph Lameter
2006-08-08 17:16     ` Mel Gorman
2006-08-08 17:51       ` Christoph Lameter
2006-08-08 17:47     ` Paul Jackson
2006-08-08 17:59       ` Christoph Lameter
2006-08-08 18:18         ` Paul Jackson
2006-08-08 18:49           ` Christoph Lameter
2006-08-08 20:35             ` Paul Jackson
2006-08-09  9:33               ` Mel Gorman
2006-08-09  1:34 ` KAMEZAWA Hiroyuki
2006-08-09  2:00   ` Christoph Lameter
2006-08-10 19:41 ` Andrew Morton
2006-08-11  3:16   ` Christoph Lameter
2006-08-11 18:08     ` Andrew Morton
2006-08-11 18:15       ` Christoph Lameter
2006-08-11 18:42         ` Andrew Morton
2006-08-11 18:51           ` Christoph Lameter
2006-08-11 19:15             ` Andrew Morton [this message]
2006-08-11 19:16               ` Christoph Lameter
2006-08-11 19:41           ` Dave McCracken
