linux-mm.kvack.org archive mirror
* [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-10-30 18:33 Mel Gorman
  2005-10-30 18:34 ` [PATCH 1/7] Fragmentation Avoidance V19: 001_antidefrag_flags Mel Gorman
                   ` (7 more replies)
  0 siblings, 8 replies; 253+ messages in thread
From: Mel Gorman @ 2005-10-30 18:33 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, Mel Gorman, linux-kernel, lhms-devel

Hi Andrew,

This is the latest release of the fragmentation avoidance patches, with no
code changes since v18. If possible, I would like to get this into -mm, so
this patch set is generated against the latest -mm tree, 2.6.14-rc5-mm1, and
is known to apply cleanly. If there is another tree that should be diffed
against instead, just say so and I'll send another version.

Here are a few brief reasons why this set of patches is useful:

o Reduced fragmentation improves the chance that a large order allocation
  succeeds
o General-purpose memory hotplug needs the page/memory groupings this provides
o Reduces the number of badly-placed pages that the page migration mechanism
  must deal with. This also applies to any active page defragmentation
  mechanism
o This patch is a prerequisite for a linear scanning mechanism that could
  be used to guarantee large-page allocations

Built and tested successfully on a single-processor AMD machine, a
quad-processor Xeon machine and PPC64. Benchmarks were generated on the
Xeon machine.

Changelog since v18
o Resync against 2.6.14-rc5-mm1
o 004_markfree dropped
o Documentation note added on the behavior of free_area.nr_free

Changelog since v17
o Update to 2.6.14-rc4-mm1
o Remove explicit casts where implicit casts were in place
o Change __GFP_USER to __GFP_EASYRCLM, RCLM_USER to RCLM_EASY and PCPU_USER to
  PCPU_EASY
o Print a warning and return NULL if both RCLM flags are set in the GFP flags
o Reduce size of fallback_allocs
o Change magic number 64 to FREE_AREA_USEMAP_SIZE
o CodingStyle regressions cleanup
o Move sparsemem setup_usemap() out of header
o Changed fallback_balance to a mechanism that depends on zone->present_pages
  to avoid hotplug problems later
o Many superfluous parentheses removed

Changelog since v16
o Variables used with bit operations are now unsigned long. Note that when
  used as indices, they are integers and cast to unsigned long when necessary,
  because aim9 shows regressions (~10% slowdown) when unsigned longs are used
  throughout
o 004_showfree added to provide more debugging information
o 008_stats dropped. Even with CONFIG_ALLOCSTATS disabled, it was causing
  severe performance regressions; no explanation as to why
o for_each_rclmtype_order moved to header
o More coding style cleanups

Changelog since V14 (V15 not released)
o Update against 2.6.14-rc3
o Resync with Joel's work. All suggestions made on fix-ups to his last
  set of patches should also be in here, e.g. __GFP_USER is still __GFP_USER
  but is better commented.
o Large amount of CodingStyle, readability cleanups and corrections pointed
  out by Dave Hansen.
o Fix CONFIG_NUMA error that corrupted per-cpu lists
o Patches broken out to have one-feature-per-patch rather than
  more-code-per-patch
o Fix fallback bug where pages for RCLM_NORCLM end up on random other
  free lists.

Changelog since V13
o Patches are now broken out
o Added per-cpu draining of userrclm pages
o Brought the patch more in line with memory hotplug work
o Fine-grained use of the __GFP_USER and __GFP_KERNRCLM flags
o Many coding-style corrections
o Many whitespace-damage corrections

Changelog since V12
o Minor whitespace damage fixed as pointed by Joel Schopp

Changelog since V11
o Mainly a rediff against 2.6.12-rc5
o Use #defines for indexing into pcpu lists
o Fix rounding error in the size of usemap

Changelog since V10
o All allocation types now use per-cpu caches like the standard allocator
o Removed all the additional buddy allocator statistic code
o Eliminated three zone fields that could be lived without
o Simplified some loops
o Removed many unnecessary calculations

Changelog since V9
o Tightened which pools are used for fallbacks, making fragmentation less
  likely
o Many micro-optimisations to have the same performance as the standard 
  allocator. Modified allocator now faster than standard allocator using
  gcc 3.3.5
o Add counter for splits/coalescing

Changelog since V8
o rmqueue_bulk() allocates pages in large blocks and breaks them up into the
  requested size. Reduces the number of calls to __rmqueue()
o Beancounters are now a configurable option under "Kernel Hacking"
o Broke out some code into inline functions to be more Hotplug-friendly
o Increased the size of reserve for fallbacks from 10% to 12.5%. 

Changelog since V7
o Updated to 2.6.11-rc4
o Lots of cleanups, mainly related to beancounters
o Fixed up a miscalculation in the bitmap size as pointed out by Mike Kravetz
  (thanks Mike)
o Introduced a 10% reserve for fallbacks. Drastically reduces the number of
  kernnorclm allocations that go to the wrong places
o Don't trigger OOM when large allocations are involved

Changelog since V6
o Updated to 2.6.11-rc2
o Minor change to allow prezeroing to be a cleaner looking patch

Changelog since V5
o Fixed up gcc-2.95 errors
o Fixed up whitespace damage

Changelog since V4
o No changes. Applies cleanly against 2.6.11-rc1 and 2.6.11-rc1-bk6. Applies
  with offsets to 2.6.11-rc1-mm1

Changelog since V3
o inlined get_pageblock_type() and set_pageblock_type()
o set_pageblock_type() now takes a zone parameter to avoid a call to page_zone()
o When taking from the global pool, do not scan all the low-order lists

Changelog since V2
o Do not interfere with the "min" decay
o Update the __GFP_BITS_SHIFT properly. Old value broke fsync and probably
  anything to do with asynchronous IO
  
Changelog since V1
o Update patch to 2.6.11-rc1
o Cleaned up bug where memory was wasted on a large bitmap
o Remove code that needed the binary buddy bitmaps
o Update flags to avoid colliding with __GFP_ZERO changes
o Extended fallback_count bean counters to show the fallback count for each
  allocation type
o In-code documentation

Version 1
o Initial release against 2.6.9

This patch set is designed to reduce fragmentation in the standard buddy
allocator without impairing its performance. High fragmentation in the
standard binary buddy allocator means that high-order allocations can
rarely be serviced. The patches work by dividing allocations into three
different types:

EasyReclaimable - These are userspace pages that are easily reclaimable. This
	flag is set when it is known that the pages will be trivially reclaimed
	by writing the page out to swap or syncing with backing storage

KernelReclaimable - These are pages allocated by the kernel that are easily
	reclaimed. This is stuff like inode caches, dcache, buffer_heads etc.
	These types of pages could potentially be reclaimed by dumping the
	caches and reaping the slabs

KernelNonReclaimable - These are pages that are allocated by the kernel that
	are not trivially reclaimed. For example, the memory allocated for a
	loaded module would be in this category. By default, allocations are
	considered to be of this type

Instead of having one global MAX_ORDER-sized array of free lists, there
are four: one for each type of allocation, plus another held in reserve
for fallbacks.
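
To make the structure concrete, here is a minimal C sketch of the
arrangement, using the RCLM_* and __GFP_* names mentioned in the changelogs
above. The exact field names and the RCLM_TYPES constant are illustrative
assumptions, not the literal code from the patches:

	/* Sketch only -- names and layout are illustrative */
	#define RCLM_NORCLM	0	/* kernel non-reclaimable (default) */
	#define RCLM_KERN	1	/* kernel reclaimable */
	#define RCLM_EASY	2	/* easily reclaimable userspace */
	#define RCLM_FALLBACK	3	/* reserve pool for fallbacks */
	#define RCLM_TYPES	4

	struct zone {
		/* ... */
		/* One set of MAX_ORDER free lists per allocation type,
		 * in place of the single global free_area array */
		struct free_area free_area_lists[RCLM_TYPES][MAX_ORDER];
		/* ... */
	};

	/* Map a caller's GFP flags to an allocation type; the patches
	 * warn and fail the allocation if both RCLM flags are set */
	static inline int gfpflags_to_rclmtype(unsigned long gfp_flags)
	{
		if (gfp_flags & __GFP_EASYRCLM)
			return RCLM_EASY;
		if (gfp_flags & __GFP_KERNRCLM)
			return RCLM_KERN;
		return RCLM_NORCLM;
	}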

Once a 2^MAX_ORDER block of pages is split for a type of allocation, it is
added to the free lists for that type, in effect reserving it. Hence, over
time, pages of the different types tend to cluster together. This means that
if a 2^MAX_ORDER block of pages were required, the system could linearly scan
a block of pages allocated for EasyReclaimable and page each of them out.

Fallback is used when there are no 2^MAX_ORDER pages available and there
are no free pages of the desired type. The fallback lists were chosen in a
way that keeps the most easily reclaimable pages together.
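
As an illustration of how such an ordering might be expressed (a sketch
only; the actual fallback_allocs table in the patches may differ in both
shape and contents), each allocation type could carry a preference list
that is walked when its own free lists are empty:

	/* Illustrative only: pools tried, in order, when the desired
	 * type has no free pages.  The reserve pool is tried first; for
	 * kernel allocations, the easily-reclaimed pool is raided only
	 * as a last resort. */
	static int fallback_allocs[RCLM_TYPES][3] = {
		[RCLM_NORCLM] = { RCLM_FALLBACK, RCLM_KERN,   RCLM_EASY },
		[RCLM_KERN]   = { RCLM_FALLBACK, RCLM_NORCLM, RCLM_EASY },
		[RCLM_EASY]   = { RCLM_FALLBACK, RCLM_KERN,   RCLM_NORCLM },
	};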

Three benchmark results are included, all based on a 2.6.14-rc3 kernel
compiled with gcc 3.4 (it is known that gcc 2.95 produces different results).
The first is the output of portions of AIM9 for the vanilla allocator and
the modified one:

(Tests run with bench-aim9.sh from VMRegress 0.17)
2.6.14-rc5-mm1-clean
------------------------------------------------------------------------------------------------------------
 Test        Test        Elapsed  Iteration    Iteration          Operation
Number       Name      Time (sec)   Count   Rate (loops/sec)    Rate (ops/sec)
------------------------------------------------------------------------------------------------------------
     1 creat-clo           60.04        961   16.00600        16006.00 File Creations and Closes/second
     2 page_test           60.02       4149   69.12696       117515.83 System Allocations & Pages/second
     3 brk_test            60.04       1555   25.89940       440289.81 System Memory Allocations/second
     4 jmp_test            60.00     250768 4179.46667      4179466.67 Non-local gotos/second
     5 signal_test         60.01       4849   80.80320        80803.20 Signal Traps/second
     6 exec_test           60.00        741   12.35000           61.75 Program Loads/second
     7 fork_test           60.06        797   13.27006         1327.01 Task Creations/second
     8 link_test           60.01       5269   87.80203         5531.53 Link/Unlink Pairs/second

2.6.14-rc3-mbuddy-v19
------------------------------------------------------------------------------------------------------------
 Test        Test        Elapsed  Iteration    Iteration          Operation
Number       Name      Time (sec)   Count   Rate (loops/sec)    Rate (ops/sec)
------------------------------------------------------------------------------------------------------------
     1 creat-clo           60.04        954   15.88941        15889.41 File Creations and Closes/second
     2 page_test           60.01       4133   68.87185       117082.15 System Allocations & Pages/second
     3 brk_test            60.02       1546   25.75808       437887.37 System Memory Allocations/second
     4 jmp_test            60.00     250797 4179.95000      4179950.00 Non-local gotos/second
     5 signal_test         60.01       5121   85.33578        85335.78 Signal Traps/second
     6 exec_test           60.00        743   12.38333           61.92 Program Loads/second
     7 fork_test           60.05        806   13.42215         1342.21 Task Creations/second
     8 link_test           60.00       5291   88.18333         5555.55 Link/Unlink Pairs/second

Difference in performance operations report generated by diff-aim9.sh
                   Clean   mbuddy-v19
                ---------- ----------
 1 creat-clo      16006.00   15889.41    -116.59 -0.73% File Creations and Closes/second
 2 page_test     117515.83  117082.15    -433.68 -0.37% System Allocations & Pages/second
 3 brk_test      440289.81  437887.37   -2402.44 -0.55% System Memory Allocations/second
 4 jmp_test     4179466.67 4179950.00     483.33  0.01% Non-local gotos/second
 5 signal_test    80803.20   85335.78    4532.58  5.61% Signal Traps/second
 6 exec_test         61.75      61.92       0.17  0.28% Program Loads/second
 7 fork_test       1327.01    1342.21      15.20  1.15% Task Creations/second
 8 link_test       5531.53    5555.55      24.02  0.43% Link/Unlink Pairs/second

In this test, there were small regressions in creat-clo, page_test and
brk_test. However, it is known that different kernel configurations, compilers
and even different runs show variances of +/- 3%.

The second benchmark tested CPU cache usage to make sure it was not getting
clobbered. The test was to render a large postscript file 10 times and take
the average. The results are:

2.6.14-rc5-mm1-clean:      Average: 43.254 real, 38.89 user, 0.042 sys
2.6.14-rc5-mm1-mbuddy-v19: Average: 43.212 real, 40.494 user, 0.044 sys

So there are no adverse cache effects. The last test shows that the modified
allocator can satisfy more high-order allocations, especially under load,
than the standard allocator. The test performs the following:

1. Start updatedb running in the background
2. Load a kernel module that tries to allocate high-order blocks on demand
3. Clean a kernel tree
4. Make 6 copies of the tree. As each copy finishes, a compile starts at -j2
5. Start compiling the primary tree
6. Sleep 1 minute while the 7 trees are being compiled
7. Use the kernel module to attempt 160 times to allocate a 2^10 block of pages
    - note, it only attempts 160 times, no matter how often it succeeds
    - An allocation is attempted every 1/10th of a second
    - Performance will get badly shot as it forces considerable amounts of
      pageout

The results of the allocations under load (load average around 18) were:

2.6.14-rc5-mm1 Clean
Order:                 10
Allocation type:       HighMem
Attempted allocations: 160
Success allocs:        30
Failed allocs:         130
DMA zone allocs:       0
Normal zone allocs:    7
HighMem zone allocs:   23
% Success:            18

2.6.14-rc5-mm1 MBuddy V19
Order:                 10
Allocation type:       HighMem
Attempted allocations: 160
Success allocs:        76
Failed allocs:         84
DMA zone allocs:       1
Normal zone allocs:    30
HighMem zone allocs:   45
% Success:            47

One thing that had to be changed for the 2.6.14-rc5-mm1 clean test was to
disable the OOM killer. In one test, the clean kernel achieved better results,
but it invoked the OOM killer a very large number of times to do so. The
kernel with the placement policy never invoked the OOM killer.

The above results are not very dramatic, but the effect is very noticeable
when the system is at rest after the test completes. After the test, the
standard allocator was able to satisfy 45 order-10 allocations while the
modified allocator satisfied 159. The ability to allocate large pages under
load depends heavily on the decisions of kswapd, so there can be large
variances in results, but that is a separate problem. It is also known that
the success of large allocations depends on the location of per-cpu pages,
but fixing that problem is a separate issue.

The results show that the modified allocator has comparable speed and no
adverse cache effects, but leaves memory far less fragmented and is in a
better position to satisfy high-order allocations.
-- 
Mel Gorman
Part-time PhD Student                          Java Applications Developer
University of Limerick                         IBM Dublin Software Lab

* Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-11-04  1:00 Andy Nelson
  2005-11-04  1:16 ` Martin J. Bligh
  2005-11-04  5:14 ` Linus Torvalds
  0 siblings, 2 replies; 253+ messages in thread
From: Andy Nelson @ 2005-11-04  1:00 UTC (permalink / raw)
  To: mbligh, torvalds
  Cc: akpm, arjan, arjanv, haveblue, kravetz, lhms-devel, linux-kernel,
	linux-mm, mel, mingo, nickpiggin




Linus writes:

>Just face it - people who want memory hotplug had better know that 
>beforehand (and let's be honest - in practice it's only going to work in 
>virtualized environments or in environments where you can insert the new 
>bank of memory and copy it over and remove the old one with hw support).
>
>Same as hugetlb.
>
>Nobody sane _cares_. Nobody sane is asking for these things. Only people 
>with special needs are asking for it, and they know their needs.


Hello, my name is Andy. I am insane. I am one of the CRAZY PEOPLE you wrote
about. I am the whisperer in people's minds, causing them to conspire 
against sanity everywhere and make lives as insane and crazy as mine is.
I love my work. I am an astrophysicist. I have lurked on various linux 
lists for years now, and this is my first time standing in front of all 
you people, hoping to make you bend your insane and crazy kernel developing
minds to listen to the rantings of my insane and crazy HPC mind.

I have done high performance computing in astrophysics for nearly two
decades now. It gives me a perspective that kernel developers usually
don't have, but sometimes need. For my part, I promise that I specifically
do *not* have the perspective of a kernel developer. I don't even speak C.

I don't really know what you folks do all day or night, and I actually 
don't much care except when it impacts my own work. I am fairly certain
a lot of this hotplug/page defragmentation/page faulting/page zeroing
stuff that the SGI and IBM folk are currently failing to get accepted
into the kernel impacts my work in very serious ways. You're
right, I do know my needs. They are not being met and the people with the
power to do anything about it call me insane and crazy and refuse to be
interested even in making improvement possible, even when it quite likely
helps them too.

Today I didn't hear a voice in my head that told me to shoot the pope, but
I did hear one telling me to write a note telling you about my issues,
which apparently are in the 0.01% of insane crazies that should be
ignored, as are about 1/2 of the people responding on this thread.
I'll tell you a bit about my issues and their context now that things
have gotten hot enough that even a devout lurker like me is posting. Some
of it might make sense. Other parts may be internally inconsistent if only
I knew enough. Still other parts may be useful to get people who don't
talk to each other in contact, and think about things in ways they haven't.


I run large hydrodynamic simulations using a variety of techniques
whose relevance is only tangential to the current flamefest. I'll let you
know the important details as they come in later. A lot of my statements
will be common to a large fraction of all hpc applications, and I imagine
to many large scale database applications as well though I'm guessing a
bit there.

I run the same codes on many kinds of systems from workstations up
to large supercomputing platforms. Mostly my experience has been
in shared memory systems, but recently I've been part of things that
will put me into distributed memory space as well.

What does it mean to use computers like I do? Maybe this is surprising
but my executables are very very small. Typically 1-2MB or less, with
only a bit more needed for various external libraries like FFTW or
the like. On the other hand, my memory requirements are huge. Typically
many GB, and some folks run simulations with many TB.  Picture a very
small and very fast flea repeatedly jumping around all over the skin of a
very large elephant, taking a bite at each jump and that is a crude idea
of what is happening.


This has bearing on the current discussion in the following ways, which
are not theoretical in any way. 


1) Some of these simulations frequently need to access data that is 
   located very far away in memory. That means that the bigger your
   pages are, the fewer TLB misses you get, the smaller the
   thrashing, and the faster your code runs. 

   One example: I have a particle hydrodynamics code that uses gravity.
   Molecular dynamics simulations have similar issues with long range
   forces too. Gravity is calculated by culling acceptable nodes and atoms
   out of a tree structure that can be many GB in size, or for bigger
   jobs, many TB. You have to traverse the entire tree for every particle
   (or closely packed small group). During this stage, almost every node
   examination (a simple compare and about 5 flops) requires at least one
   TLB miss and depending on how you've laid out your array, several TLB
   misses. Huge pages help this problem, big time. Fine with me if all I
   had was one single page. If I am stupid and get into swap territory, I
   deserve every bad thing that happens to me.

   Now you have a list of a few thousand nodes and atoms with their data
   spread sparsely over that entire multi-GB memory volume. Grab data
   (about 10 double precision numbers) for one node, do 40-50 flops with
   it, and repeat, L1 and TLB thrashing your way through the entire list.
   There are some tricks that work some times (preload an L1 sized array
   of node data and use it for an entire group of particles, then discard
   it for another preload if there is more data; dimension arrays in the
   right direction, so you get multiple loads from the same cache line
   etc) but such things don't always work or aren't always useful.

   I can easily imagine database apps doing things not too dissimilar to
   this. With my particular code, I have measured factors of several (\sim
   3-4) speedup with large pages compared to small. This was measured on
   an Origin 3000, where 64k, 1M and 16MB pages were used. Not a factor
   of several percent. A factor of several. I have also measured similar
   sorts of speedups on other types of machines. It is also not a factor
   related to NUMA. I can see other effects from that source and can
   distinguish between them.

   Another example: Take a code that discretizes space on a grid
   in 3d and does something to various variables to make them evolve.
   You've got 3d arrays many GB in size, and for various calculations 
   you have to sweep through them in each direction: x, y and z. Going 
   in the z direction means that you are leaping across huge slices of 
   memory every time you increment the grid zone by 1. In some codes 
   only a few calculations are needed per zone. For example you want 
   to take a derivative:

        deriv = (rho(i,j,k+1) - rho(i,j,k-1))/dz(k)

   (I speak fortran, so the last index is the slow one here).

   Again, every calculation strides through huge distances and gets you
   a TLB miss or several. Note for the unwary: it usually does not make
   sense to transpose the arrays so that the fast index is the one you
   work with. You don't have enough memory for one thing and you pay
   for the TLB overhead in the transpose anyway.

   In both examples, with large pages the chances of getting a TLB hit 
   are far far higher than with small pages. That means I want truly 
   huge pages. Assuming pages at all (various arches don't have them
   I think), a single one that covered my whole memory would be fine. 
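
   To put rough numbers on the TLB reach argument (figures are
   illustrative; entry counts vary widely by CPU), consider a TLB
   with 128 data entries:

	128 entries x 64 KB pages = 8 MB of address space covered
	128 entries x 16 MB pages = 2 GB of address space covered

   With small pages, a random walk through a multi-GB tree misses the
   TLB on essentially every node access; with 16 MB pages, a \sim 2 GB
   working set can stay mapped the whole time.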


   Other codes don't seem to benefit so much from large pages, or even
   benefit from small pages, though my experience is minimal with 
   such codes. Other folks run them on the same machines I do though:


2) The last paragraph above is important because of the way HPC
   works as an industry. We often don't just have a dedicated machine to
   run on, that gets booted once and one dedicated application runs on it
   till it dies or gets rebooted again. Many jobs run on the same machine.
   Some jobs run for weeks. Others run for a few hours over and over
   again. Some run massively parallel. Some run throughput.

   How is this situation handled? With a batch scheduler. You submit
   a job to run and ask for X cpus, Y memory and Z time. It goes and
   fits you in wherever it can. cpusets were helpful infrastructure
   in linux for this. 

   You may get some cpus on one side of the machine, some more
   on the other, and memory associated with still others. They
   do a pretty good job of allocating resources sanely, but there is
   only so much that it can do. 

   The important point here for page-related discussions is that
   someone, you don't know who, was running on those cpus and memory
   before you. And doing Ghu Knows What with it.

   This code could be running something that benefits from small pages, or
   it could be running with large pages. It could be dynamically
   allocating and freeing large or small blocks of memory or it could be
   allocating everything at the beginning and running statically
   thereafter. Different codes do different things. That means that the
   memory state could be totally fubar'ed before your job ever gets
   any time allocated to it.

>Nobody takes a random machine and says "ok, we'll now put our most 
>performance-critical database on this machine, and oh, btw, you can't 
>reboot it and tune for it beforehand". 

   Wanna bet?

   What I wrote above makes tuning the machine itself totally ineffective.
   What do you tune for? Tuning for one person's code makes someone else's
   slower. Tuning for the same code on one input makes another input run
   horribly. 
   
   You also can't be rebooting after every job. What about all the other
   ones that weren't done yet? You'd piss off everyone running there and
   it takes too long besides.

   What about a machine that is running multiple instances of some
   database, some bigger or smaller than others, or doing other kinds
   of work? Do you penalize the big ones or the small ones, this kind
   of work or that?


   You also can't establish zones that can't be changed on the fly
   as things on the system change. How do zones like that fit into
   numa? How do things work when suddenly you've got a job that wants
   the entire memory filled with large pages and you've only got 
   half your system set up for large pages? What if you tune the
   system that way and then let that job run, and for some stupid user
   reason it dies 10 minutes after starting? Do you let the 30
   other jobs in the queue sit idle because they want a different
   page distribution?

   This way lies madness. Sysadmins just say no and set up the machine
   as stably as they can, usually with something not too different from
   whatever the manufacturer recommends as a default. For very good reasons.

   I would bet the only kind of zone stuff that could even possibly
   work would be related to a cpu/memset zone arrangement. See below.


3) I have experimented quite a bit with the page merge infrastructure
   that exists on irix. I understand that similar large page and merge
   infrastructure exists on solaris, though I haven't run on such systems.
   I can get very good page distributions if I run immediately after
   reboot. I get progressively worse distributions if my job runs only
   a few days or weeks later. 

   My experience is that after some days or weeks of running have gone
   by, there is no possible way short of a reboot to get pages merged
   effectively back to any pristine state with the infrastructure that 
   exists there.
   
   Some improvement can be had however, with a bit of pain. What I
   would like to see is not a theoretical, general, all purpose 
   defragmentation and hotplug scheme, but one that can work effectively
   with the kinds of constraints that a batch scheduler imposes.
   I would even imagine that a more general scheduler type of situation
   could be effective if that scheduler was smart enough. God knows,
   the scheduler in linux has been rewritten often enough. What is
   one more time for this purpose too?

   You may claim that this sort of merge stuff requires excessive time
   for the OS. Nothing could matter to me less. I've got those cpu's
   full time for the next X days and if I want them to spend the first
   5 minutes or whatever of my run making the place comfortable, so that
   my job gets done three days earlier then I want to spend that time. 


4) The thing is that all of this memory management at this level is not
   the batch scheduler's job, it's the OS's job. The thing that will make
   it work is that in the case of a reasonably intelligent batch scheduler
   (there are many), you are absolutely certain that nothing else is
   running on those cpus and that memory. Except whatever the kernel
   sprinkled in and didn't clean up afterwards. 

   So why can't the kernel clean up after itself? Why does the kernel need
   to keep anything in this memory anyway? I supposedly have a guarantee
   that it is mine, but it goes and immediately violates that guarantee
   long before I even get started. I want all that kernel stuff gone from
   my allocation and reset to a nice, sane pristine state.

   The thing that would make all of it work is good fragmentation and
   hotplug type stuff in the kernel. Push everything that the kernel did
   to my memory into the bitbucket and start over. There shouldn't be
   anything there that it needs to remember from before anyway. Perhaps
   this is what the defragmentation stuff is supposed to help with.
   Probably it has other uses that aren't on my agenda. Like pulling out
   bad ram sticks or whatever. Perhaps there are things that need to be
   remembered. Certainly being able to hotunplug those pieces would do it.
   Just do everything but unplug it from the board, and then do a hotplug
   to turn it back on. 


5) You seem to claim that issues I wrote about above are 'theoretical
   general cases'. They are not, at least not to any more people than the
   0.01% of people who regularly time their kernel builds, as I saw someone
   doing some emails ago. Using that sort of argument as a reason not to
   incorporate this sort of infrastructure just about made me fall out of
   my chair, especially in the context of keeping the sane case sane.
   Since this thread has long since lost decency and meaning and descended
   into name calling, I suppose I'll pitch in with that too on two fronts: 
   
   1) I'd say someone making that sort of argument is doing some very serious 
   navel gazing. 

   2) Here's a cluebat: that ain't one of the sane cases you wrote about.



That said, it appears to me there are a variety of constituencies that
have some serious interest in this infrastructure.

1) HPC stuff

2) big database stuff.

3) people who are pushing hotplug for other reasons like the
   bad memory replacement stuff I saw discussed.  

4) Whatever else the hotplug folk want that I don't follow.


Seems to me that is a bit more than 0.01%.



>When you hear voices in your head that tell you to shoot the pope, do you 
>do what they say? Same thing goes for customers and managers. They are the 
>crazy voices in your head, and you need to set them right, not just 
>blindly do what they ask for.

I don't care if you do what I ask for, but I do start getting irate and
start writing long annoyed letters if I can't do what I need to do, and
find out that someone could do something about it but refuses.

That said, I'm not so hot any more so I'll just unplug now.


Andy Nelson



PS: I read these lists at an archive, so if responders want to rm me from
any cc's that is fine. I'll still read what I want or need to from there.



--
Andy Nelson                       Theoretical Astrophysics Division (T-6) 
andy dot nelson at lanl dot gov   Los Alamos National Laboratory
http://www.phys.lsu.edu/~andy     Los Alamos, NM 87545











* Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-11-04 15:19 Andy Nelson
  0 siblings, 0 replies; 253+ messages in thread
From: Andy Nelson @ 2005-11-04 15:19 UTC (permalink / raw)
  To: torvalds
  Cc: akpm, arjan, arjanv, haveblue, kravetz, lhms-devel, linux-kernel,
	linux-mm, mbligh, mel, mingo, nickpiggin



Nick Piggin wrote:
>Mel Gorman wrote:
>> On Fri, 4 Nov 2005, Nick Piggin wrote:
>>
>> Today's massive machines are tomorrow's desktop. Weak comment, I know, but
>> it's happened before.
>>

>Oh I wouldn't bet against it. And if desktops of the future are using
>100s of GB then they probably would be happy to use 64K pages as well.

Just a note. The data I referenced in my other post that can be found
on comp.arch uses 64k pages as the smallest page size in the study.
Pages sized 1M and 16M were the other two.

As I understand it, only a few arches have hw support for more than 2
page sizes, but my response is that they will eventually need them.
The larger the memory, the larger the possible page size needs to be
too. Otherwise you are just pushing out the problem for a few years.

Andy


* [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-11-04 17:03 Andy Nelson
  2005-11-04 17:49 ` Linus Torvalds
  2005-11-04 20:12 ` Ingo Molnar
  0 siblings, 2 replies; 253+ messages in thread
From: Andy Nelson @ 2005-11-04 17:03 UTC (permalink / raw)
  To: torvalds
  Cc: akpm, arjan, arjanv, haveblue, kravetz, lhms-devel, linux-kernel,
	linux-mm, mbligh, mel, mingo, nickpiggin






>On Fri, 4 Nov 2005, Andy Nelson wrote:
>> 
>> My measurements of factors of 3-4 on more than one hw arch don't
>> mean anything then?
>
>When I _know_ that modern hardware does what you tested at least two 
>orders of magnitude better than the hardware you tested?


Ok. In other posts you have skeptically accepted Power as a
`modern' architecture. I have just now dug out some numbers
for a slightly different problem running on a Power 5, specifically
an IBM p575 I think. These tests were done in June, while the others
were done more than 2.5 years ago. In other words, there may be
other small tuning optimizations that have gone in since then too.

The problem is a different configuration of particles, about 2 times
bigger (7 million) than the one in comp.arch (3 million, I think).
I would estimate that the data set in this test spans something like
2-2.5GB or so.

Here are the results:

cpus    4k pages   16m pages
1       4888.74s   2399.36s
2       2447.68s   1202.71s
4       1225.98s    617.23s
6        790.05s    418.46s
8        592.26s    310.03s
12       398.46s    210.62s
16       296.19s    161.96s
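
Taking ratios from the table, the 4k-page to 16m-page speedup is
4888.74/2399.36 = 2.04 on one cpu, falling to 296.19/161.96 = 1.83
on 16 cpus.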
 

These numbers were on a recent Linux. I don't know which one.

Now it looks like it is down to a factor of 2 or slightly more. That
is a totally different arch, one that I think you have accepted as
`modern', running the OS that you say doesn't need big page support.

Still a bit more than insignificant I would say.


>Think about it. 

Likewise.


Andy





  
  

* Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-11-04 17:56 Andy Nelson
  0 siblings, 0 replies; 253+ messages in thread
From: Andy Nelson @ 2005-11-04 17:56 UTC (permalink / raw)
  To: torvalds
  Cc: akpm, arjan, arjanv, haveblue, kravetz, lhms-devel, linux-kernel,
	linux-mm, mbligh, mel, mingo, nickpiggin



Correction:
>and you'll see why. Capsule form: Every tree node results in several 

read 

>and you'll see why. Capsule form: Every tree traversal results in several 


Andy

* Re: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-11-04 21:51 Andy Nelson
  0 siblings, 0 replies; 253+ messages in thread
From: Andy Nelson @ 2005-11-04 21:51 UTC (permalink / raw)
  To: gmaxwell
  Cc: akpm, arjan, arjanv, haveblue, kravetz, lhms-devel, linux-kernel,
	linux-mm, mbligh, mel, mingo, nickpiggin, torvalds

Hi folks,

It sounds like in principle I (`I' = generic HPC person) could be
happy with this sort of solution. The proof of the pudding is
in the eating, however, and various perversions and misunderstandings
can still crop up. Hopefully they can be solved or avoided if they
do show up, though. Also, other folk might not be so satisfied.
I'll let them speak for themselves though.

One issue remaining is that I don't know how this hugetlbfs stuff 
that was discussed actually works or should work, in terms of 
the interface to my code. What would work for me is something to 
the effect of

f90 -flag_that_turns_access_to_big_pages_on code.f

That then substitutes in allocation calls to this hugetlbfs zone
instead of `normal' allocation calls to generic memory, and perhaps
lets me fall back to normal memory up to whatever system limits may
exist if no big pages are available.

Or even something more simple like 

setenv HEY_OS_I_WANT_BIG_PAGES_FOR_MY_JOB  

or alternatively, a similar request in a batch script.
I don't know that any of these things really have much to do
with the OS directly however.
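
For reference, a minimal sketch of what driving the existing hugetlbfs
interface looks like from C (assuming hugetlbfs is mounted at /mnt/huge
and huge pages have been reserved via /proc/sys/vm/nr_hugepages; the
helper name and mount point are hypothetical). A compiler flag or
environment variable like the ones above would have to arrange something
equivalent underneath the allocator:

	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	/* Map 'bytes' of huge-page-backed memory; 'bytes' must be a
	 * multiple of the huge page size (e.g. 16MB on Power) */
	static void *huge_alloc(size_t bytes)
	{
		const char *path = "/mnt/huge/scratch";
		int fd = open(path, O_CREAT | O_RDWR, 0600);
		void *p;

		if (fd < 0)
			return NULL;
		p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		unlink(path);	/* the mapping keeps the pages alive */
		close(fd);
		return (p == MAP_FAILED) ? NULL : p;
	}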


Thanks all, and have a good weekend.

Andy




* RE: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-11-05  1:37 Seth, Rohit
  2005-11-07  0:34 ` Andy Nelson
  0 siblings, 1 reply; 253+ messages in thread
From: Seth, Rohit @ 2005-11-05  1:37 UTC (permalink / raw)
  To: Nick Piggin, Andi Kleen
  Cc: Gregory Maxwell, Andy Nelson, mingo, akpm, arjan, arjanv,
	haveblue, kravetz, lhms-devel, linux-kernel, linux-mm, mbligh,
	mel, torvalds


>These are essentially the same problems that the frag patches face as
>well.

>> None of this is very attractive.
>> 

>Though it is simple and I expect it should actually do a really good
>job for the non-kernel-intensive HPC group, and the highly tuned
>database group.

Not sure how applications can seamlessly use the proposed hugetlb zone
based on hugetlbfs.  Depending on the programming language, it might
actually need changes in libs/tools etc.

As far as databases are concerned, I think they mostly already grab vast
chunks of memory to be used as hugepages (particularly for big mem
systems), which is a separate list of pages.  And they are actually also
glad that the kernel never looks at them for any other purpose.

-rohit

* RE: [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19
@ 2005-11-05  1:52 Seth, Rohit
  0 siblings, 0 replies; 253+ messages in thread
From: Seth, Rohit @ 2005-11-05  1:52 UTC (permalink / raw)
  To: Linus Torvalds, Andy Nelson
  Cc: akpm, arjan, arjanv, haveblue, kravetz, lhms-devel, linux-kernel,
	linux-mm, mbligh, mel, mingo, nickpiggin


>If I remember correctly, ia64 used to suck horribly because Linux had to
>use a mode where the hw page table walker didn't work well (maybe it was
>just an itanium 1 bug), but should be better now. But x86 probably kicks
>its butt.

I don't remember a difference of more than (roughly) 30 percentage
points even on first generation Itaniums (using hugetlb vs normal
pages), and a few more percentage points when the walker was disabled.
Over time the page table walker on IA-64 has gotten more aggressive.


...though I believe that 30% is a lot of performance.

-rohit



Thread overview: 253+ messages
2005-10-30 18:33 [PATCH 0/7] Fragmentation Avoidance V19 Mel Gorman
2005-10-30 18:34 ` [PATCH 1/7] Fragmentation Avoidance V19: 001_antidefrag_flags Mel Gorman
2005-10-30 18:34 ` [PATCH 2/7] Fragmentation Avoidance V19: 002_usemap Mel Gorman
2005-10-30 18:34 ` [PATCH 3/7] Fragmentation Avoidance V19: 003_fragcore Mel Gorman
2005-10-30 18:34 ` [PATCH 4/7] Fragmentation Avoidance V19: 004_fallback Mel Gorman
2005-10-30 18:34 ` [PATCH 5/7] Fragmentation Avoidance V19: 005_largealloc_tryharder Mel Gorman
2005-10-30 18:34 ` [PATCH 6/7] Fragmentation Avoidance V19: 006_percpu Mel Gorman
2005-10-30 18:34 ` [PATCH 7/7] Fragmentation Avoidance V19: 007_stats Mel Gorman
2005-10-31  5:57 ` [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19 Mike Kravetz
2005-10-31  6:37   ` Nick Piggin
2005-10-31  7:54     ` Andrew Morton
2005-10-31  7:11       ` Nick Piggin
2005-10-31 16:19         ` Mel Gorman
2005-10-31 23:54           ` Nick Piggin
2005-11-01  1:28             ` Mel Gorman
2005-11-01  1:42               ` Nick Piggin
2005-10-31 14:34       ` Martin J. Bligh
2005-10-31 19:24         ` Andrew Morton
2005-10-31 19:40           ` Martin J. Bligh
2005-10-31 23:59             ` Nick Piggin
2005-11-01  1:36               ` Mel Gorman
2005-10-31 23:29         ` Nick Piggin
2005-11-01  0:59           ` Mel Gorman
2005-11-01  1:31             ` Nick Piggin
2005-11-01  2:07               ` Mel Gorman
2005-11-01  2:35                 ` Nick Piggin
2005-11-01 11:57                   ` Mel Gorman
2005-11-01 13:56                     ` Ingo Molnar
2005-11-01 14:10                       ` Dave Hansen
2005-11-01 14:29                         ` Ingo Molnar
2005-11-01 14:49                           ` Dave Hansen
2005-11-01 15:01                             ` Ingo Molnar
2005-11-01 15:22                               ` Dave Hansen
2005-11-02  8:49                                 ` Ingo Molnar
2005-11-02  9:02                                   ` Nick Piggin
2005-11-02  9:17                                     ` Ingo Molnar
2005-11-02  9:32                                     ` Dave Hansen
2005-11-02  9:48                                       ` Nick Piggin
2005-11-02 10:54                                         ` Dave Hansen
2005-11-02 15:02                                         ` Martin J. Bligh
2005-11-03  3:21                                           ` Nick Piggin
2005-11-03 15:36                                             ` Martin J. Bligh
2005-11-03 15:40                                               ` Arjan van de Ven
2005-11-03 15:51                                                 ` Linus Torvalds
2005-11-03 15:57                                                   ` Martin J. Bligh
2005-11-03 16:20                                                   ` Arjan van de Ven
2005-11-03 16:27                                                   ` Mel Gorman
2005-11-03 16:46                                                     ` Linus Torvalds
2005-11-03 16:52                                                       ` Martin J. Bligh
2005-11-03 17:19                                                         ` Linus Torvalds
2005-11-03 17:48                                                           ` Dave Hansen
2005-11-03 17:51                                                           ` Martin J. Bligh
2005-11-03 17:59                                                             ` Arjan van de Ven
2005-11-03 18:08                                                               ` Linus Torvalds
2005-11-03 18:17                                                                 ` Martin J. Bligh
2005-11-03 18:44                                                                   ` Linus Torvalds
2005-11-03 18:51                                                                     ` Martin J. Bligh
2005-11-03 19:35                                                                       ` Linus Torvalds
2005-11-03 22:40                                                                         ` Martin J. Bligh
2005-11-03 22:56                                                                           ` Linus Torvalds
2005-11-03 23:01                                                                             ` Martin J. Bligh
2005-11-04  0:58                                                                   ` Nick Piggin
2005-11-04  1:06                                                                     ` Linus Torvalds
2005-11-04  1:20                                                                       ` Paul Mackerras
2005-11-04  1:22                                                                       ` Nick Piggin
2005-11-04  1:48                                                                         ` Mel Gorman
2005-11-04  1:59                                                                           ` Nick Piggin
2005-11-04  2:35                                                                             ` Mel Gorman
2005-11-04  1:26                                                                       ` Mel Gorman
2005-11-03 21:11                                                                 ` Mel Gorman
2005-11-03 18:03                                                             ` Linus Torvalds
2005-11-03 20:00                                                               ` Paul Jackson
2005-11-03 20:46                                                               ` Mel Gorman
2005-11-03 18:48                                                             ` Martin J. Bligh
2005-11-03 19:08                                                               ` Linus Torvalds
2005-11-03 22:37                                                                 ` Martin J. Bligh
2005-11-03 23:16                                                                   ` Linus Torvalds
2005-11-03 23:39                                                                     ` Martin J. Bligh
2005-11-04  0:42                                                                       ` Nick Piggin
2005-11-04  4:39                                                                     ` Andrew Morton
2005-11-04 16:22                                                                 ` Mel Gorman
2005-11-03 15:53                                                 ` Martin J. Bligh
2005-11-02 14:57                                   ` Martin J. Bligh
2005-11-01 16:48                               ` Kamezawa Hiroyuki
2005-11-01 16:59                                 ` Kamezawa Hiroyuki
2005-11-01 17:19                                 ` Mel Gorman
2005-11-02  0:32                                   ` KAMEZAWA Hiroyuki
2005-11-02 11:22                                     ` Mel Gorman
2005-11-01 18:06                                 ` linux-os (Dick Johnson)
2005-11-02  7:19                                 ` Ingo Molnar
2005-11-02  7:46                                   ` Gerrit Huizenga
2005-11-02  8:50                                     ` Nick Piggin
2005-11-02  9:12                                       ` Gerrit Huizenga
2005-11-02  9:37                                         ` Nick Piggin
2005-11-02 10:17                                           ` Gerrit Huizenga
2005-11-02 23:47                                           ` Rob Landley
2005-11-03  4:43                                             ` Nick Piggin
2005-11-03  6:07                                               ` Rob Landley
2005-11-03  7:34                                                 ` Nick Piggin
2005-11-03 17:54                                                   ` Rob Landley
2005-11-03 20:13                                                     ` Jeff Dike
2005-11-03 16:35                                                 ` Jeff Dike
2005-11-03 16:23                                                   ` Badari Pulavarty
2005-11-03 18:27                                                     ` Jeff Dike
2005-11-03 18:49                                                     ` Rob Landley
2005-11-04  4:52                                                     ` Andrew Morton
2005-11-04  5:35                                                       ` Paul Jackson
2005-11-04  5:48                                                         ` Andrew Morton
2005-11-04  6:42                                                           ` Paul Jackson
2005-11-04  7:10                                                             ` Andrew Morton
2005-11-04  7:45                                                               ` Paul Jackson
2005-11-04  8:02                                                                 ` Andrew Morton
2005-11-04  9:52                                                                   ` Paul Jackson
2005-11-04 15:27                                                                     ` Martin J. Bligh
2005-11-04 15:19                                                               ` Martin J. Bligh
2005-11-04 17:38                                                                 ` Andrew Morton
2005-11-04  6:16                                                         ` Bron Nelson
2005-11-04  7:26                                                       ` [patch] swapin rlimit Ingo Molnar
2005-11-04  7:36                                                         ` Andrew Morton
2005-11-04  8:07                                                           ` Ingo Molnar
2005-11-04 10:06                                                             ` Paul Jackson
2005-11-04 15:24                                                             ` Martin J. Bligh
2005-11-04  8:18                                                           ` Arjan van de Ven
2005-11-04 10:04                                                             ` Paul Jackson
2005-11-04 15:14                                                           ` Rob Landley
2005-11-04 10:14                                                         ` Bernd Petrovitsch
2005-11-04 10:21                                                           ` Ingo Molnar
2005-11-04 11:17                                                             ` Bernd Petrovitsch
2005-11-02 10:41                                     ` [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19 Ingo Molnar
2005-11-02 11:04                                       ` Gerrit Huizenga
2005-11-02 12:00                                         ` Ingo Molnar
2005-11-02 12:42                                           ` Dave Hansen
2005-11-02 15:02                                           ` Gerrit Huizenga
2005-11-03  0:10                                             ` Rob Landley
2005-11-02  7:57                                   ` Nick Piggin
2005-11-02  0:51                             ` Nick Piggin
2005-11-02  7:42                               ` Dave Hansen
2005-11-02  8:24                                 ` Nick Piggin
2005-11-02  8:33                                   ` Yasunori Goto
2005-11-02  8:43                                     ` Nick Piggin
2005-11-02 14:51                                       ` Martin J. Bligh
2005-11-02 23:28                                       ` Rob Landley
2005-11-03  5:26                                         ` Jeff Dike
2005-11-03  5:41                                           ` Rob Landley
2005-11-04  3:26                                             ` [uml-devel] " Blaisorblade
2005-11-04 15:50                                               ` Rob Landley
2005-11-04 17:18                                                 ` Blaisorblade
2005-11-04 17:44                                                   ` Rob Landley
2005-11-02 12:38                               ` [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19 - Summary Mel Gorman
2005-11-03  3:14                                 ` Nick Piggin
2005-11-03 12:19                                   ` Mel Gorman
2005-11-10 18:47                                     ` Steve Lord
2005-11-03 15:34                                   ` Martin J. Bligh
2005-11-01 14:41                       ` [Lhms-devel] [PATCH 0/7] Fragmentation Avoidance V19 Mel Gorman
2005-11-01 14:46                         ` Ingo Molnar
2005-11-01 15:23                           ` Mel Gorman
2005-11-01 18:33                           ` Rob Landley
2005-11-01 19:02                             ` Ingo Molnar
2005-11-01 14:50                         ` Dave Hansen
2005-11-01 15:24                           ` Mel Gorman
2005-11-02  5:11                         ` Andrew Morton
2005-11-01 18:23                       ` Rob Landley
2005-11-01 20:31                         ` Joel Schopp
2005-11-01 20:59                   ` Joel Schopp
2005-11-02  1:06                     ` Nick Piggin
2005-11-02  1:41                       ` Martin J. Bligh
2005-11-02  2:03                         ` Nick Piggin
2005-11-02  2:24                           ` Martin J. Bligh
2005-11-02  2:49                             ` Nick Piggin
2005-11-02  4:39                               ` Martin J. Bligh
2005-11-02  5:09                                 ` Nick Piggin
2005-11-02  5:14                                   ` Martin J. Bligh
2005-11-02  6:23                                     ` KAMEZAWA Hiroyuki
2005-11-02 10:15                                       ` Nick Piggin
2005-11-02  7:19                               ` Yasunori Goto
2005-11-02 11:48                               ` Mel Gorman
2005-11-02 11:41                           ` Mel Gorman
2005-11-02 11:37                       ` Mel Gorman
2005-11-02 15:11                       ` Mel Gorman
2005-11-01 15:25               ` Martin J. Bligh
2005-11-01 15:33                 ` Dave Hansen
2005-11-01 16:57                   ` Mel Gorman
2005-11-01 17:00                     ` Mel Gorman
2005-11-01 18:58                   ` Rob Landley
2005-11-01 14:40         ` Avi Kivity
2005-11-04  1:00 Andy Nelson
2005-11-04  1:16 ` Martin J. Bligh
2005-11-04  1:27   ` Nick Piggin
2005-11-04  5:14 ` Linus Torvalds
2005-11-04  6:10   ` Paul Jackson
2005-11-04  6:38     ` Ingo Molnar
2005-11-04  7:26       ` Paul Jackson
2005-11-04  7:37         ` Ingo Molnar
2005-11-04 15:31       ` Linus Torvalds
2005-11-04 15:39         ` Martin J. Bligh
2005-11-04 15:53         ` Ingo Molnar
2005-11-06  7:34           ` Paul Jackson
2005-11-06 15:55             ` Linus Torvalds
2005-11-06 18:18               ` Paul Jackson
2005-11-06  8:44         ` Kyle Moffett
2005-11-06 16:12           ` Linus Torvalds
2005-11-06 17:00             ` Linus Torvalds
2005-11-07  8:00               ` Ingo Molnar
2005-11-07 11:00                 ` Dave Hansen
2005-11-07 12:20                   ` Ingo Molnar
2005-11-07 19:34                     ` Steven Rostedt
2005-11-07 23:38                       ` Joel Schopp
2005-11-04  7:44     ` Eric Dumazet
2005-11-07 16:42       ` Adam Litke
2005-11-04 14:56   ` Andy Nelson
2005-11-04 15:18     ` Ingo Molnar
2005-11-04 15:39       ` Andy Nelson
2005-11-04 16:05         ` Ingo Molnar
2005-11-04 16:07         ` Linus Torvalds
2005-11-04 16:40           ` Ingo Molnar
2005-11-04 17:22             ` Linus Torvalds
2005-11-04 17:43               ` Andy Nelson
2005-11-04 16:00     ` Linus Torvalds
2005-11-04 16:13       ` Martin J. Bligh
2005-11-04 16:40         ` Linus Torvalds
2005-11-04 17:10           ` Martin J. Bligh
2005-11-04 16:14       ` Andy Nelson
2005-11-04 16:49         ` Linus Torvalds
2005-11-04 15:19 Andy Nelson
2005-11-04 17:03 Andy Nelson
2005-11-04 17:49 ` Linus Torvalds
2005-11-04 17:51   ` Andy Nelson
2005-11-04 20:12 ` Ingo Molnar
2005-11-04 21:04   ` Andy Nelson
2005-11-04 21:14     ` Ingo Molnar
2005-11-04 21:22     ` Linus Torvalds
2005-11-04 21:39       ` Linus Torvalds
2005-11-05  2:48       ` Rob Landley
2005-11-06 10:59       ` Paul Jackson
2005-11-04 21:31     ` Gregory Maxwell
2005-11-04 22:43       ` Andi Kleen
2005-11-05  0:07         ` Nick Piggin
2005-11-06  1:30         ` Zan Lynx
2005-11-06  2:25           ` Rob Landley
2005-11-04 17:56 Andy Nelson
2005-11-04 21:51 Andy Nelson
2005-11-05  1:37 Seth, Rohit
2005-11-07  0:34 ` Andy Nelson
2005-11-07 18:58   ` Adam Litke
2005-11-07 20:51     ` Rohit Seth
2005-11-07 20:55       ` Andy Nelson
2005-11-07 20:58         ` Martin J. Bligh
2005-11-07 21:20           ` Rohit Seth
2005-11-07 21:33             ` Adam Litke
2005-11-08  2:12         ` David Gibson
2005-11-07 21:11       ` Adam Litke
2005-11-07 21:31         ` Rohit Seth
2005-11-05  1:52 Seth, Rohit
