* [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8
From: Mel Gorman @ 2006-07-08 11:10 UTC
To: akpm
Cc: davej, tony.luck, Mel Gorman, ak, bob.picco, linux-kernel,
linuxppc-dev, linux-mm
This is V8 of the patchset to size zones and memory holes in an
architecture-independent manner. The notable additions in this release are
accounting for mem_map as a memory hole, as it is not reclaimable, and
optionally accounting for the kernel image as a memory hole. This matches
the existing behavior of x86_64.
Changelog since V7
o Rebase to 2.6.17-mm6
o Account for mem_map as a memory hole
o Adjust mem_map when arch independent zone-sizing is used and PFN 0 is in
a memory hole not accounted for by ARCH_PFN_OFFSET
Changelog since V6
o MAX_ACTIVE_REGIONS is now really the maximum number of active regions,
not MAX_ACTIVE_REGIONS-1
o MAX_ACTIVE_REGIONS is 256 unless the architecture specifically asks for
a different number or MAX_NUMNODES is >= 32
o nr_nodemap_entries tracks the number of entries rather than terminating with
end_pfn == 0
o Add a number of documentation-related comments; functions exposed by
headers may be picked up by kerneldoc
o Changed misleading zone_present_pages_in_node() name to
zone_spanned_pages_in_node()
o Be a bit more verbose to help debugging when things go wrong.
o On x86_64, end_pfn_map now gets updated properly or ACPI tables get "lost"
o Signoffs added to patches 1 and 5 by Bob Picco related to contributions,
fixes and reviews
Changelog since V5
o Add a missing #include to mm/mem_init.c
o Drop the verbose debugging part of the set
o Report active range registration when loglevel is set for KERN_DEBUG
Changelog since V4
o Rebase to 2.6.17-rc3-mm1
o Calculate holes on x86 with SRAT correctly
Changelog since V3
o Rebase to 2.6.17-rc2
o Allow the active regions to be cleared. Needed by x86_64 when it decides
the SRAT table is bad halfway through registering active regions
o Fix for flatmem x86_64 machines booting
Changelog since V2
o Fix a bug where holes in lower zones get double counted
o Catch the case where a new range is registered that is within an existing
range
o Catch the case where a zone boundary is within a hole
o Use the EFI map for registering ranges on x86_64+numa
o On IA64+NUMA, add the active ranges before rounding for granules
o On x86_64, remove e820_hole_size and e820_bootmem_free and use
arch-independent equivalents
o On x86_64, remove the map walk in e820_end_of_ram()
o Rename memory_present_with_active_regions, name ambiguous
o Add absent_pages_in_range() for arches to call
Changelog since V1
o Correctly convert virtual and physical addresses to PFNs on ia64
o Correctly convert physical addresses to PFN on older ppc
o When add_active_range() is called with overlapping pfn ranges, merge them
o When a zone boundary occurs within a memory hole, account correctly
o Minor whitespace damage cleanup
o Debugging patch temporarily included
At a basic level, architectures define structures to record where active
ranges of page frames are located. Once those ranges are known, the code to
calculate zone sizes and holes is very similar across architectures. Some of
this zone and hole sizing code is difficult to read for no good reason. This
set of patches eliminates the similar-looking architecture-specific code.
The patches introduce a mechanism by which architectures register the
active ranges of page frames with add_active_range(). When all areas
have been discovered, free_area_init_nodes() is called to initialise
the pgdat and zones. The zone sizes and holes are then calculated in an
architecture independent manner.
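To make the expected flow concrete, here is a minimal sketch of the
arch-side usage; struct my_mem_range, my_mem_table[] and the *_pfn
variables are invented names standing in for the architecture's real
memory description (e820, EFI, LMB and so on):

struct my_mem_range {
        unsigned int nid;
        unsigned long start_pfn, end_pfn;
};
extern struct my_mem_range my_mem_table[];
extern int my_nr_ranges;

void __init my_arch_paging_init(void)
{
        int i;

        /* Register every range of PFNs backed by physical RAM */
        for (i = 0; i < my_nr_ranges; i++)
                add_active_range(my_mem_table[i].nid,
                                my_mem_table[i].start_pfn,
                                my_mem_table[i].end_pfn);

        /* Pass the PFN each zone ends at; zone sizes and holes are
         * then calculated in an architecture-independent manner */
        free_area_init_nodes(max_dma_pfn, max_dma32_pfn,
                                max_low_pfn, max_high_pfn);
}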
Patch 1 introduces the mechanism for registering and initialising PFN ranges
Patch 2 changes ppc to use the mechanism - 134 arch-specific LOC removed
Patch 3 changes x86 to use the mechanism - 142 arch-specific LOC removed
Patch 4 changes x86_64 to use the mechanism - 78 arch-specific LOC removed
Patch 5 changes ia64 to use the mechanism - 57 arch-specific LOC removed
Patch 6 accounts for mem_map as a memory hole as the pages are not reclaimable.
It adjusts the watermarks slightly.
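To give a sense of scale for patch 6 (the figures here are illustrative,
not taken from the patch): assuming 4KiB pages and a 32-byte struct page,
mem_map costs 32/4096, or about 0.8%, of every page it describes, so a node
spanning 1GiB carries roughly 8MiB of mem_map that can never be reclaimed.
Treating it as a hole keeps the watermark calculations honest.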
I have successfully boot-tested the patches and verified that the zones are
the correct size on:
o x86, flatmem with 1.5GiB of RAM
o x86, NUMAQ
o x86 with SRAT CONFIG_NUMA=n
o PPC64, NUMA
o PPC64, CONFIG_NUMA=n
o PPC64, CONFIG_64BIT=N
o x86_64, NUMA with SRAT
o x86_64, NUMA with broken SRAT that falls back to k8topology discovery
o x86_64, CONFIG_NUMA=n
o x86_64, CONFIG_64=n
o x86_64, CONFIG_64=n, CONFIG_NUMA=n
o x86_64, ACPI_NUMA, ACPI_MEMORY_HOTPLUG && !SPARSEMEM to trigger the
hotadd path without the sparsemem fun in srat.c (SRAT is broken on the test
machine and I'm pretty sure it does not support physical memory hotadd
anyway, so the test may not have been effective beyond compile testing)
o ia64 (Itanium 2)
o ia64 (Itanium 2), CONFIG_64=N
Tony Luck has successfully tested for ia64 on Itanium with tiger_defconfig,
gensparse_defconfig and defconfig. Bob Picco has also tested and debugged
on IA64. Jack Steiner successfully boot tested on a mammoth SGI IA64-based
machine. Those tests used release 3 of these patches against 2.6.17-rc1,
but there have been no ia64-specific changes since release 3.
There are differences in the zone sizes for x86_64 because its arch-specific
code accounts the kernel image and the starting mem_maps as memory holes,
while the architecture-independent code accounts that memory as present.
The big benefit of this set of patches is the reduction of 411 lines of
architecture-specific code, some of which is very hairy. There should be
a greater net reduction when other architectures use the same mechanisms
for zone and hole sizing but I lack the hardware to test on.
Additional credit:
Dave Hansen for the initial suggestion and comments on early patches
Andy Whitcroft for reviewing early versions and catching numerous
errors
Tony Luck for testing and debugging on IA64
Bob Picco for fixing bugs related to pfn registration, reviewing a
number of patch revisions, providing a number of suggestions
on future direction and testing heavily
Jack Steiner and Robin Holt for testing on IA64 and clarifying
issues related to memory holes
Yasunori for testing on IA64
Andi Kleen for reviewing and feeding back about x86_64
Christian Kujau for providing valuable information related to ACPI
problems on x86_64 and testing potential fixes
--
Mel Gorman
Part-time PhD Student, University of Limerick
Linux Technology Center, IBM Dublin Software Lab
* [PATCH 1/6] Introduce mechanism for registering active regions of memory
From: Mel Gorman @ 2006-07-08 11:11 UTC
To: akpm
Cc: davej, tony.luck, Mel Gorman, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
This patch defines the structure to represent an active range of page
frames within a node in an architecture independent manner. Architectures
are expected to register active ranges of PFNs using add_active_range(nid,
start_pfn, end_pfn) and call free_area_init_nodes() passing the PFNs of
the end of each zone.
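As a worked example (all values invented for illustration), a UMA x86-style
machine with 4GiB of RAM, 4KiB pages and the usual hole below 1MiB might
register its memory as follows:

        /* RAM at 0-640KiB and 1MiB-4GiB; arguments are PFNs, end exclusive */
        add_active_range(0, 0, 160);
        add_active_range(0, 256, 1048576);

        /* Zone end PFNs: DMA at 16MiB, lowmem at 896MiB, highmem at 4GiB.
         * max_dma_pfn == max_dma32_pfn so ZONE_DMA32 is treated as empty. */
        free_area_init_nodes(4096, 4096, 229376, 1048576);

The hole between PFNs 160 and 256 is then accounted automatically when the
zone sizes and holes are calculated.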
include/linux/mm.h | 45 +++
include/linux/mmzone.h | 10
mm/page_alloc.c | 557 ++++++++++++++++++++++++++++++++++++++++++--
3 files changed, 586 insertions(+), 26 deletions(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Bob Picco <bob.picco@hp.com>
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-clean/include/linux/mm.h linux-2.6.17-mm6-101-add_free_area_init_nodes/include/linux/mm.h
--- linux-2.6.17-mm6-clean/include/linux/mm.h 2006-07-05 14:31:17.000000000 +0100
+++ linux-2.6.17-mm6-101-add_free_area_init_nodes/include/linux/mm.h 2006-07-06 11:04:22.000000000 +0100
@@ -960,6 +960,51 @@ extern void free_area_init(unsigned long
extern void free_area_init_node(int nid, pg_data_t *pgdat,
unsigned long * zones_size, unsigned long zone_start_pfn,
unsigned long *zholes_size);
+#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
+/*
+ * With CONFIG_ARCH_POPULATES_NODE_MAP set, an architecture may initialise its
+ * zones, allocate the backing mem_map and account for memory holes in a more
+ * architecture independent manner. This is a substitute for creating the
+ * zones_size[] and zholes_size[] arrays and passing them to
+ * free_area_init_node().
+ *
+ * An architecture is expected to register ranges of page frames backed by
+ * physical memory with add_active_range() before calling
+ * free_area_init_nodes(), passing in the PFN each zone ends at. For basic
+ * usage, an architecture is expected to do something like:
+ *
+ * for_each_valid_physical_page_range()
+ * add_active_range(node_id, start_pfn, end_pfn)
+ * free_area_init_nodes(max_dma, max_dma32, max_normal_pfn, max_highmem_pfn);
+ *
+ * If the architecture guarantees that there are no holes in the ranges
+ * registered with add_active_range(), free_bootmem_with_active_regions()
+ * will call free_bootmem_node() for each registered physical page range.
+ * Similarly sparse_memory_present_with_active_regions() calls
+ * memory_present() for each range when SPARSEMEM is enabled.
+ *
+ * See mm/page_alloc.c for more information on each function exposed by
+ * CONFIG_ARCH_POPULATES_NODE_MAP
+ */
+extern void free_area_init_nodes(unsigned long max_dma_pfn,
+ unsigned long max_dma32_pfn,
+ unsigned long max_low_pfn,
+ unsigned long max_high_pfn);
+extern void add_active_range(unsigned int nid, unsigned long start_pfn,
+ unsigned long end_pfn);
+extern void shrink_active_range(unsigned int nid, unsigned long old_end_pfn,
+ unsigned long new_end_pfn);
+extern void remove_all_active_ranges(void);
+extern unsigned long absent_pages_in_range(unsigned long start_pfn,
+ unsigned long end_pfn);
+extern void get_pfn_range_for_nid(unsigned int nid,
+ unsigned long *start_pfn, unsigned long *end_pfn);
+extern unsigned long find_min_pfn_with_active_regions(void);
+extern unsigned long find_max_pfn_with_active_regions(void);
+extern void free_bootmem_with_active_regions(int nid,
+ unsigned long max_low_pfn);
+extern void sparse_memory_present_with_active_regions(int nid);
+#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long);
extern void setup_per_zone_pages_min(void);
extern void mem_init(void);
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-clean/include/linux/mmzone.h linux-2.6.17-mm6-101-add_free_area_init_nodes/include/linux/mmzone.h
--- linux-2.6.17-mm6-clean/include/linux/mmzone.h 2006-07-05 14:31:17.000000000 +0100
+++ linux-2.6.17-mm6-101-add_free_area_init_nodes/include/linux/mmzone.h 2006-07-06 11:04:22.000000000 +0100
@@ -293,6 +293,13 @@ struct zonelist {
struct zone *zones[MAX_NUMNODES * MAX_NR_ZONES + 1]; // NULL delimited
};
+#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
+struct node_active_region {
+ unsigned long start_pfn;
+ unsigned long end_pfn;
+ int nid;
+};
+#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
/*
* The pg_data_t structure is used in machines with CONFIG_DISCONTIGMEM
@@ -493,7 +500,8 @@ extern struct zone *next_zone(struct zon
#endif
-#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
+#if !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) && \
+ !defined(CONFIG_ARCH_POPULATES_NODE_MAP)
#define early_pfn_to_nid(nid) (0UL)
#endif
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-clean/mm/page_alloc.c linux-2.6.17-mm6-101-add_free_area_init_nodes/mm/page_alloc.c
--- linux-2.6.17-mm6-clean/mm/page_alloc.c 2006-07-05 14:31:18.000000000 +0100
+++ linux-2.6.17-mm6-101-add_free_area_init_nodes/mm/page_alloc.c 2006-07-06 21:44:44.000000000 +0100
@@ -37,6 +37,8 @@
#include <linux/vmalloc.h>
#include <linux/mempolicy.h>
#include <linux/stop_machine.h>
+#include <linux/sort.h>
+#include <linux/pfn.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -86,6 +88,33 @@ int min_free_kbytes = 1024;
unsigned long __meminitdata nr_kernel_pages;
unsigned long __meminitdata nr_all_pages;
+#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
+ /*
+ * MAX_ACTIVE_REGIONS determines the maximum number of distinct
+ * ranges of memory (RAM) that may be registered with add_active_range().
+ * Ranges passed to add_active_range() will be merged if possible
+ * so the number of times add_active_range() can be called is
+ * related to the number of nodes and the number of holes
+ */
+ #ifdef CONFIG_MAX_ACTIVE_REGIONS
+ /* Allow an architecture to set MAX_ACTIVE_REGIONS to save memory */
+ #define MAX_ACTIVE_REGIONS CONFIG_MAX_ACTIVE_REGIONS
+ #else
+ #if MAX_NUMNODES >= 32
+ /* If there can be many nodes, allow up to 50 holes per node */
+ #define MAX_ACTIVE_REGIONS (MAX_NUMNODES*50)
+ #else
+ /* By default, allow up to 256 distinct regions */
+ #define MAX_ACTIVE_REGIONS 256
+ #endif
+ #endif
+
+ struct node_active_region __initdata early_node_map[MAX_ACTIVE_REGIONS];
+ int __initdata nr_nodemap_entries;
+ unsigned long __initdata arch_zone_lowest_possible_pfn[MAX_NR_ZONES];
+ unsigned long __initdata arch_zone_highest_possible_pfn[MAX_NR_ZONES];
+#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
+
#ifdef CONFIG_DEBUG_VM
static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
{
@@ -1728,25 +1757,6 @@ static inline unsigned long wait_table_b
#define LONG_ALIGN(x) (((x)+(sizeof(long))-1)&~((sizeof(long))-1))
-static void __init calculate_zone_totalpages(struct pglist_data *pgdat,
- unsigned long *zones_size, unsigned long *zholes_size)
-{
- unsigned long realtotalpages, totalpages = 0;
- int i;
-
- for (i = 0; i < MAX_NR_ZONES; i++)
- totalpages += zones_size[i];
- pgdat->node_spanned_pages = totalpages;
-
- realtotalpages = totalpages;
- if (zholes_size)
- for (i = 0; i < MAX_NR_ZONES; i++)
- realtotalpages -= zholes_size[i];
- pgdat->node_present_pages = realtotalpages;
- printk(KERN_DEBUG "On node %d totalpages: %lu\n", pgdat->node_id, realtotalpages);
-}
-
-
/*
* Initially all pages are reserved - free ones are freed
* up by free_all_bootmem() once the early boot process is
@@ -2064,6 +2074,272 @@ __meminit int init_currently_empty_zone(
return 0;
}
+#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
+/*
+ * Basic iterator support. Return the first range of PFNs for a node
+ * Note: nid == MAX_NUMNODES returns first region regardless of node
+ */
+static int __init first_active_region_index_in_nid(int nid)
+{
+ int i;
+
+ for (i = 0; i < nr_nodemap_entries; i++)
+ if (nid == MAX_NUMNODES || early_node_map[i].nid == nid)
+ return i;
+
+ return -1;
+}
+
+/*
+ * Basic iterator support. Return the next active range of PFNs for a node
+ * Note: nid == MAX_NUMNODES returns next region regardless of node
+ */
+static int __init next_active_region_index_in_nid(int index, int nid)
+{
+ for (index = index + 1; index < nr_nodemap_entries; index++)
+ if (nid == MAX_NUMNODES || early_node_map[index].nid == nid)
+ return index;
+
+ return -1;
+}
+
+#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
+/*
+ * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
+ * Architectures may implement their own version but if add_active_range()
+ * was used and there are no special requirements, this is a convenient
+ * alternative
+ */
+int __init early_pfn_to_nid(unsigned long pfn)
+{
+ int i;
+
+ for (i = 0; i < nr_nodemap_entries; i++) {
+ unsigned long start_pfn = early_node_map[i].start_pfn;
+ unsigned long end_pfn = early_node_map[i].end_pfn;
+
+ if (start_pfn <= pfn && pfn < end_pfn)
+ return early_node_map[i].nid;
+ }
+
+ return 0;
+}
+#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
+
+/* Basic iterator support to walk early_node_map[] */
+#define for_each_active_range_index_in_nid(i, nid) \
+ for (i = first_active_region_index_in_nid(nid); i != -1; \
+ i = next_active_region_index_in_nid(i, nid))
+
+/**
+ * free_bootmem_with_active_regions - Call free_bootmem_node for each active range
+ * @nid: The node to free memory on. If MAX_NUMNODES, all nodes are freed
+ * @max_low_pfn: The highest PFN that will be passed to free_bootmem_node
+ *
+ * If an architecture guarantees that all ranges registered with
+ * add_active_range() contain no holes and may be freed, this
+ * function may be used instead of calling free_bootmem() manually.
+ */
+void __init free_bootmem_with_active_regions(int nid,
+ unsigned long max_low_pfn)
+{
+ int i;
+
+ for_each_active_range_index_in_nid(i, nid) {
+ unsigned long size_pages = 0;
+ unsigned long end_pfn = early_node_map[i].end_pfn;
+
+ if (early_node_map[i].start_pfn >= max_low_pfn)
+ continue;
+
+ if (end_pfn > max_low_pfn)
+ end_pfn = max_low_pfn;
+
+ size_pages = end_pfn - early_node_map[i].start_pfn;
+ free_bootmem_node(NODE_DATA(early_node_map[i].nid),
+ PFN_PHYS(early_node_map[i].start_pfn),
+ size_pages << PAGE_SHIFT);
+ }
+}
+
+/**
+ * sparse_memory_present_with_active_regions - Call memory_present for each active range
+ * @nid: The node to call memory_present for. If MAX_NUMNODES, all nodes will be used
+ *
+ * If an architecture guarantees that all ranges registered with
+ * add_active_range() contain no holes and may be freed, this
+ * function may be used instead of calling memory_present() manually.
+ */
+void __init sparse_memory_present_with_active_regions(int nid)
+{
+ int i;
+
+ for_each_active_range_index_in_nid(i, nid)
+ memory_present(early_node_map[i].nid,
+ early_node_map[i].start_pfn,
+ early_node_map[i].end_pfn);
+}
+
+/**
+ * get_pfn_range_for_nid - Return the start and end page frames for a node
+ * @nid: The nid to return the range for. If MAX_NUMNODES, the min and max PFN are returned
+ * @start_pfn: Passed by reference. On return, it will have the node start_pfn
+ * @end_pfn: Passed by reference. On return, it will have the node end_pfn
+ *
+ * It returns the start and end page frame of a node based on information
+ * provided by an arch calling add_active_range(). If called for a node
+ * with no available memory, a warning is printed and the start and end
+ * PFNs will be 0.
+ */
+void __init get_pfn_range_for_nid(unsigned int nid,
+ unsigned long *start_pfn, unsigned long *end_pfn)
+{
+ int i;
+ *start_pfn = -1UL;
+ *end_pfn = 0;
+
+ for_each_active_range_index_in_nid(i, nid) {
+ *start_pfn = min(*start_pfn, early_node_map[i].start_pfn);
+ *end_pfn = max(*end_pfn, early_node_map[i].end_pfn);
+ }
+
+ if (*start_pfn == -1UL) {
+ printk(KERN_WARNING "Node %u active with no memory\n", nid);
+ *start_pfn = 0;
+ }
+}
+
+/*
+ * Return the number of pages a zone spans in a node, including holes
+ * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node()
+ */
+unsigned long __init zone_spanned_pages_in_node(int nid,
+ unsigned long zone_type,
+ unsigned long *ignored)
+{
+ unsigned long node_start_pfn, node_end_pfn;
+ unsigned long zone_start_pfn, zone_end_pfn;
+
+ /* Get the start and end of the node and zone */
+ get_pfn_range_for_nid(nid, &node_start_pfn, &node_end_pfn);
+ zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
+ zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
+
+ /* Check that this node has pages within the zone's required range */
+ if (zone_end_pfn < node_start_pfn || zone_start_pfn > node_end_pfn)
+ return 0;
+
+ /* Move the zone boundaries inside the node if necessary */
+ zone_end_pfn = min(zone_end_pfn, node_end_pfn);
+ zone_start_pfn = max(zone_start_pfn, node_start_pfn);
+
+ /* Return the spanned pages */
+ return zone_end_pfn - zone_start_pfn;
+}
+
+/*
+ * Return the number of holes in a range on a node. If nid is MAX_NUMNODES,
+ * then all holes in the requested range will be accounted for
+ */
+unsigned long __init __absent_pages_in_range(int nid,
+ unsigned long range_start_pfn,
+ unsigned long range_end_pfn)
+{
+ int i = 0;
+ unsigned long prev_end_pfn = 0, hole_pages = 0;
+ unsigned long start_pfn;
+
+ /* Find the end_pfn of the first active range of pfns in the node */
+ i = first_active_region_index_in_nid(nid);
+ if (i == -1)
+ return 0;
+
+ prev_end_pfn = early_node_map[i].start_pfn;
+
+ /* Find all holes for the zone within the node */
+ for (; i != -1; i = next_active_region_index_in_nid(i, nid)) {
+
+ /* No need to continue if prev_end_pfn is outside the zone */
+ if (prev_end_pfn >= range_end_pfn)
+ break;
+
+ /* Make sure the end of the zone is not within the hole */
+ start_pfn = min(early_node_map[i].start_pfn, range_end_pfn);
+ prev_end_pfn = max(prev_end_pfn, range_start_pfn);
+
+ /* Update the hole size count and move on */
+ if (start_pfn > range_start_pfn) {
+ BUG_ON(prev_end_pfn > start_pfn);
+ hole_pages += start_pfn - prev_end_pfn;
+ }
+ prev_end_pfn = early_node_map[i].end_pfn;
+ }
+
+ return hole_pages;
+}
+
+/**
+ * absent_pages_in_range - Return number of page frames in holes within a range
+ * @start_pfn: The start PFN to start searching for holes
+ * @end_pfn: The end PFN to stop searching for holes
+ *
+ * It returns the number of page frames in memory holes within a range.
+ */
+unsigned long __init absent_pages_in_range(unsigned long start_pfn,
+ unsigned long end_pfn)
+{
+ return __absent_pages_in_range(MAX_NUMNODES, start_pfn, end_pfn);
+}
+
+/* Return the number of page frames in holes in a zone on a node */
+unsigned long __init zone_absent_pages_in_node(int nid,
+ unsigned long zone_type,
+ unsigned long *ignored)
+{
+ return __absent_pages_in_range(nid,
+ arch_zone_lowest_possible_pfn[zone_type],
+ arch_zone_highest_possible_pfn[zone_type]);
+}
+#else
+static inline unsigned long zone_spanned_pages_in_node(int nid,
+ unsigned long zone_type,
+ unsigned long *zones_size)
+{
+ return zones_size[zone_type];
+}
+
+static inline unsigned long zone_absent_pages_in_node(int nid,
+ unsigned long zone_type,
+ unsigned long *zholes_size)
+{
+ if (!zholes_size)
+ return 0;
+
+ return zholes_size[zone_type];
+}
+#endif
+
+static void __init calculate_node_totalpages(struct pglist_data *pgdat,
+ unsigned long *zones_size, unsigned long *zholes_size)
+{
+ unsigned long realtotalpages, totalpages = 0;
+ int i;
+
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ totalpages += zone_spanned_pages_in_node(pgdat->node_id, i,
+ zones_size);
+ pgdat->node_spanned_pages = totalpages;
+
+ realtotalpages = totalpages;
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ realtotalpages -=
+ zone_absent_pages_in_node(pgdat->node_id, i,
+ zholes_size);
+ pgdat->node_present_pages = realtotalpages;
+ printk(KERN_DEBUG "On node %d totalpages: %lu\n", pgdat->node_id,
+ realtotalpages);
+}
+
/*
* Set up the zone data structures:
* - mark all pages reserved
@@ -2087,10 +2363,9 @@ static void __meminit free_area_init_cor
struct zone *zone = pgdat->node_zones + j;
unsigned long size, realsize;
- realsize = size = zones_size[j];
- if (zholes_size)
- realsize -= zholes_size[j];
-
+ size = zone_spanned_pages_in_node(nid, j, zones_size);
+ realsize = size - zone_absent_pages_in_node(nid, j,
+ zholes_size);
if (j < ZONE_HIGHMEM)
nr_kernel_pages += realsize;
nr_all_pages += realsize;
@@ -2159,8 +2434,13 @@ static void __init alloc_node_mem_map(st
/*
* With no DISCONTIG, the global mem_map is just set as node 0's
*/
- if (pgdat == NODE_DATA(0))
+ if (pgdat == NODE_DATA(0)) {
mem_map = NODE_DATA(0)->node_mem_map;
+#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
+ if (page_to_pfn(mem_map) != pgdat->node_start_pfn)
+ mem_map -= pgdat->node_start_pfn;
+#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
+ }
#endif
#endif /* CONFIG_FLAT_NODE_MEM_MAP */
}
@@ -2171,13 +2451,240 @@ void __meminit free_area_init_node(int n
{
pgdat->node_id = nid;
pgdat->node_start_pfn = node_start_pfn;
- calculate_zone_totalpages(pgdat, zones_size, zholes_size);
+ calculate_node_totalpages(pgdat, zones_size, zholes_size);
alloc_node_mem_map(pgdat);
free_area_init_core(pgdat, zones_size, zholes_size);
}
+#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
+/**
+ * add_active_range - Register a range of PFNs backed by physical memory
+ * @nid: The node ID the range resides on
+ * @start_pfn: The start PFN of the available physical memory
+ * @end_pfn: The end PFN of the available physical memory
+ *
+ * These ranges are stored in early_node_map[] and later used by
+ * free_area_init_nodes() to calculate zone sizes and holes. If the
+ * range spans a memory hole, it is up to the architecture to ensure
+ * the memory is not freed by the bootmem allocator. If possible
+ * the range being registered will be merged with existing ranges.
+ */
+void __init add_active_range(unsigned int nid, unsigned long start_pfn,
+ unsigned long end_pfn)
+{
+ int i;
+
+ printk(KERN_DEBUG "Entering add_active_range(%d, %lu, %lu) "
+ "%d entries of %d used\n",
+ nid, start_pfn, end_pfn,
+ nr_nodemap_entries, MAX_ACTIVE_REGIONS);
+
+ /* Merge with existing active regions if possible */
+ for (i = 0; i < nr_nodemap_entries; i++) {
+ if (early_node_map[i].nid != nid)
+ continue;
+
+ /* Skip if an existing region covers this new one */
+ if (start_pfn >= early_node_map[i].start_pfn &&
+ end_pfn <= early_node_map[i].end_pfn)
+ return;
+
+ /* Merge forward if suitable */
+ if (start_pfn <= early_node_map[i].end_pfn &&
+ end_pfn > early_node_map[i].end_pfn) {
+ early_node_map[i].end_pfn = end_pfn;
+ return;
+ }
+
+ /* Merge backward if suitable */
+ if (start_pfn < early_node_map[i].end_pfn &&
+ end_pfn >= early_node_map[i].start_pfn) {
+ early_node_map[i].start_pfn = start_pfn;
+ return;
+ }
+ }
+
+ /* Check that early_node_map is large enough */
+ if (i >= MAX_ACTIVE_REGIONS) {
+ printk(KERN_CRIT "More than %d memory regions, truncating\n",
+ MAX_ACTIVE_REGIONS);
+ return;
+ }
+
+ early_node_map[i].nid = nid;
+ early_node_map[i].start_pfn = start_pfn;
+ early_node_map[i].end_pfn = end_pfn;
+ nr_nodemap_entries = i + 1;
+}
+
+/**
+ * shrink_active_range - Shrink an existing registered range of PFNs
+ * @nid: The node id the range is on that should be shrunk
+ * @old_end_pfn: The old end PFN of the range
+ * @new_end_pfn: The new end PFN of the range
+ *
+ * i386 with NUMA uses alloc_remap() to store a node_mem_map on a local node.
+ * The map is kept at the end of the physical page range that has already been
+ * registered with add_active_range(). This function allows an arch to shrink
+ * an existing registered range.
+ */
+void __init shrink_active_range(unsigned int nid, unsigned long old_end_pfn,
+ unsigned long new_end_pfn)
+{
+ int i;
+
+ /* Find the old active region end and shrink */
+ for_each_active_range_index_in_nid(i, nid)
+ if (early_node_map[i].end_pfn == old_end_pfn) {
+ early_node_map[i].end_pfn = new_end_pfn;
+ break;
+ }
+}
+
+/**
+ * remove_all_active_ranges - Remove all currently registered regions
+ * During discovery, it may be found that a table like SRAT is invalid
+ * and an alternative discovery method must be used. This function removes
+ * all currently registered regions.
+ */
+void __init remove_all_active_ranges(void)
+{
+ memset(early_node_map, 0, sizeof(early_node_map));
+ nr_nodemap_entries = 0;
+}
+
+/* Compare two active node_active_regions */
+static int __init cmp_node_active_region(const void *a, const void *b)
+{
+ struct node_active_region *arange = (struct node_active_region *)a;
+ struct node_active_region *brange = (struct node_active_region *)b;
+
+ /* Done this way to avoid overflows */
+ if (arange->start_pfn > brange->start_pfn)
+ return 1;
+ if (arange->start_pfn < brange->start_pfn)
+ return -1;
+
+ return 0;
+}
+
+/* sort the node_map by start_pfn */
+static void __init sort_node_map(void)
+{
+ sort(early_node_map, (size_t)nr_nodemap_entries,
+ sizeof(struct node_active_region),
+ cmp_node_active_region, NULL);
+}
+
+/* Find the lowest pfn for a node. This depends on a sorted early_node_map */
+unsigned long __init find_min_pfn_for_node(unsigned long nid)
+{
+ int i;
+
+ /* Assuming a sorted map, the first range found has the starting pfn */
+ for_each_active_range_index_in_nid(i, nid)
+ return early_node_map[i].start_pfn;
+
+ printk(KERN_WARNING "Could not find start_pfn for node %lu\n", nid);
+ return 0;
+}
+
+/**
+ * find_min_pfn_with_active_regions - Find the minimum PFN registered
+ *
+ * It returns the minimum PFN based on information provided via
+ * add_active_range()
+ */
+unsigned long __init find_min_pfn_with_active_regions(void)
+{
+ return find_min_pfn_for_node(MAX_NUMNODES);
+}
+
+/**
+ * find_max_pfn_with_active_regions - Find the maximum PFN registered
+ *
+ * It returns the maximum PFN based on information provided via
+ * add_active_range()
+ */
+unsigned long __init find_max_pfn_with_active_regions(void)
+{
+ int i;
+ unsigned long max_pfn = 0;
+
+ for (i = 0; i < nr_nodemap_entries; i++)
+ max_pfn = max(max_pfn, early_node_map[i].end_pfn);
+
+ return max_pfn;
+}
+
+/**
+ * free_area_init_nodes - Initialise all pg_data_t and zone data
+ * @arch_max_dma_pfn: The maximum PFN usable for ZONE_DMA
+ * @arch_max_dma32_pfn: The maximum PFN usable for ZONE_DMA32
+ * @arch_max_low_pfn: The maximum PFN usable for ZONE_NORMAL
+ * @arch_max_high_pfn: The maximum PFN usable for ZONE_HIGHMEM
+ *
+ * This will call free_area_init_node() for each active node in the system.
+ * Using the page ranges provided by add_active_range(), the size of each
+ * zone in each node and their holes is calculated. If the maximum PFNs of
+ * two adjacent zones match, it is assumed that the higher zone is empty.
+ * For example, if arch_max_dma_pfn == arch_max_dma32_pfn, it is assumed
+ * that ZONE_DMA32 has no pages. It is also assumed that a zone
+ * starts where the previous one ended. For example, ZONE_DMA32 starts
+ * at arch_max_dma_pfn.
+ */
+void __init free_area_init_nodes(unsigned long arch_max_dma_pfn,
+ unsigned long arch_max_dma32_pfn,
+ unsigned long arch_max_low_pfn,
+ unsigned long arch_max_high_pfn)
+{
+ unsigned long nid;
+ int i;
+
+ /* Record where the zone boundaries are */
+ memset(arch_zone_lowest_possible_pfn, 0,
+ sizeof(arch_zone_lowest_possible_pfn));
+ memset(arch_zone_highest_possible_pfn, 0,
+ sizeof(arch_zone_highest_possible_pfn));
+ arch_zone_lowest_possible_pfn[ZONE_DMA] =
+ find_min_pfn_with_active_regions();
+ arch_zone_highest_possible_pfn[ZONE_DMA] = arch_max_dma_pfn;
+ arch_zone_highest_possible_pfn[ZONE_DMA32] = arch_max_dma32_pfn;
+ arch_zone_highest_possible_pfn[ZONE_NORMAL] = arch_max_low_pfn;
+ arch_zone_highest_possible_pfn[ZONE_HIGHMEM] = arch_max_high_pfn;
+ for (i = 1; i < MAX_NR_ZONES; i++)
+ arch_zone_lowest_possible_pfn[i] =
+ arch_zone_highest_possible_pfn[i-1];
+
+ /* Regions in the early_node_map can be in any order */
+ sort_node_map();
+
+ /* Print out the zone ranges */
+ printk("Zone PFN ranges:\n");
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ printk(" %-8s %8lu -> %8lu\n",
+ zone_names[i],
+ arch_zone_lowest_possible_pfn[i],
+ arch_zone_highest_possible_pfn[i]);
+
+ /* Print out the early_node_map[] */
+ printk("early_node_map[%d] active PFN ranges\n", nr_nodemap_entries);
+ for (i = 0; i < nr_nodemap_entries; i++)
+ printk(" %3d: %8lu -> %8lu\n", early_node_map[i].nid,
+ early_node_map[i].start_pfn,
+ early_node_map[i].end_pfn);
+
+ /* Initialise every node */
+ for_each_online_node(nid) {
+ pg_data_t *pgdat = NODE_DATA(nid);
+ free_area_init_node(nid, pgdat, NULL,
+ find_min_pfn_for_node(nid), NULL);
+ }
+}
+#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
+
#ifndef CONFIG_NEED_MULTIPLE_NODES
static bootmem_data_t contig_bootmem_data;
struct pglist_data contig_page_data = { .bdata = &contig_bootmem_data };
* [PATCH 2/6] Have Power use add_active_range() and free_area_init_nodes()
From: Mel Gorman @ 2006-07-08 11:11 UTC
To: akpm
Cc: davej, tony.luck, Mel Gorman, ak, bob.picco, linux-kernel,
linuxppc-dev, linux-mm
Size zones and holes in an architecture independent manner for Power.
powerpc/Kconfig | 7 --
powerpc/mm/mem.c | 53 ++++++----------
powerpc/mm/numa.c | 157 ++++---------------------------------------------
ppc/Kconfig | 3
ppc/mm/init.c | 26 ++++----
5 files changed, 56 insertions(+), 190 deletions(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/powerpc/Kconfig linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/powerpc/Kconfig
--- linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/powerpc/Kconfig 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/powerpc/Kconfig 2006-07-06 11:06:11.000000000 +0100
@@ -715,11 +715,10 @@ config ARCH_SPARSEMEM_DEFAULT
def_bool y
depends on SMP && PPC_PSERIES
-source "mm/Kconfig"
-
-config HAVE_ARCH_EARLY_PFN_TO_NID
+config ARCH_POPULATES_NODE_MAP
def_bool y
- depends on NEED_MULTIPLE_NODES
+
+source "mm/Kconfig"
config ARCH_MEMORY_PROBE
def_bool y
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/powerpc/mm/mem.c linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/powerpc/mm/mem.c
--- linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/powerpc/mm/mem.c 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/powerpc/mm/mem.c 2006-07-06 11:06:11.000000000 +0100
@@ -256,20 +256,22 @@ void __init do_init_bootmem(void)
boot_mapsize = init_bootmem(start >> PAGE_SHIFT, total_pages);
+ /* Add active regions with valid PFNs */
+ for (i = 0; i < lmb.memory.cnt; i++) {
+ unsigned long start_pfn, end_pfn;
+ start_pfn = lmb.memory.region[i].base >> PAGE_SHIFT;
+ end_pfn = start_pfn + lmb_size_pages(&lmb.memory, i);
+ add_active_range(0, start_pfn, end_pfn);
+ }
+
/* Add all physical memory to the bootmem map, mark each area
* present.
*/
- for (i = 0; i < lmb.memory.cnt; i++) {
- unsigned long base = lmb.memory.region[i].base;
- unsigned long size = lmb_size_bytes(&lmb.memory, i);
#ifdef CONFIG_HIGHMEM
- if (base >= total_lowmem)
- continue;
- if (base + size > total_lowmem)
- size = total_lowmem - base;
+ free_bootmem_with_active_regions(0, total_lowmem >> PAGE_SHIFT);
+#else
+ free_bootmem_with_active_regions(0, max_pfn);
#endif
- free_bootmem(base, size);
- }
/* reserve the sections we're already using */
for (i = 0; i < lmb.reserved.cnt; i++)
@@ -277,9 +279,8 @@ void __init do_init_bootmem(void)
lmb_size_bytes(&lmb.reserved, i));
/* XXX need to clip this if using highmem? */
- for (i = 0; i < lmb.memory.cnt; i++)
- memory_present(0, lmb_start_pfn(&lmb.memory, i),
- lmb_end_pfn(&lmb.memory, i));
+ sparse_memory_present_with_active_regions(0);
+
init_bootmem_done = 1;
}
@@ -288,8 +289,6 @@ void __init do_init_bootmem(void)
*/
void __init paging_init(void)
{
- unsigned long zones_size[MAX_NR_ZONES];
- unsigned long zholes_size[MAX_NR_ZONES];
unsigned long total_ram = lmb_phys_mem_size();
unsigned long top_of_ram = lmb_end_of_DRAM();
@@ -307,26 +306,18 @@ void __init paging_init(void)
top_of_ram, total_ram);
printk(KERN_DEBUG "Memory hole size: %ldMB\n",
(top_of_ram - total_ram) >> 20);
- /*
- * All pages are DMA-able so we put them all in the DMA zone.
- */
- memset(zones_size, 0, sizeof(zones_size));
- memset(zholes_size, 0, sizeof(zholes_size));
-
- zones_size[ZONE_DMA] = top_of_ram >> PAGE_SHIFT;
- zholes_size[ZONE_DMA] = (top_of_ram - total_ram) >> PAGE_SHIFT;
-
#ifdef CONFIG_HIGHMEM
- zones_size[ZONE_DMA] = total_lowmem >> PAGE_SHIFT;
- zones_size[ZONE_HIGHMEM] = (total_memory - total_lowmem) >> PAGE_SHIFT;
- zholes_size[ZONE_HIGHMEM] = (top_of_ram - total_ram) >> PAGE_SHIFT;
+ free_area_init_nodes(total_lowmem >> PAGE_SHIFT,
+ total_lowmem >> PAGE_SHIFT,
+ total_lowmem >> PAGE_SHIFT,
+ top_of_ram >> PAGE_SHIFT);
#else
- zones_size[ZONE_DMA] = top_of_ram >> PAGE_SHIFT;
- zholes_size[ZONE_DMA] = (top_of_ram - total_ram) >> PAGE_SHIFT;
-#endif /* CONFIG_HIGHMEM */
+ free_area_init_nodes(top_of_ram >> PAGE_SHIFT,
+ top_of_ram >> PAGE_SHIFT,
+ top_of_ram >> PAGE_SHIFT,
+ top_of_ram >> PAGE_SHIFT);
+#endif
- free_area_init_node(0, NODE_DATA(0), zones_size,
- __pa(PAGE_OFFSET) >> PAGE_SHIFT, zholes_size);
}
#endif /* ! CONFIG_NEED_MULTIPLE_NODES */
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/powerpc/mm/numa.c linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/powerpc/mm/numa.c
--- linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/powerpc/mm/numa.c 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/powerpc/mm/numa.c 2006-07-06 11:06:11.000000000 +0100
@@ -39,96 +39,6 @@ static bootmem_data_t __initdata plat_no
static int min_common_depth;
static int n_mem_addr_cells, n_mem_size_cells;
-/*
- * We need somewhere to store start/end/node for each region until we have
- * allocated the real node_data structures.
- */
-#define MAX_REGIONS (MAX_LMB_REGIONS*2)
-static struct {
- unsigned long start_pfn;
- unsigned long end_pfn;
- int nid;
-} init_node_data[MAX_REGIONS] __initdata;
-
-int __init early_pfn_to_nid(unsigned long pfn)
-{
- unsigned int i;
-
- for (i = 0; init_node_data[i].end_pfn; i++) {
- unsigned long start_pfn = init_node_data[i].start_pfn;
- unsigned long end_pfn = init_node_data[i].end_pfn;
-
- if ((start_pfn <= pfn) && (pfn < end_pfn))
- return init_node_data[i].nid;
- }
-
- return -1;
-}
-
-void __init add_region(unsigned int nid, unsigned long start_pfn,
- unsigned long pages)
-{
- unsigned int i;
-
- dbg("add_region nid %d start_pfn 0x%lx pages 0x%lx\n",
- nid, start_pfn, pages);
-
- for (i = 0; init_node_data[i].end_pfn; i++) {
- if (init_node_data[i].nid != nid)
- continue;
- if (init_node_data[i].end_pfn == start_pfn) {
- init_node_data[i].end_pfn += pages;
- return;
- }
- if (init_node_data[i].start_pfn == (start_pfn + pages)) {
- init_node_data[i].start_pfn -= pages;
- return;
- }
- }
-
- /*
- * Leave last entry NULL so we dont iterate off the end (we use
- * entry.end_pfn to terminate the walk).
- */
- if (i >= (MAX_REGIONS - 1)) {
- printk(KERN_ERR "WARNING: too many memory regions in "
- "numa code, truncating\n");
- return;
- }
-
- init_node_data[i].start_pfn = start_pfn;
- init_node_data[i].end_pfn = start_pfn + pages;
- init_node_data[i].nid = nid;
-}
-
-/* We assume init_node_data has no overlapping regions */
-void __init get_region(unsigned int nid, unsigned long *start_pfn,
- unsigned long *end_pfn, unsigned long *pages_present)
-{
- unsigned int i;
-
- *start_pfn = -1UL;
- *end_pfn = *pages_present = 0;
-
- for (i = 0; init_node_data[i].end_pfn; i++) {
- if (init_node_data[i].nid != nid)
- continue;
-
- *pages_present += init_node_data[i].end_pfn -
- init_node_data[i].start_pfn;
-
- if (init_node_data[i].start_pfn < *start_pfn)
- *start_pfn = init_node_data[i].start_pfn;
-
- if (init_node_data[i].end_pfn > *end_pfn)
- *end_pfn = init_node_data[i].end_pfn;
- }
-
- /* We didnt find a matching region, return start/end as 0 */
- if (*start_pfn == -1UL)
- *start_pfn = 0;
-}
-
static void __cpuinit map_cpu_to_node(int cpu, int node)
{
numa_cpu_lookup_table[cpu] = node;
@@ -471,8 +381,8 @@ new_range:
continue;
}
- add_region(nid, start >> PAGE_SHIFT,
- size >> PAGE_SHIFT);
+ add_active_range(nid, start >> PAGE_SHIFT,
+ (start >> PAGE_SHIFT) + (size >> PAGE_SHIFT));
if (--ranges)
goto new_range;
@@ -485,6 +395,7 @@ static void __init setup_nonnuma(void)
{
unsigned long top_of_ram = lmb_end_of_DRAM();
unsigned long total_ram = lmb_phys_mem_size();
+ unsigned long start_pfn, end_pfn;
unsigned int i;
printk(KERN_DEBUG "Top of RAM: 0x%lx, Total RAM: 0x%lx\n",
@@ -492,9 +403,11 @@ static void __init setup_nonnuma(void)
printk(KERN_DEBUG "Memory hole size: %ldMB\n",
(top_of_ram - total_ram) >> 20);
- for (i = 0; i < lmb.memory.cnt; ++i)
- add_region(0, lmb.memory.region[i].base >> PAGE_SHIFT,
- lmb_size_pages(&lmb.memory, i));
+ for (i = 0; i < lmb.memory.cnt; ++i) {
+ start_pfn = lmb.memory.region[i].base >> PAGE_SHIFT;
+ end_pfn = start_pfn + lmb_size_pages(&lmb.memory, i);
+ add_active_range(0, start_pfn, end_pfn);
+ }
node_set_online(0);
}
@@ -633,11 +546,11 @@ void __init do_init_bootmem(void)
(void *)(unsigned long)boot_cpuid);
for_each_online_node(nid) {
- unsigned long start_pfn, end_pfn, pages_present;
+ unsigned long start_pfn, end_pfn;
unsigned long bootmem_paddr;
unsigned long bootmap_pages;
- get_region(nid, &start_pfn, &end_pfn, &pages_present);
+ get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
/* Allocate the node structure node local if possible */
NODE_DATA(nid) = careful_allocation(nid,
@@ -670,19 +583,7 @@ void __init do_init_bootmem(void)
init_bootmem_node(NODE_DATA(nid), bootmem_paddr >> PAGE_SHIFT,
start_pfn, end_pfn);
- /* Add free regions on this node */
- for (i = 0; init_node_data[i].end_pfn; i++) {
- unsigned long start, end;
-
- if (init_node_data[i].nid != nid)
- continue;
-
- start = init_node_data[i].start_pfn << PAGE_SHIFT;
- end = init_node_data[i].end_pfn << PAGE_SHIFT;
-
- dbg("free_bootmem %lx %lx\n", start, end - start);
- free_bootmem_node(NODE_DATA(nid), start, end - start);
- }
+ free_bootmem_with_active_regions(nid, end_pfn);
/* Mark reserved regions on this node */
for (i = 0; i < lmb.reserved.cnt; i++) {
@@ -713,44 +614,14 @@ void __init do_init_bootmem(void)
}
}
- /* Add regions into sparsemem */
- for (i = 0; init_node_data[i].end_pfn; i++) {
- unsigned long start, end;
-
- if (init_node_data[i].nid != nid)
- continue;
-
- start = init_node_data[i].start_pfn;
- end = init_node_data[i].end_pfn;
-
- memory_present(nid, start, end);
- }
+ sparse_memory_present_with_active_regions(nid);
}
}
void __init paging_init(void)
{
- unsigned long zones_size[MAX_NR_ZONES];
- unsigned long zholes_size[MAX_NR_ZONES];
- int nid;
-
- memset(zones_size, 0, sizeof(zones_size));
- memset(zholes_size, 0, sizeof(zholes_size));
-
- for_each_online_node(nid) {
- unsigned long start_pfn, end_pfn, pages_present;
-
- get_region(nid, &start_pfn, &end_pfn, &pages_present);
-
- zones_size[ZONE_DMA] = end_pfn - start_pfn;
- zholes_size[ZONE_DMA] = zones_size[ZONE_DMA] - pages_present;
-
- dbg("free_area_init node %d %lx %lx (hole: %lx)\n", nid,
- zones_size[ZONE_DMA], start_pfn, zholes_size[ZONE_DMA]);
-
- free_area_init_node(nid, NODE_DATA(nid), zones_size, start_pfn,
- zholes_size);
- }
+ unsigned long end_pfn = lmb_end_of_DRAM() >> PAGE_SHIFT;
+ free_area_init_nodes(end_pfn, end_pfn, end_pfn, end_pfn);
}
static int __init early_numa(char *p)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/ppc/Kconfig linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/ppc/Kconfig
--- linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/ppc/Kconfig 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/ppc/Kconfig 2006-07-06 11:06:11.000000000 +0100
@@ -953,6 +953,9 @@ config NR_CPUS
config HIGHMEM
bool "High memory support"
+config ARCH_POPULATES_NODE_MAP
+ def_bool y
+
source kernel/Kconfig.hz
source kernel/Kconfig.preempt
source "mm/Kconfig"
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/ppc/mm/init.c linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/ppc/mm/init.c
--- linux-2.6.17-mm6-101-add_free_area_init_nodes/arch/ppc/mm/init.c 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/ppc/mm/init.c 2006-07-06 11:06:11.000000000 +0100
@@ -358,8 +358,7 @@ void __init do_init_bootmem(void)
*/
void __init paging_init(void)
{
- unsigned long zones_size[MAX_NR_ZONES], i;
-
+ unsigned long start_pfn, end_pfn;
#ifdef CONFIG_HIGHMEM
map_page(PKMAP_BASE, 0, 0); /* XXX gross */
pkmap_page_table = pte_offset_kernel(pmd_offset(pgd_offset_k
@@ -369,19 +368,22 @@ void __init paging_init(void)
(KMAP_FIX_BEGIN), KMAP_FIX_BEGIN), KMAP_FIX_BEGIN);
kmap_prot = PAGE_KERNEL;
#endif /* CONFIG_HIGHMEM */
-
- /*
- * All pages are DMA-able so we put them all in the DMA zone.
- */
- zones_size[ZONE_DMA] = total_lowmem >> PAGE_SHIFT;
- for (i = 1; i < MAX_NR_ZONES; i++)
- zones_size[i] = 0;
+ /* All pages are DMA-able so we put them all in the DMA zone. */
+ start_pfn = __pa(PAGE_OFFSET) >> PAGE_SHIFT;
+ end_pfn = start_pfn + (total_memory >> PAGE_SHIFT);
+ add_active_range(0, start_pfn, end_pfn);
#ifdef CONFIG_HIGHMEM
- zones_size[ZONE_HIGHMEM] = (total_memory - total_lowmem) >> PAGE_SHIFT;
+ free_area_init_nodes(total_lowmem >> PAGE_SHIFT,
+ total_lowmem >> PAGE_SHIFT,
+ total_lowmem >> PAGE_SHIFT,
+ total_memory >> PAGE_SHIFT);
+#else
+ free_area_init_nodes(total_memory >> PAGE_SHIFT,
+ total_memory >> PAGE_SHIFT,
+ total_memory >> PAGE_SHIFT,
+ total_memory >> PAGE_SHIFT);
#endif /* CONFIG_HIGHMEM */
-
- free_area_init(zones_size);
}
void __init mem_init(void)
* [PATCH 3/6] Have x86 use add_active_range() and free_area_init_nodes
From: Mel Gorman @ 2006-07-08 11:11 UTC
To: akpm
Cc: davej, tony.luck, Mel Gorman, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
Size zones and holes in an architecture independent manner for x86.
Kconfig | 8 +---
kernel/setup.c | 19 +++------
kernel/srat.c | 100 +---------------------------------------------------
mm/discontig.c | 65 +++++++--------------------------
4 files changed, 25 insertions(+), 167 deletions(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/Kconfig linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/Kconfig
--- linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/Kconfig 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/Kconfig 2006-07-06 11:08:03.000000000 +0100
@@ -603,12 +603,10 @@ config ARCH_SELECT_MEMORY_MODEL
def_bool y
depends on ARCH_SPARSEMEM_ENABLE
-source "mm/Kconfig"
+config ARCH_POPULATES_NODE_MAP
+ def_bool y
-config HAVE_ARCH_EARLY_PFN_TO_NID
- bool
- default y
- depends on NUMA
+source "mm/Kconfig"
config HIGHPTE
bool "Allocate 3rd-level pagetables from highmem"
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/kernel/setup.c linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/kernel/setup.c
--- linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/kernel/setup.c 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/kernel/setup.c 2006-07-06 11:08:03.000000000 +0100
@@ -1201,22 +1201,15 @@ static unsigned long __init setup_memory
void __init zone_sizes_init(void)
{
- unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
- unsigned int max_dma, low;
+ unsigned int max_dma;
+#ifndef CONFIG_HIGHMEM
+ unsigned long highend_pfn = max_low_pfn;
+#endif
max_dma = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
- low = max_low_pfn;
- if (low < max_dma)
- zones_size[ZONE_DMA] = low;
- else {
- zones_size[ZONE_DMA] = max_dma;
- zones_size[ZONE_NORMAL] = low - max_dma;
-#ifdef CONFIG_HIGHMEM
- zones_size[ZONE_HIGHMEM] = highend_pfn - low;
-#endif
- }
- free_area_init(zones_size);
+ add_active_range(0, 0, highend_pfn);
+ free_area_init_nodes(max_dma, max_dma, max_low_pfn, highend_pfn);
}
#else
extern unsigned long __init setup_memory(void);
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/kernel/srat.c linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/kernel/srat.c
--- linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/kernel/srat.c 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/kernel/srat.c 2006-07-06 11:08:03.000000000 +0100
@@ -54,8 +54,6 @@ struct node_memory_chunk_s {
static struct node_memory_chunk_s node_memory_chunk[MAXCHUNKS];
static int num_memory_chunks; /* total number of memory chunks */
-static int zholes_size_init;
-static unsigned long zholes_size[MAX_NUMNODES * MAX_NR_ZONES];
extern void * boot_ioremap(unsigned long, unsigned long);
@@ -135,50 +133,6 @@ static void __init parse_memory_affinity
"enabled and removable" : "enabled" ) );
}
-#if MAX_NR_ZONES != 4
-#error "MAX_NR_ZONES != 4, chunk_to_zone requires review"
-#endif
-/* Take a chunk of pages from page frame cstart to cend and count the number
- * of pages in each zone, returned via zones[].
- */
-static __init void chunk_to_zones(unsigned long cstart, unsigned long cend,
- unsigned long *zones)
-{
- unsigned long max_dma;
- extern unsigned long max_low_pfn;
-
- int z;
- unsigned long rend;
-
- /* FIXME: MAX_DMA_ADDRESS and max_low_pfn are trying to provide
- * similarly scoped information and should be handled in a consistant
- * manner.
- */
- max_dma = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
-
- /* Split the hole into the zones in which it falls. Repeatedly
- * take the segment in which the remaining hole starts, round it
- * to the end of that zone.
- */
- memset(zones, 0, MAX_NR_ZONES * sizeof(long));
- while (cstart < cend) {
- if (cstart < max_dma) {
- z = ZONE_DMA;
- rend = (cend < max_dma)? cend : max_dma;
-
- } else if (cstart < max_low_pfn) {
- z = ZONE_NORMAL;
- rend = (cend < max_low_pfn)? cend : max_low_pfn;
-
- } else {
- z = ZONE_HIGHMEM;
- rend = cend;
- }
- zones[z] += rend - cstart;
- cstart = rend;
- }
-}
-
/*
* The SRAT table always lists ascending addresses, so can always
* assume that the first "start" address that you see is the real
@@ -223,7 +177,6 @@ static int __init acpi20_parse_srat(stru
memset(pxm_bitmap, 0, sizeof(pxm_bitmap)); /* init proximity domain bitmap */
memset(node_memory_chunk, 0, sizeof(node_memory_chunk));
- memset(zholes_size, 0, sizeof(zholes_size));
num_memory_chunks = 0;
while (p < end) {
@@ -287,6 +240,7 @@ static int __init acpi20_parse_srat(stru
printk("chunk %d nid %d start_pfn %08lx end_pfn %08lx\n",
j, chunk->nid, chunk->start_pfn, chunk->end_pfn);
node_read_chunk(chunk->nid, chunk);
+ add_active_range(chunk->nid, chunk->start_pfn, chunk->end_pfn);
}
for_each_online_node(nid) {
@@ -403,57 +357,7 @@ int __init get_memcfg_from_srat(void)
return acpi20_parse_srat((struct acpi_table_srat *)header);
}
out_err:
+ remove_all_active_ranges();
printk("failed to get NUMA memory information from SRAT table\n");
return 0;
}
-
-/* For each node run the memory list to determine whether there are
- * any memory holes. For each hole determine which ZONE they fall
- * into.
- *
- * NOTE#1: this requires knowledge of the zone boundries and so
- * _cannot_ be performed before those are calculated in setup_memory.
- *
- * NOTE#2: we rely on the fact that the memory chunks are ordered by
- * start pfn number during setup.
- */
-static void __init get_zholes_init(void)
-{
- int nid;
- int c;
- int first;
- unsigned long end = 0;
-
- for_each_online_node(nid) {
- first = 1;
- for (c = 0; c < num_memory_chunks; c++){
- if (node_memory_chunk[c].nid == nid) {
- if (first) {
- end = node_memory_chunk[c].end_pfn;
- first = 0;
-
- } else {
- /* Record any gap between this chunk
- * and the previous chunk on this node
- * against the zones it spans.
- */
- chunk_to_zones(end,
- node_memory_chunk[c].start_pfn,
- &zholes_size[nid * MAX_NR_ZONES]);
- }
- }
- }
- }
-}
-
-unsigned long * __init get_zholes_size(int nid)
-{
- if (!zholes_size_init) {
- zholes_size_init++;
- get_zholes_init();
- }
- if (nid >= MAX_NUMNODES || !node_online(nid))
- printk("%s: nid = %d is invalid/offline. num_online_nodes = %d",
- __FUNCTION__, nid, num_online_nodes());
- return &zholes_size[nid * MAX_NR_ZONES];
-}
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/mm/discontig.c linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/mm/discontig.c
--- linux-2.6.17-mm6-102-powerpc_use_init_nodes/arch/i386/mm/discontig.c 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-103-x86_use_init_nodes/arch/i386/mm/discontig.c 2006-07-06 11:08:03.000000000 +0100
@@ -156,21 +156,6 @@ static void __init find_max_pfn_node(int
BUG();
}
-/* Find the owning node for a pfn. */
-int early_pfn_to_nid(unsigned long pfn)
-{
- int nid;
-
- for_each_node(nid) {
- if (node_end_pfn[nid] == 0)
- break;
- if (node_start_pfn[nid] <= pfn && node_end_pfn[nid] >= pfn)
- return nid;
- }
-
- return 0;
-}
-
/*
* Allocate memory for the pg_data_t for this node via a crude pre-bootmem
* method. For node zero take this from the bottom of memory, for
@@ -226,6 +211,8 @@ static unsigned long calculate_numa_rema
unsigned long pfn;
for_each_online_node(nid) {
+ unsigned long old_end_pfn = node_end_pfn[nid];
+
/*
* The acpi/srat node info can show hot-add memroy zones
* where memory could be added but not currently present.
@@ -275,6 +262,7 @@ static unsigned long calculate_numa_rema
node_end_pfn[nid] -= size;
node_remap_start_pfn[nid] = node_end_pfn[nid];
+ shrink_active_range(nid, old_end_pfn, node_end_pfn[nid]);
}
printk("Reserving total of %ld pages for numa KVA remap\n",
reserve_pages);
@@ -351,45 +339,20 @@ unsigned long __init setup_memory(void)
void __init zone_sizes_init(void)
{
int nid;
+ unsigned long max_dma_pfn;
-
- for_each_online_node(nid) {
- unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
- unsigned long *zholes_size;
- unsigned int max_dma;
-
- unsigned long low = max_low_pfn;
- unsigned long start = node_start_pfn[nid];
- unsigned long high = node_end_pfn[nid];
-
- max_dma = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
-
- if (node_has_online_mem(nid)){
- if (start > low) {
-#ifdef CONFIG_HIGHMEM
- BUG_ON(start > high);
- zones_size[ZONE_HIGHMEM] = high - start;
-#endif
- } else {
- if (low < max_dma)
- zones_size[ZONE_DMA] = low;
- else {
- BUG_ON(max_dma > low);
- BUG_ON(low > high);
- zones_size[ZONE_DMA] = max_dma;
- zones_size[ZONE_NORMAL] = low - max_dma;
-#ifdef CONFIG_HIGHMEM
- zones_size[ZONE_HIGHMEM] = high - low;
-#endif
- }
- }
+ /* If SRAT has not registered memory, register it now */
+ if (find_max_pfn_with_active_regions() == 0) {
+ for_each_online_node(nid) {
+ if (node_has_online_mem(nid))
+ add_active_range(nid, node_start_pfn[nid],
+ node_end_pfn[nid]);
}
-
- zholes_size = get_zholes_size(nid);
-
- free_area_init_node(nid, NODE_DATA(nid), zones_size, start,
- zholes_size);
}
+
+ max_dma_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
+ free_area_init_nodes(max_dma_pfn, max_dma_pfn,
+ max_low_pfn, highend_pfn);
return;
}
* [PATCH 4/6] Have x86_64 use add_active_range() and free_area_init_nodes
From: Mel Gorman @ 2006-07-08 11:12 UTC
To: akpm
Cc: davej, tony.luck, Mel Gorman, ak, bob.picco, linux-kernel,
linuxppc-dev, linux-mm
Size zones and holes in an architecture independent manner for x86_64.
arch/x86_64/Kconfig | 3
arch/x86_64/kernel/e820.c | 125 ++++++++++++++-------------------------
arch/x86_64/kernel/setup.c | 7 +-
arch/x86_64/mm/init.c | 62 -------------------
arch/x86_64/mm/k8topology.c | 3
arch/x86_64/mm/numa.c | 18 ++---
arch/x86_64/mm/srat.c | 11 ++-
include/asm-x86_64/e820.h | 5 -
include/asm-x86_64/proto.h | 2
9 files changed, 79 insertions(+), 157 deletions(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
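The conversion boils down to three calls; a rough sketch of the flow this
patch sets up (the nid/start_pfn/end_pfn values are illustrative, the real
call sites are in the hunks below):

	/* Walk the e820 map and register RAM ranges for a node */
	e820_register_active_regions(nid, start_pfn, end_pfn);

	/* If SRAT parsing fails, discard the ranges and rediscover */
	remove_all_active_ranges();

	/* Core code sizes zones and holes from the registered ranges */
	free_area_init_nodes(MAX_DMA_PFN, MAX_DMA32_PFN, end_pfn, end_pfn);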
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/Kconfig linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/Kconfig
--- linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/Kconfig 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/Kconfig 2006-07-06 11:09:46.000000000 +0100
@@ -81,6 +81,9 @@ config ARCH_MAY_HAVE_PC_FDC
bool
default y
+config ARCH_POPULATES_NODE_MAP
+ def_bool y
+
config DMI
bool
default y
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/kernel/e820.c linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/kernel/e820.c
--- linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/kernel/e820.c 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/kernel/e820.c 2006-07-06 11:09:46.000000000 +0100
@@ -16,6 +16,7 @@
#include <linux/string.h>
#include <linux/kexec.h>
#include <linux/module.h>
+#include <linux/mm.h>
#include <asm/page.h>
#include <asm/e820.h>
@@ -159,58 +160,14 @@ unsigned long __init find_e820_area(unsi
return -1UL;
}
-/*
- * Free bootmem based on the e820 table for a node.
- */
-void __init e820_bootmem_free(pg_data_t *pgdat, unsigned long start,unsigned long end)
-{
- int i;
- for (i = 0; i < e820.nr_map; i++) {
- struct e820entry *ei = &e820.map[i];
- unsigned long last, addr;
-
- if (ei->type != E820_RAM ||
- ei->addr+ei->size <= start ||
- ei->addr >= end)
- continue;
-
- addr = round_up(ei->addr, PAGE_SIZE);
- if (addr < start)
- addr = start;
-
- last = round_down(ei->addr + ei->size, PAGE_SIZE);
- if (last >= end)
- last = end;
-
- if (last > addr && last-addr >= PAGE_SIZE)
- free_bootmem_node(pgdat, addr, last-addr);
- }
-}
-
/*
* Find the highest page frame number we have available
*/
unsigned long __init e820_end_of_ram(void)
{
- int i;
unsigned long end_pfn = 0;
- for (i = 0; i < e820.nr_map; i++) {
- struct e820entry *ei = &e820.map[i];
- unsigned long start, end;
-
- start = round_up(ei->addr, PAGE_SIZE);
- end = round_down(ei->addr + ei->size, PAGE_SIZE);
- if (start >= end)
- continue;
- if (ei->type == E820_RAM) {
- if (end > end_pfn<<PAGE_SHIFT)
- end_pfn = end>>PAGE_SHIFT;
- } else {
- if (end > end_pfn_map<<PAGE_SHIFT)
- end_pfn_map = end>>PAGE_SHIFT;
- }
- }
+ end_pfn = find_max_pfn_with_active_regions();
if (end_pfn > end_pfn_map)
end_pfn_map = end_pfn;
@@ -221,43 +178,10 @@ unsigned long __init e820_end_of_ram(voi
if (end_pfn > end_pfn_map)
end_pfn = end_pfn_map;
+ printk("end_pfn_map = %lu\n", end_pfn_map);
return end_pfn;
}
-/*
- * Compute how much memory is missing in a range.
- * Unlike the other functions in this file the arguments are in page numbers.
- */
-unsigned long __init
-e820_hole_size(unsigned long start_pfn, unsigned long end_pfn)
-{
- unsigned long ram = 0;
- unsigned long start = start_pfn << PAGE_SHIFT;
- unsigned long end = end_pfn << PAGE_SHIFT;
- int i;
- for (i = 0; i < e820.nr_map; i++) {
- struct e820entry *ei = &e820.map[i];
- unsigned long last, addr;
-
- if (ei->type != E820_RAM ||
- ei->addr+ei->size <= start ||
- ei->addr >= end)
- continue;
-
- addr = round_up(ei->addr, PAGE_SIZE);
- if (addr < start)
- addr = start;
-
- last = round_down(ei->addr + ei->size, PAGE_SIZE);
- if (last >= end)
- last = end;
-
- if (last > addr)
- ram += last - addr;
- }
- return ((end - start) - ram) >> PAGE_SHIFT;
-}
-
/*
* Mark e820 reserved areas as busy for the resource manager.
*/
@@ -292,6 +216,49 @@ void __init e820_reserve_resources(void)
}
}
+/* Walk the e820 map and register active regions within a node */
+void __init
+e820_register_active_regions(int nid, unsigned long start_pfn,
+ unsigned long end_pfn)
+{
+ int i;
+ unsigned long ei_startpfn, ei_endpfn;
+ for (i = 0; i < e820.nr_map; i++) {
+ struct e820entry *ei = &e820.map[i];
+ ei_startpfn = round_up(ei->addr, PAGE_SIZE) >> PAGE_SHIFT;
+ ei_endpfn = round_down(ei->addr + ei->size, PAGE_SIZE)
+ >> PAGE_SHIFT;
+
+ /* Skip map entries smaller than a page */
+ if (ei_startpfn > ei_endpfn)
+ continue;
+
+ /* Check if end_pfn_map should be updated */
+ if (ei->type != E820_RAM && ei_endpfn > end_pfn_map)
+ end_pfn_map = ei_endpfn;
+
+ /* Skip if map is outside the node */
+ if (ei->type != E820_RAM ||
+ ei_endpfn <= start_pfn ||
+ ei_startpfn >= end_pfn)
+ continue;
+
+ /* Check for overlaps */
+ if (ei_startpfn < start_pfn)
+ ei_startpfn = start_pfn;
+ if (ei_endpfn > end_pfn)
+ ei_endpfn = end_pfn;
+
+ /* Obey end_user_pfn to save on memmap */
+ if (ei_startpfn >= end_user_pfn)
+ continue;
+ if (ei_endpfn > end_user_pfn)
+ ei_endpfn = end_user_pfn;
+
+ add_active_range(nid, ei_startpfn, ei_endpfn);
+ }
+}
+
/*
* Add a memory region to the kernel e820 map.
*/
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/kernel/setup.c linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/kernel/setup.c
--- linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/kernel/setup.c 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/kernel/setup.c 2006-07-06 11:09:46.000000000 +0100
@@ -466,7 +466,8 @@ contig_initmem_init(unsigned long start_
if (bootmap == -1L)
panic("Cannot find bootmem map of size %ld\n",bootmap_size);
bootmap_size = init_bootmem(bootmap >> PAGE_SHIFT, end_pfn);
- e820_bootmem_free(NODE_DATA(0), 0, end_pfn << PAGE_SHIFT);
+ e820_register_active_regions(0, start_pfn, end_pfn);
+ free_bootmem_with_active_regions(0, end_pfn);
reserve_bootmem(bootmap, bootmap_size);
}
#endif
@@ -546,6 +547,7 @@ void __init setup_arch(char **cmdline_p)
early_identify_cpu(&boot_cpu_data);
+ e820_register_active_regions(0, 0, -1UL);
/*
* partially used pages are not usable - thus
* we are rounding upwards:
@@ -571,6 +573,9 @@ void __init setup_arch(char **cmdline_p)
acpi_boot_table_init();
#endif
+ /* Remove active ranges so rediscovery with NUMA-awareness happens */
+ remove_all_active_ranges();
+
#ifdef CONFIG_ACPI_NUMA
/*
* Parse SRAT to discover nodes.
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/init.c linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/init.c
--- linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/init.c 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/init.c 2006-07-06 11:09:46.000000000 +0100
@@ -403,69 +403,12 @@ void __cpuinit zap_low_mappings(int cpu)
__flush_tlb_all();
}
-/* Compute zone sizes for the DMA and DMA32 zones in a node. */
-__init void
-size_zones(unsigned long *z, unsigned long *h,
- unsigned long start_pfn, unsigned long end_pfn)
-{
- int i;
- unsigned long w;
-
- for (i = 0; i < MAX_NR_ZONES; i++)
- z[i] = 0;
-
- if (start_pfn < MAX_DMA_PFN)
- z[ZONE_DMA] = MAX_DMA_PFN - start_pfn;
- if (start_pfn < MAX_DMA32_PFN) {
- unsigned long dma32_pfn = MAX_DMA32_PFN;
- if (dma32_pfn > end_pfn)
- dma32_pfn = end_pfn;
- z[ZONE_DMA32] = dma32_pfn - start_pfn;
- }
- z[ZONE_NORMAL] = end_pfn - start_pfn;
-
- /* Remove lower zones from higher ones. */
- w = 0;
- for (i = 0; i < MAX_NR_ZONES; i++) {
- if (z[i])
- z[i] -= w;
- w += z[i];
- }
-
- /* Compute holes */
- w = start_pfn;
- for (i = 0; i < MAX_NR_ZONES; i++) {
- unsigned long s = w;
- w += z[i];
- h[i] = e820_hole_size(s, w);
- }
-
- /* Add the space pace needed for mem_map to the holes too. */
- for (i = 0; i < MAX_NR_ZONES; i++)
- h[i] += (z[i] * sizeof(struct page)) / PAGE_SIZE;
-
- /* The 16MB DMA zone has the kernel and other misc mappings.
- Account them too */
- if (h[ZONE_DMA]) {
- h[ZONE_DMA] += dma_reserve;
- if (h[ZONE_DMA] >= z[ZONE_DMA]) {
- printk(KERN_WARNING
- "Kernel too large and filling up ZONE_DMA?\n");
- h[ZONE_DMA] = z[ZONE_DMA];
- }
- }
-}
-
#ifndef CONFIG_NUMA
void __init paging_init(void)
{
- unsigned long zones[MAX_NR_ZONES], holes[MAX_NR_ZONES];
-
memory_present(0, 0, end_pfn);
sparse_init();
- size_zones(zones, holes, 0, end_pfn);
- free_area_init_node(0, NODE_DATA(0), zones,
- __pa(PAGE_OFFSET) >> PAGE_SHIFT, holes);
+ free_area_init_nodes(MAX_DMA_PFN, MAX_DMA32_PFN, end_pfn, end_pfn);
}
#endif
@@ -614,7 +557,8 @@ void __init mem_init(void)
#else
totalram_pages = free_all_bootmem();
#endif
- reservedpages = end_pfn - totalram_pages - e820_hole_size(0, end_pfn);
+ reservedpages = end_pfn - totalram_pages -
+ absent_pages_in_range(0, end_pfn);
after_bootmem = 1;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/k8topology.c linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/k8topology.c
--- linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/k8topology.c 2006-06-18 02:49:35.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/k8topology.c 2006-07-06 11:09:46.000000000 +0100
@@ -146,6 +146,9 @@ int __init k8_scan_nodes(unsigned long s
nodes[nodeid].start = base;
nodes[nodeid].end = limit;
+ e820_register_active_regions(nodeid,
+ nodes[nodeid].start >> PAGE_SHIFT,
+ nodes[nodeid].end >> PAGE_SHIFT);
prevbase = base;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/numa.c linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/numa.c
--- linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/numa.c 2006-06-18 02:49:35.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/numa.c 2006-07-06 11:09:46.000000000 +0100
@@ -161,7 +161,7 @@ void __init setup_node_bootmem(int nodei
bootmap_start >> PAGE_SHIFT,
start_pfn, end_pfn);
- e820_bootmem_free(NODE_DATA(nodeid), start, end);
+ free_bootmem_with_active_regions(nodeid, end);
reserve_bootmem_node(NODE_DATA(nodeid), nodedata_phys, pgdat_size);
reserve_bootmem_node(NODE_DATA(nodeid), bootmap_start, bootmap_pages<<PAGE_SHIFT);
@@ -175,13 +175,11 @@ void __init setup_node_bootmem(int nodei
void __init setup_node_zones(int nodeid)
{
unsigned long start_pfn, end_pfn, memmapsize, limit;
- unsigned long zones[MAX_NR_ZONES];
- unsigned long holes[MAX_NR_ZONES];
start_pfn = node_start_pfn(nodeid);
end_pfn = node_end_pfn(nodeid);
- Dprintk(KERN_INFO "Setting up node %d %lx-%lx\n",
+ Dprintk(KERN_INFO "Setting up memmap for node %d %lx-%lx\n",
nodeid, start_pfn, end_pfn);
/* Try to allocate mem_map at end to not fill up precious <4GB
@@ -195,10 +193,6 @@ void __init setup_node_zones(int nodeid)
round_down(limit - memmapsize, PAGE_SIZE),
limit);
#endif
-
- size_zones(zones, holes, start_pfn, end_pfn);
- free_area_init_node(nodeid, NODE_DATA(nodeid), zones,
- start_pfn, holes);
}
void __init numa_init_array(void)
@@ -259,8 +253,11 @@ static int numa_emulation(unsigned long
printk(KERN_ERR "No NUMA hash function found. Emulation disabled.\n");
return -1;
}
- for_each_online_node(i)
+ for_each_online_node(i) {
+ e820_register_active_regions(i, nodes[i].start >> PAGE_SHIFT,
+ nodes[i].end >> PAGE_SHIFT);
setup_node_bootmem(i, nodes[i].start, nodes[i].end);
+ }
numa_init_array();
return 0;
}
@@ -299,6 +296,7 @@ void __init numa_initmem_init(unsigned l
for (i = 0; i < NR_CPUS; i++)
numa_set_node(i, 0);
node_to_cpumask[0] = cpumask_of_cpu(0);
+ e820_register_active_regions(0, start_pfn, end_pfn);
setup_node_bootmem(0, start_pfn << PAGE_SHIFT, end_pfn << PAGE_SHIFT);
}
@@ -346,6 +344,8 @@ void __init paging_init(void)
for_each_online_node(i) {
setup_node_zones(i);
}
+
+ free_area_init_nodes(MAX_DMA_PFN, MAX_DMA32_PFN, end_pfn, end_pfn);
}
/* [numa=off] */
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/srat.c linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/srat.c
--- linux-2.6.17-mm6-103-x86_use_init_nodes/arch/x86_64/mm/srat.c 2006-07-05 14:31:12.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/x86_64/mm/srat.c 2006-07-06 11:09:46.000000000 +0100
@@ -91,6 +91,7 @@ static __init void bad_srat(void)
apicid_to_node[i] = NUMA_NO_NODE;
for (i = 0; i < MAX_NUMNODES; i++)
nodes_add[i].start = nodes[i].end = 0;
+ remove_all_active_ranges();
}
static __init inline int srat_disabled(void)
@@ -173,7 +174,7 @@ static int hotadd_enough_memory(struct b
if (mem < 0)
return 0;
- allowed = (end_pfn - e820_hole_size(0, end_pfn)) * PAGE_SIZE;
+ allowed = (end_pfn - absent_pages_in_range(0, end_pfn)) * PAGE_SIZE;
allowed = (allowed / 100) * hotadd_percent;
if (allocated + mem > allowed) {
unsigned long range;
@@ -223,7 +224,7 @@ static int reserve_hotadd(int node, unsi
}
/* This check might be a bit too strict, but I'm keeping it for now. */
- if (e820_hole_size(s_pfn, e_pfn) != e_pfn - s_pfn) {
+ if (absent_pages_in_range(s_pfn, e_pfn) != e_pfn - s_pfn) {
printk(KERN_ERR "SRAT: Hotplug area has existing memory\n");
return -1;
}
@@ -317,6 +318,8 @@ acpi_numa_memory_affinity_init(struct ac
printk(KERN_INFO "SRAT: Node %u PXM %u %Lx-%Lx\n", node, pxm,
nd->start, nd->end);
+ e820_register_active_regions(node, nd->start >> PAGE_SHIFT,
+ nd->end >> PAGE_SHIFT);
#ifdef RESERVE_HOTADD
if (ma->flags.hot_pluggable && reserve_hotadd(node, start, end) < 0) {
@@ -341,13 +344,13 @@ static int nodes_cover_memory(void)
unsigned long s = nodes[i].start >> PAGE_SHIFT;
unsigned long e = nodes[i].end >> PAGE_SHIFT;
pxmram += e - s;
- pxmram -= e820_hole_size(s, e);
+ pxmram -= absent_pages_in_range(s, e);
pxmram -= nodes_add[i].end - nodes_add[i].start;
if ((long)pxmram < 0)
pxmram = 0;
}
- e820ram = end_pfn - e820_hole_size(0, end_pfn);
+ e820ram = end_pfn - absent_pages_in_range(0, end_pfn);
/* We seem to lose 3 pages somewhere. Allow a bit of slack. */
if ((long)(e820ram - pxmram) >= 1*1024*1024) {
printk(KERN_ERR
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/include/asm-x86_64/e820.h linux-2.6.17-mm6-104-x86_64_use_init_nodes/include/asm-x86_64/e820.h
--- linux-2.6.17-mm6-103-x86_use_init_nodes/include/asm-x86_64/e820.h 2006-06-18 02:49:35.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/include/asm-x86_64/e820.h 2006-07-06 11:09:46.000000000 +0100
@@ -50,10 +50,9 @@ extern void e820_print_map(char *who);
extern int e820_any_mapped(unsigned long start, unsigned long end, unsigned type);
extern int e820_all_mapped(unsigned long start, unsigned long end, unsigned type);
-extern void e820_bootmem_free(pg_data_t *pgdat, unsigned long start,unsigned long end);
extern void e820_setup_gap(void);
-extern unsigned long e820_hole_size(unsigned long start_pfn,
- unsigned long end_pfn);
+extern void e820_register_active_regions(int nid,
+ unsigned long start_pfn, unsigned long end_pfn);
extern void __init parse_memopt(char *p, char **end);
extern void __init parse_memmapopt(char *p, char **end);
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-103-x86_use_init_nodes/include/asm-x86_64/proto.h linux-2.6.17-mm6-104-x86_64_use_init_nodes/include/asm-x86_64/proto.h
--- linux-2.6.17-mm6-103-x86_use_init_nodes/include/asm-x86_64/proto.h 2006-07-05 14:31:17.000000000 +0100
+++ linux-2.6.17-mm6-104-x86_64_use_init_nodes/include/asm-x86_64/proto.h 2006-07-06 11:09:46.000000000 +0100
@@ -24,8 +24,6 @@ extern void mtrr_bp_init(void);
#define mtrr_bp_init() do {} while (0)
#endif
extern void init_memory_mapping(unsigned long start, unsigned long end);
-extern void size_zones(unsigned long *z, unsigned long *h,
- unsigned long start_pfn, unsigned long end_pfn);
extern void system_call(void);
extern int kernel_syscall(void);
* [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-07-08 11:10 [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8 Mel Gorman
` (3 preceding siblings ...)
2006-07-08 11:12 ` [PATCH 4/6] Have x86_64 " Mel Gorman
@ 2006-07-08 11:12 ` Mel Gorman
2006-07-08 11:12 ` [PATCH 6/6] Account for memmap and optionally the kernel image as holes Mel Gorman
` (2 subsequent siblings)
7 siblings, 0 replies; 27+ messages in thread
From: Mel Gorman @ 2006-07-08 11:12 UTC (permalink / raw)
To: akpm
Cc: davej, tony.luck, Mel Gorman, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
Size zones and holes in an architecture independent manner for ia64.
arch/ia64/Kconfig | 3 ++
arch/ia64/mm/contig.c | 60 +++++-----------------------------------
arch/ia64/mm/discontig.c | 41 ++++-----------------------
arch/ia64/mm/init.c | 12 ++++++++
include/asm-ia64/meminit.h | 1
5 files changed, 30 insertions(+), 87 deletions(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Bob Picco <bob.picco@hp.com>
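On ia64 the active ranges come from the EFI memory map rather than e820;
register_active_ranges(), added to mm/init.c below, is an efi_memmap_walk()
callback. Roughly, for the contig case:

	unsigned long nid = 0;	/* contig.c registers everything on node 0 */

	/* add_active_range() is called for each EFI map entry */
	efi_memmap_walk(register_active_ranges, &nid);
	free_area_init_nodes(max_dma, max_dma, max_low_pfn, max_low_pfn);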
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/Kconfig linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/Kconfig
--- linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/Kconfig 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/Kconfig 2006-07-06 11:11:30.000000000 +0100
@@ -361,6 +361,9 @@ config NODES_SHIFT
MAX_NUMNODES will be 2^(This value).
If in doubt, use the default.
+config ARCH_POPULATES_NODE_MAP
+ def_bool y
+
# VIRTUAL_MEM_MAP and FLAT_NODE_MEM_MAP are functionally equivalent.
# VIRTUAL_MEM_MAP has been retained for historical reasons.
config VIRTUAL_MEM_MAP
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/mm/contig.c linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/mm/contig.c
--- linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/mm/contig.c 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/mm/contig.c 2006-07-06 11:11:30.000000000 +0100
@@ -25,10 +25,6 @@
#include <asm/sections.h>
#include <asm/mca.h>
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-static unsigned long num_dma_physpages;
-#endif
-
/**
* show_mem - display a memory statistics summary
*
@@ -209,18 +205,6 @@ count_pages (u64 start, u64 end, void *a
return 0;
}
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-static int
-count_dma_pages (u64 start, u64 end, void *arg)
-{
- unsigned long *count = arg;
-
- if (start < MAX_DMA_ADDRESS)
- *count += (min(end, MAX_DMA_ADDRESS) - start) >> PAGE_SHIFT;
- return 0;
-}
-#endif
-
/*
* Set up the page tables.
*/
@@ -229,47 +213,24 @@ void __init
paging_init (void)
{
unsigned long max_dma;
- unsigned long zones_size[MAX_NR_ZONES];
#ifdef CONFIG_VIRTUAL_MEM_MAP
- unsigned long zholes_size[MAX_NR_ZONES];
+ unsigned long nid = 0;
unsigned long max_gap;
#endif
- /* initialize mem_map[] */
-
- memset(zones_size, 0, sizeof(zones_size));
-
num_physpages = 0;
efi_memmap_walk(count_pages, &num_physpages);
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
#ifdef CONFIG_VIRTUAL_MEM_MAP
- memset(zholes_size, 0, sizeof(zholes_size));
-
- num_dma_physpages = 0;
- efi_memmap_walk(count_dma_pages, &num_dma_physpages);
-
- if (max_low_pfn < max_dma) {
- zones_size[ZONE_DMA] = max_low_pfn;
- zholes_size[ZONE_DMA] = max_low_pfn - num_dma_physpages;
- } else {
- zones_size[ZONE_DMA] = max_dma;
- zholes_size[ZONE_DMA] = max_dma - num_dma_physpages;
- if (num_physpages > num_dma_physpages) {
- zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
- zholes_size[ZONE_NORMAL] =
- ((max_low_pfn - max_dma) -
- (num_physpages - num_dma_physpages));
- }
- }
-
max_gap = 0;
+ efi_memmap_walk(register_active_ranges, &nid);
efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
if (max_gap < LARGE_GAP) {
vmem_map = (struct page *) 0;
- free_area_init_node(0, NODE_DATA(0), zones_size, 0,
- zholes_size);
+ free_area_init_nodes(max_dma, max_dma,
+ max_low_pfn, max_low_pfn);
} else {
unsigned long map_size;
@@ -281,19 +242,14 @@ paging_init (void)
efi_memmap_walk(create_mem_map_page_table, NULL);
NODE_DATA(0)->node_mem_map = vmem_map;
- free_area_init_node(0, NODE_DATA(0), zones_size,
- 0, zholes_size);
+ free_area_init_nodes(max_dma, max_dma,
+ max_low_pfn, max_low_pfn);
printk("Virtual mem_map starts at 0x%p\n", mem_map);
}
#else /* !CONFIG_VIRTUAL_MEM_MAP */
- if (max_low_pfn < max_dma)
- zones_size[ZONE_DMA] = max_low_pfn;
- else {
- zones_size[ZONE_DMA] = max_dma;
- zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
- }
- free_area_init(zones_size);
+ add_active_range(0, 0, max_low_pfn);
+ free_area_init_nodes(max_dma, max_dma, max_low_pfn, max_low_pfn);
#endif /* !CONFIG_VIRTUAL_MEM_MAP */
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/mm/discontig.c linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/mm/discontig.c
--- linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/mm/discontig.c 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/mm/discontig.c 2006-07-06 11:11:30.000000000 +0100
@@ -703,6 +703,7 @@ static __init int count_node_pages(unsig
{
unsigned long end = start + len;
+ add_active_range(node, start >> PAGE_SHIFT, end >> PAGE_SHIFT);
mem_data[node].num_physpages += len >> PAGE_SHIFT;
if (start <= __pa(MAX_DMA_ADDRESS))
mem_data[node].num_dma_physpages +=
@@ -727,9 +728,8 @@ static __init int count_node_pages(unsig
void __init paging_init(void)
{
unsigned long max_dma;
- unsigned long zones_size[MAX_NR_ZONES];
- unsigned long zholes_size[MAX_NR_ZONES];
unsigned long pfn_offset = 0;
+ unsigned long max_pfn = 0;
int node;
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
@@ -746,47 +746,18 @@ void __init paging_init(void)
#endif
for_each_online_node(node) {
- memset(zones_size, 0, sizeof(zones_size));
- memset(zholes_size, 0, sizeof(zholes_size));
-
num_physpages += mem_data[node].num_physpages;
-
- if (mem_data[node].min_pfn >= max_dma) {
- /* All of this node's memory is above ZONE_DMA */
- zones_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- mem_data[node].min_pfn;
- zholes_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- mem_data[node].min_pfn -
- mem_data[node].num_physpages;
- } else if (mem_data[node].max_pfn < max_dma) {
- /* All of this node's memory is in ZONE_DMA */
- zones_size[ZONE_DMA] = mem_data[node].max_pfn -
- mem_data[node].min_pfn;
- zholes_size[ZONE_DMA] = mem_data[node].max_pfn -
- mem_data[node].min_pfn -
- mem_data[node].num_dma_physpages;
- } else {
- /* This node has memory in both zones */
- zones_size[ZONE_DMA] = max_dma -
- mem_data[node].min_pfn;
- zholes_size[ZONE_DMA] = zones_size[ZONE_DMA] -
- mem_data[node].num_dma_physpages;
- zones_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- max_dma;
- zholes_size[ZONE_NORMAL] = zones_size[ZONE_NORMAL] -
- (mem_data[node].num_physpages -
- mem_data[node].num_dma_physpages);
- }
-
pfn_offset = mem_data[node].min_pfn;
#ifdef CONFIG_VIRTUAL_MEM_MAP
NODE_DATA(node)->node_mem_map = vmem_map + pfn_offset;
#endif
- free_area_init_node(node, NODE_DATA(node), zones_size,
- pfn_offset, zholes_size);
+ if (mem_data[node].max_pfn > max_pfn)
+ max_pfn = mem_data[node].max_pfn;
}
+ free_area_init_nodes(max_dma, max_dma, max_pfn, max_pfn);
+
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/mm/init.c linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/mm/init.c
--- linux-2.6.17-mm6-104-x86_64_use_init_nodes/arch/ia64/mm/init.c 2006-07-05 14:31:11.000000000 +0100
+++ linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/ia64/mm/init.c 2006-07-06 11:11:30.000000000 +0100
@@ -538,6 +538,18 @@ find_largest_hole (u64 start, u64 end, v
last_end = end;
return 0;
}
+
+int __init
+register_active_ranges(u64 start, u64 end, void *nid)
+{
+ BUG_ON(nid == NULL);
+ BUG_ON(*(unsigned long *)nid >= MAX_NUMNODES);
+
+ add_active_range(*(unsigned long *)nid,
+ __pa(start) >> PAGE_SHIFT,
+ __pa(end) >> PAGE_SHIFT);
+ return 0;
+}
#endif /* CONFIG_VIRTUAL_MEM_MAP */
static int __init
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-104-x86_64_use_init_nodes/include/asm-ia64/meminit.h linux-2.6.17-mm6-105-ia64_use_init_nodes/include/asm-ia64/meminit.h
--- linux-2.6.17-mm6-104-x86_64_use_init_nodes/include/asm-ia64/meminit.h 2006-07-05 14:31:17.000000000 +0100
+++ linux-2.6.17-mm6-105-ia64_use_init_nodes/include/asm-ia64/meminit.h 2006-07-06 11:11:30.000000000 +0100
@@ -56,6 +56,7 @@ extern void efi_memmap_init(unsigned lon
extern unsigned long vmalloc_end;
extern struct page *vmem_map;
extern int find_largest_hole (u64 start, u64 end, void *arg);
+ extern int register_active_ranges (u64 start, u64 end, void *arg);
extern int create_mem_map_page_table (u64 start, u64 end, void *arg);
#endif
* [PATCH 6/6] Account for memmap and optionally the kernel image as holes
2006-07-08 11:10 [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8 Mel Gorman
` (4 preceding siblings ...)
2006-07-08 11:12 ` [PATCH 5/6] Have ia64 " Mel Gorman
@ 2006-07-08 11:12 ` Mel Gorman
2006-07-08 11:42 ` [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8 Heiko Carstens
2006-07-10 11:30 ` [PATCH 6/6] Account for memmap and optionally the kernel image as holes David Howells
7 siblings, 0 replies; 27+ messages in thread
From: Mel Gorman @ 2006-07-08 11:12 UTC (permalink / raw)
To: akpm
Cc: davej, tony.luck, Mel Gorman, ak, bob.picco, linux-kernel,
linuxppc-dev, linux-mm
The x86_64 code accounted for memmap and some portions of the DMA zone
as holes. This was because those areas would never be reclaimed and accounting
for them as memory affects min watermarks. This patch will account for the
memmap as a memory hole. Architectures may optionally use set_dma_reserve() if
they wish to account for a portion of memory in ZONE_DMA as a hole.
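As a sketch of the intended usage, mirroring the x86_64 hunk below: the
architecture keeps a running count of unfreeable pages below MAX_DMA_PFN and
reports it to the core, which subtracts it from ZONE_DMA's present pages:

	if (phys + len <= MAX_DMA_PFN*PAGE_SIZE) {
		dma_reserve += len / PAGE_SIZE;
		set_dma_reserve(dma_reserve);
	}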
arch/x86_64/mm/init.c | 4 +
include/linux/mm.h | 1
mm/page_alloc.c | 96 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 100 insertions(+), 1 deletion(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/x86_64/mm/init.c linux-2.6.17-mm6-106-account_kernel_mmap/arch/x86_64/mm/init.c
--- linux-2.6.17-mm6-105-ia64_use_init_nodes/arch/x86_64/mm/init.c 2006-07-06 11:09:46.000000000 +0100
+++ linux-2.6.17-mm6-106-account_kernel_mmap/arch/x86_64/mm/init.c 2006-07-06 11:13:22.000000000 +0100
@@ -658,8 +658,10 @@ void __init reserve_bootmem_generic(unsi
#else
reserve_bootmem(phys, len);
#endif
- if (phys+len <= MAX_DMA_PFN*PAGE_SIZE)
+ if (phys+len <= MAX_DMA_PFN*PAGE_SIZE) {
dma_reserve += len / PAGE_SIZE;
+ set_dma_reserve(dma_reserve);
+ }
}
int kern_addr_valid(unsigned long addr)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-105-ia64_use_init_nodes/include/linux/mm.h linux-2.6.17-mm6-106-account_kernel_mmap/include/linux/mm.h
--- linux-2.6.17-mm6-105-ia64_use_init_nodes/include/linux/mm.h 2006-07-06 11:04:22.000000000 +0100
+++ linux-2.6.17-mm6-106-account_kernel_mmap/include/linux/mm.h 2006-07-06 11:13:22.000000000 +0100
@@ -1005,6 +1005,7 @@ extern void free_bootmem_with_active_reg
unsigned long max_low_pfn);
extern void sparse_memory_present_with_active_regions(int nid);
#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
+extern void set_dma_reserve(unsigned long new_dma_reserve);
extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long);
extern void setup_per_zone_pages_min(void);
extern void mem_init(void);
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-105-ia64_use_init_nodes/mm/page_alloc.c linux-2.6.17-mm6-106-account_kernel_mmap/mm/page_alloc.c
--- linux-2.6.17-mm6-105-ia64_use_init_nodes/mm/page_alloc.c 2006-07-06 21:44:44.000000000 +0100
+++ linux-2.6.17-mm6-106-account_kernel_mmap/mm/page_alloc.c 2006-07-06 11:13:22.000000000 +0100
@@ -87,6 +87,7 @@ int min_free_kbytes = 1024;
unsigned long __meminitdata nr_kernel_pages;
unsigned long __meminitdata nr_all_pages;
+unsigned long __initdata dma_reserve;
#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
/*
@@ -2300,6 +2301,20 @@ unsigned long __init zone_absent_pages_i
arch_zone_lowest_possible_pfn[zone_type],
arch_zone_highest_possible_pfn[zone_type]);
}
+
+/* Return the zone index a PFN is in */
+int memmap_zone_idx(struct page *lmem_map)
+{
+ int i;
+ unsigned long phys_addr = virt_to_phys(lmem_map);
+ unsigned long pfn = phys_addr >> PAGE_SHIFT;
+
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ if (pfn < arch_zone_highest_possible_pfn[i])
+ break;
+
+ return i;
+}
#else
static inline unsigned long zone_spanned_pages_in_node(int nid,
unsigned long zone_type,
@@ -2317,6 +2332,11 @@ static inline unsigned long zone_absent_
return zholes_size[zone_type];
}
+
+static inline int memmap_zone_idx(struct page *lmem_map)
+{
+ return MAX_NR_ZONES;
+}
#endif
static void __init calculate_node_totalpages(struct pglist_data *pgdat,
@@ -2340,6 +2360,58 @@ static void __init calculate_node_totalp
realtotalpages);
}
+#ifdef CONFIG_FLAT_NODE_MEM_MAP
+/* Account for mem_map for CONFIG_FLAT_NODE_MEM_MAP */
+unsigned long __meminit account_memmap(struct pglist_data *pgdat,
+ int zone_index)
+{
+ unsigned long pages = 0;
+ if (zone_index == memmap_zone_idx(pgdat->node_mem_map)) {
+ pages = pgdat->node_spanned_pages;
+ pages = (pages * sizeof(struct page)) >> PAGE_SHIFT;
+ printk(KERN_DEBUG "%lu pages used for memmap\n", pages);
+ }
+ return pages;
+}
+#else
+/* Account for mem_map for CONFIG_SPARSEMEM */
+unsigned long account_memmap(struct pglist_data *pgdat, int zone_index)
+{
+ unsigned long pages = 0;
+ unsigned long memmap_pfn;
+ struct page *memmap_addr;
+ int pnum;
+ unsigned long pgdat_startpfn, pgdat_endpfn;
+ struct mem_section *section;
+
+ pgdat_startpfn = pgdat->node_start_pfn;
+ pgdat_endpfn = pgdat_startpfn + pgdat->node_spanned_pages;
+
+ /* Go through valid sections looking for memmap */
+ for (pnum = 0; pnum < NR_MEM_SECTIONS; pnum++) {
+ if (!valid_section_nr(pnum))
+ continue;
+
+ section = __nr_to_section(pnum);
+ if (!section_has_mem_map(section))
+ continue;
+
+ memmap_addr = __section_mem_map_addr(section);
+ memmap_pfn = (unsigned long)memmap_addr >> PAGE_SHIFT;
+
+ if (memmap_pfn < pgdat_startpfn || memmap_pfn >= pgdat_endpfn)
+ continue;
+
+ if (zone_index == memmap_zone_idx(memmap_addr))
+ pages += (PAGES_PER_SECTION * sizeof(struct page));
+ }
+
+ pages >>= PAGE_SHIFT;
+ printk(KERN_DEBUG "%lu pages used for SPARSE memmap\n", pages);
+ return pages;
+}
+#endif
+
/*
* Set up the zone data structures:
* - mark all pages reserved
@@ -2366,6 +2438,15 @@ static void __meminit free_area_init_cor
size = zone_spanned_pages_in_node(nid, j, zones_size);
realsize = size - zone_absent_pages_in_node(nid, j,
zholes_size);
+
+ realsize -= account_memmap(pgdat, j);
+ /* Account for reserved DMA pages */
+ if (j == ZONE_DMA && realsize > dma_reserve) {
+ realsize -= dma_reserve;
+ printk(KERN_DEBUG "%lu pages DMA reserved\n",
+ dma_reserve);
+ }
+
if (j < ZONE_HIGHMEM)
nr_kernel_pages += realsize;
nr_all_pages += realsize;
@@ -2685,6 +2761,21 @@ void __init free_area_init_nodes(unsigne
}
#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
+/**
+ * set_dma_reserve - Account the specified number of pages reserved in ZONE_DMA
+ * @new_dma_reserve: The number of pages to mark reserved
+ *
+ * The per-cpu batchsize and zone watermarks are determined by present_pages.
+ * In the DMA zone, a significant percentage may be consumed by kernel image
+ * and other unfreeable allocations which can skew the watermarks badly. This
+ * function may optionally be used to account for unfreeable pages in
+ * ZONE_DMA. The effect will be lower watermarks and smaller per-cpu batchsize
+ */
+void __init set_dma_reserve(unsigned long new_dma_reserve)
+{
+ dma_reserve = new_dma_reserve;
+}
+
#ifndef CONFIG_NEED_MULTIPLE_NODES
static bootmem_data_t contig_bootmem_data;
struct pglist_data contig_page_data = { .bdata = &contig_bootmem_data };
* Re: [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8
2006-07-08 11:10 [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8 Mel Gorman
` (5 preceding siblings ...)
2006-07-08 11:12 ` [PATCH 6/6] Account for memmap and optionally the kernel image as holes Mel Gorman
@ 2006-07-08 11:42 ` Heiko Carstens
2006-07-09 15:32 ` Mel Gorman
2006-07-10 11:30 ` [PATCH 6/6] Account for memmap and optionally the kernel image as holes David Howells
7 siblings, 1 reply; 27+ messages in thread
From: Heiko Carstens @ 2006-07-08 11:42 UTC (permalink / raw)
To: Mel Gorman
Cc: akpm, davej, tony.luck, ak, bob.picco, linux-kernel,
linuxppc-dev, linux-mm
On Sat, Jul 08, 2006 at 12:10:42PM +0100, Mel Gorman wrote:
> There are differences in the zone sizes for x86_64 as the arch-specific code
> for x86_64 accounts the kernel image and the starting mem_maps as memory
> holes but the architecture-independent code accounts the memory as present.
Shouldn't this be the same for all architectures? Or to put it another way:
why does only x86_64 account the kernel image as a memory hole?
* Re: [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8
2006-07-08 11:42 ` [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8 Heiko Carstens
@ 2006-07-09 15:32 ` Mel Gorman
0 siblings, 0 replies; 27+ messages in thread
From: Mel Gorman @ 2006-07-09 15:32 UTC (permalink / raw)
To: Heiko Carstens
Cc: akpm, davej, tony.luck, ak, bob.picco, linux-kernel,
linuxppc-dev, linux-mm
On Sat, 8 Jul 2006, Heiko Carstens wrote:
> On Sat, Jul 08, 2006 at 12:10:42PM +0100, Mel Gorman wrote:
>> There are differences in the zone sizes for x86_64 as the arch-specific code
>> for x86_64 accounts the kernel image and the starting mem_maps as memory
>> holes but the architecture-independent code accounts the memory as present.
>
> Shouldn't this be the same for all architectures?
The comment in the mail is inaccurate because patch 6/6, if merged, will
account for mem_map as a hole on all architectures and, optionally via
set_dma_reserve(), the kernel image as well. The patch could be submitted
independently of arch-independent zone-sizing.
> Or to put it in other words:
> why does only x86_64 account the kernel image as memory hole?
>
From Andi Kleen's mails in the thread "[PATCH 0/5] Sizing zones and holes
in an architecture independent manner V7"
>>> Begin extract <<<
> Why is it a performance regression if the image and memmap is accounted
> for as holes? How are those regions different from any other kernel
> allocation or bootmem allocations for example which are not accounted as
> holes?
They are comparatively big and cannot be freed.
>If you are sure that it makes a measurable difference to performance,
There was at least one benchmark/use case where it made a significant
difference, can't remember the exact numbers though.
It affects the low/high water marks in the VM zone balancer.
Especially for the 16MB DMA zone it can make a difference if you
account 4MB kernel in there or not.
>>> End extract <<<
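For a rough sense of the scale involved: the watermarks are derived from a
zone's present_pages, so counting an unfreeable 4MB kernel image as present
memory in the 16MB DMA zone overstates what could ever be reclaimed by about
a third, and the min/low/high marks end up correspondingly too high.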
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 6/6] Account for memmap and optionally the kernel image as holes
2006-07-08 11:10 [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8 Mel Gorman
` (6 preceding siblings ...)
2006-07-08 11:42 ` [PATCH 0/6] Sizing zones and holes in an architecture independent manner V8 Heiko Carstens
@ 2006-07-10 11:30 ` David Howells
2006-07-10 15:31 ` Mel Gorman
7 siblings, 1 reply; 27+ messages in thread
From: David Howells @ 2006-07-10 11:30 UTC (permalink / raw)
To: Mel Gorman
Cc: akpm, davej, tony.luck, linux-mm, ak, bob.picco, linux-kernel,
linuxppc-dev
Mel Gorman <mel@csn.ul.ie> wrote:
> +unsigned long __initdata dma_reserve;
Should this be static? Or should it be predeclared in a header file
somewhere?
David
* Re: [PATCH 6/6] Account for memmap and optionally the kernel image as holes
2006-07-10 11:30 ` [PATCH 6/6] Account for memmap and optionally the kernel image as holes David Howells
@ 2006-07-10 15:31 ` Mel Gorman
0 siblings, 0 replies; 27+ messages in thread
From: Mel Gorman @ 2006-07-10 15:31 UTC (permalink / raw)
To: David Howells
Cc: akpm, davej, tony.luck, linux-mm, ak, bob.picco, linux-kernel,
linuxppc-dev
On (10/07/06 12:30), David Howells didst pronounce:
> Mel Gorman <mel@csn.ul.ie> wrote:
>
> > +unsigned long __initdata dma_reserve;
>
> Should this be static? Or should it be predeclared in a header file
> somewhere?
>
It should be static as it's set by set_dma_reserve(). Thanks.
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.17-mm6-106-account_kernel_mmap/mm/page_alloc.c linux-2.6.17-mm6-107-fixstatic/mm/page_alloc.c
--- linux-2.6.17-mm6-106-account_kernel_mmap/mm/page_alloc.c 2006-07-10 15:55:09.000000000 +0100
+++ linux-2.6.17-mm6-107-fixstatic/mm/page_alloc.c 2006-07-10 16:03:26.000000000 +0100
@@ -87,7 +87,7 @@ int min_free_kbytes = 1024;
unsigned long __meminitdata nr_kernel_pages;
unsigned long __meminitdata nr_all_pages;
-unsigned long __initdata dma_reserve;
+static unsigned long __initdata dma_reserve;
#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
/*
* [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-08-21 13:45 [PATCH 0/6] Sizing zones and holes in an architecture independent manner V9 Mel Gorman
@ 2006-08-21 13:46 ` Mel Gorman
0 siblings, 0 replies; 27+ messages in thread
From: Mel Gorman @ 2006-08-21 13:46 UTC (permalink / raw)
To: akpm
Cc: Mel Gorman, tony.luck, linuxppc-dev, linux-kernel, bob.picco, ak,
linux-mm
Size zones and holes in an architecture independent manner for ia64.
arch/ia64/Kconfig | 3 ++
arch/ia64/mm/contig.c | 60 ++++++----------------------------------
arch/ia64/mm/discontig.c | 44 ++++++-----------------------
arch/ia64/mm/init.c | 12 ++++++++
include/asm-ia64/meminit.h | 1
5 files changed, 34 insertions(+), 86 deletions(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Bob Picco <bob.picco@hp.com>
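Note that the free_area_init_nodes() interface differs from V8: as the hunks
below show, it now takes an array of the highest possible PFN for each zone
rather than separate PFN arguments:

	unsigned long max_zone_pfns[MAX_NR_ZONES];

	max_zone_pfns[ZONE_DMA]    = max_dma;
	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
	free_area_init_nodes(max_zone_pfns);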
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/Kconfig linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/Kconfig
--- linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/Kconfig 2006-08-21 09:23:50.000000000 +0100
+++ linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/Kconfig 2006-08-21 10:17:10.000000000 +0100
@@ -352,6 +352,9 @@ config NODES_SHIFT
MAX_NUMNODES will be 2^(This value).
If in doubt, use the default.
+config ARCH_POPULATES_NODE_MAP
+ def_bool y
+
# VIRTUAL_MEM_MAP and FLAT_NODE_MEM_MAP are functionally equivalent.
# VIRTUAL_MEM_MAP has been retained for historical reasons.
config VIRTUAL_MEM_MAP
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/mm/contig.c linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/mm/contig.c
--- linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/mm/contig.c 2006-08-06 19:20:11.000000000 +0100
+++ linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/mm/contig.c 2006-08-21 10:17:10.000000000 +0100
@@ -26,7 +26,6 @@
#include <asm/mca.h>
#ifdef CONFIG_VIRTUAL_MEM_MAP
-static unsigned long num_dma_physpages;
static unsigned long max_gap;
#endif
@@ -218,18 +217,6 @@ count_pages (u64 start, u64 end, void *a
return 0;
}
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-static int
-count_dma_pages (u64 start, u64 end, void *arg)
-{
- unsigned long *count = arg;
-
- if (start < MAX_DMA_ADDRESS)
- *count += (min(end, MAX_DMA_ADDRESS) - start) >> PAGE_SHIFT;
- return 0;
-}
-#endif
-
/*
* Set up the page tables.
*/
@@ -238,45 +225,22 @@ void __init
paging_init (void)
{
unsigned long max_dma;
- unsigned long zones_size[MAX_NR_ZONES];
-#ifdef CONFIG_VIRTUAL_MEM_MAP
- unsigned long zholes_size[MAX_NR_ZONES];
-#endif
-
- /* initialize mem_map[] */
-
- memset(zones_size, 0, sizeof(zones_size));
+ unsigned long nid = 0;
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
num_physpages = 0;
efi_memmap_walk(count_pages, &num_physpages);
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_DMA] = max_dma;
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_VIRTUAL_MEM_MAP
- memset(zholes_size, 0, sizeof(zholes_size));
-
- num_dma_physpages = 0;
- efi_memmap_walk(count_dma_pages, &num_dma_physpages);
-
- if (max_low_pfn < max_dma) {
- zones_size[ZONE_DMA] = max_low_pfn;
- zholes_size[ZONE_DMA] = max_low_pfn - num_dma_physpages;
- } else {
- zones_size[ZONE_DMA] = max_dma;
- zholes_size[ZONE_DMA] = max_dma - num_dma_physpages;
- if (num_physpages > num_dma_physpages) {
- zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
- zholes_size[ZONE_NORMAL] =
- ((max_low_pfn - max_dma) -
- (num_physpages - num_dma_physpages));
- }
- }
-
+ efi_memmap_walk(register_active_ranges, &nid);
efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
if (max_gap < LARGE_GAP) {
vmem_map = (struct page *) 0;
- free_area_init_node(0, NODE_DATA(0), zones_size, 0,
- zholes_size);
+ free_area_init_nodes(max_zone_pfns);
} else {
unsigned long map_size;
@@ -289,19 +253,13 @@ paging_init (void)
efi_memmap_walk(create_mem_map_page_table, NULL);
NODE_DATA(0)->node_mem_map = vmem_map;
- free_area_init_node(0, NODE_DATA(0), zones_size,
- 0, zholes_size);
+ free_area_init_nodes(max_zone_pfns);
printk("Virtual mem_map starts at 0x%p\n", mem_map);
}
#else /* !CONFIG_VIRTUAL_MEM_MAP */
- if (max_low_pfn < max_dma)
- zones_size[ZONE_DMA] = max_low_pfn;
- else {
- zones_size[ZONE_DMA] = max_dma;
- zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
- }
- free_area_init(zones_size);
+ add_active_range(0, 0, max_low_pfn);
+ free_area_init_nodes(max_zone_pfns);
#endif /* !CONFIG_VIRTUAL_MEM_MAP */
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/mm/discontig.c linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/mm/discontig.c
--- linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/mm/discontig.c 2006-08-06 19:20:11.000000000 +0100
+++ linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/mm/discontig.c 2006-08-21 10:17:10.000000000 +0100
@@ -654,6 +654,7 @@ static __init int count_node_pages(unsig
{
unsigned long end = start + len;
+ add_active_range(node, start >> PAGE_SHIFT, end >> PAGE_SHIFT);
mem_data[node].num_physpages += len >> PAGE_SHIFT;
if (start <= __pa(MAX_DMA_ADDRESS))
mem_data[node].num_dma_physpages +=
@@ -678,10 +679,10 @@ static __init int count_node_pages(unsig
void __init paging_init(void)
{
unsigned long max_dma;
- unsigned long zones_size[MAX_NR_ZONES];
- unsigned long zholes_size[MAX_NR_ZONES];
unsigned long pfn_offset = 0;
+ unsigned long max_pfn = 0;
int node;
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
@@ -698,47 +699,20 @@ void __init paging_init(void)
#endif
for_each_online_node(node) {
- memset(zones_size, 0, sizeof(zones_size));
- memset(zholes_size, 0, sizeof(zholes_size));
-
num_physpages += mem_data[node].num_physpages;
-
- if (mem_data[node].min_pfn >= max_dma) {
- /* All of this node's memory is above ZONE_DMA */
- zones_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- mem_data[node].min_pfn;
- zholes_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- mem_data[node].min_pfn -
- mem_data[node].num_physpages;
- } else if (mem_data[node].max_pfn < max_dma) {
- /* All of this node's memory is in ZONE_DMA */
- zones_size[ZONE_DMA] = mem_data[node].max_pfn -
- mem_data[node].min_pfn;
- zholes_size[ZONE_DMA] = mem_data[node].max_pfn -
- mem_data[node].min_pfn -
- mem_data[node].num_dma_physpages;
- } else {
- /* This node has memory in both zones */
- zones_size[ZONE_DMA] = max_dma -
- mem_data[node].min_pfn;
- zholes_size[ZONE_DMA] = zones_size[ZONE_DMA] -
- mem_data[node].num_dma_physpages;
- zones_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- max_dma;
- zholes_size[ZONE_NORMAL] = zones_size[ZONE_NORMAL] -
- (mem_data[node].num_physpages -
- mem_data[node].num_dma_physpages);
- }
-
pfn_offset = mem_data[node].min_pfn;
#ifdef CONFIG_VIRTUAL_MEM_MAP
NODE_DATA(node)->node_mem_map = vmem_map + pfn_offset;
#endif
- free_area_init_node(node, NODE_DATA(node), zones_size,
- pfn_offset, zholes_size);
+ if (mem_data[node].max_pfn > max_pfn)
+ max_pfn = mem_data[node].max_pfn;
}
+ max_zone_pfns[ZONE_DMA] = max_dma;
+ max_zone_pfns[ZONE_NORMAL] = max_pfn;
+ free_area_init_nodes(max_zone_pfns);
+
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/mm/init.c linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/mm/init.c
--- linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/arch/ia64/mm/init.c 2006-08-06 19:20:11.000000000 +0100
+++ linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/arch/ia64/mm/init.c 2006-08-21 10:17:10.000000000 +0100
@@ -593,6 +593,18 @@ find_largest_hole (u64 start, u64 end, v
last_end = end;
return 0;
}
+
+int __init
+register_active_ranges(u64 start, u64 end, void *nid)
+{
+ BUG_ON(nid == NULL);
+ BUG_ON(*(unsigned long *)nid >= MAX_NUMNODES);
+
+ add_active_range(*(unsigned long *)nid,
+ __pa(start) >> PAGE_SHIFT,
+ __pa(end) >> PAGE_SHIFT);
+ return 0;
+}
#endif /* CONFIG_VIRTUAL_MEM_MAP */
static int __init
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/include/asm-ia64/meminit.h linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/include/asm-ia64/meminit.h
--- linux-2.6.18-rc4-mm2-104-x86_64_use_init_nodes/include/asm-ia64/meminit.h 2006-08-21 09:23:52.000000000 +0100
+++ linux-2.6.18-rc4-mm2-105-ia64_use_init_nodes/include/asm-ia64/meminit.h 2006-08-21 10:17:10.000000000 +0100
@@ -56,6 +56,7 @@ extern void efi_memmap_init(unsigned lon
extern unsigned long vmalloc_end;
extern struct page *vmem_map;
extern int find_largest_hole (u64 start, u64 end, void *arg);
+ extern int register_active_ranges (u64 start, u64 end, void *arg);
extern int create_mem_map_page_table (u64 start, u64 end, void *arg);
extern int vmemmap_find_next_valid_pfn(int, int);
#else
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-19 14:03 ` Mel Gorman
@ 2006-05-19 14:23 ` Andy Whitcroft
0 siblings, 0 replies; 27+ messages in thread
From: Andy Whitcroft @ 2006-05-19 14:23 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, davej, tony.luck, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
Mel Gorman wrote:
> On Sun, 14 May 2006, Andrew Morton wrote:
>
>> Mel Gorman <mel@csn.ul.ie> wrote:
>>
>>>
>>> Size zones and holes in an architecture independent manner for ia64.
>>>
>>
>> This one makes my ia64 die very early in boot. The trace is pretty
>> useless.
>>
>> config at http://www.zip.com.au/~akpm/linux/patches/stuff/config-ia64
>>
>
> An indirect fix for this has been sent out in a patchset with the
> subject "[PATCH 0/2] Fixes for node alignment and flatmem assumptions".
> For arch-independent-zone-sizing, the issue was that FLATMEM assumes
> that NODE_DATA(0)->node_start_pfn == 0. This is not the case with
> arch-independent-zone-sizing and IA64. With
> arch-independent-zone-sizing, a node's node_start_pfn will be at the
> first valid PFN.
>
>> <log snipped>
>>
>> Note the misaligned pfns.
>>
>
> You will still get the message about misaligned PFNs on IA64. This is
> because the lowest zone starts at the lowest available PFN which may not
> be 0 or any other aligned number. It shouldn't make a different - or at
> least I couldn't cause any problems.
With the updates I sent out to the zone alignment checks yesterday this
should now be ignored correctly without comment. The first zone is
allowed to be misaligned because we expect alignment of the mem_map.
With Bob Picco's patch from your set we ensure it is so.
-apw
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 3:31 ` Andrew Morton
2006-05-15 8:21 ` Andy Whitcroft
2006-05-15 12:27 ` Mel Gorman
@ 2006-05-19 14:03 ` Mel Gorman
2006-05-19 14:23 ` Andy Whitcroft
2 siblings, 1 reply; 27+ messages in thread
From: Mel Gorman @ 2006-05-19 14:03 UTC (permalink / raw)
To: Andrew Morton
Cc: Andy Whitcroft, davej, tony.luck, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
On Sun, 14 May 2006, Andrew Morton wrote:
> Mel Gorman <mel@csn.ul.ie> wrote:
>>
>> Size zones and holes in an architecture independent manner for ia64.
>>
>
> This one makes my ia64 die very early in boot. The trace is pretty useless.
>
> config at http://www.zip.com.au/~akpm/linux/patches/stuff/config-ia64
>
An indirect fix for this has been sent out in a patchset with the subject
"[PATCH 0/2] Fixes for node alignment and flatmem assumptions". For
arch-independent-zone-sizing, the issue was that FLATMEM assumes that
NODE_DATA(0)->node_start_pfn == 0. This is not the case with
arch-independent-zone-sizing and IA64. With arch-independent-zone-sizing,
a node's node_start_pfn will be at the first valid PFN.
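(The assumption bites because FLATMEM's pfn_to_page() is essentially
mem_map + pfn, so once the first valid PFN is above 0 every lookup is offset
unless mem_map itself is adjusted to compensate.)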
> <log snipped>
>
> Note the misaligned pfns.
>
You will still get the message about misaligned PFNs on IA64. This is
because the lowest zone starts at the lowest available PFN which may not
be 0 or any other aligned number. It shouldn't make a difference - or at
least I couldn't cause any problems.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-16 1:34 ` KAMEZAWA Hiroyuki
@ 2006-05-16 2:11 ` Nick Piggin
0 siblings, 0 replies; 27+ messages in thread
From: Nick Piggin @ 2006-05-16 2:11 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: apw, akpm, mel, davej, tony.luck, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
KAMEZAWA Hiroyuki wrote:
>
>Andy's page_zone(page) == page_zone(buddy) check is good, I think.
>
>Making alignment is a difficult problem, I think. It complicates many things.
>We can avoid the above check only when the memory layout is ideal.
>
>BTW, how about the following patch?
>I don't want to say "Oh, you have to re-compile your kernel with
>CONFIG_UNALIGNED_ZONE on your new machine. You are unlucky." to users.
>
No, this is a function of the architecture code, not the specific
machine it is running on.
So if the architecture ensures alignment and no holes, then they don't
need the overhead of CONFIG_UNALIGNED_ZONE or CONFIG_HOLES_IN_ZONE.
If they do not ensure correct alignment, then they must enable
CONFIG_UNALIGNED_ZONE, even if there may be actual systems which do
result in aligned zones.
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-16 0:31 ` Nick Piggin
@ 2006-05-16 1:34 ` KAMEZAWA Hiroyuki
2006-05-16 2:11 ` Nick Piggin
0 siblings, 1 reply; 27+ messages in thread
From: KAMEZAWA Hiroyuki @ 2006-05-16 1:34 UTC (permalink / raw)
To: Nick Piggin
Cc: apw, akpm, mel, davej, tony.luck, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
On Tue, 16 May 2006 10:31:56 +1000
Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> >But clean-up/speed-up patches removed these checks. (I don't know when this happened.)
> >Sorry for misses these things.
> >
>
> I think if anything they should be moved into page_is_buddy, however
> page_to_pfn is expensive on some architectures, so it is something we
> want to be able to opt out of if we do the correct alignment.
>
Yes. It's expensive.
I received the following e-mail from Dave Hansen in the past.
==
There are some ppc64 machines which have memory laid out like this:
0-100 MB Node0
100-200 MB Node1
200-300 MB Node0
==
I didn't imagine the above (strange) configuration.
So, a simple range check will not work for all configurations anyway.
(The above example is aligned and has no problem. Moreover, ppc uses SPARSEMEM.)
Andy's page_zone(page) == page_zone(buddy) check is good, I think.
Making alignment is a difficult problem, I think. It complicates many things.
We can avoid the above check only when the memory layout is ideal.
BTW, how about the following patch?
I don't want to say "Oh, you have to re-compile your kernel with
CONFIG_UNALIGNED_ZONE on your new machine. You are unlucky." to users.
==
Index: linux-2.6.17-rc4-mm1/mm/page_alloc.c
===================================================================
--- linux-2.6.17-rc4-mm1.orig/mm/page_alloc.c
+++ linux-2.6.17-rc4-mm1/mm/page_alloc.c
@@ -58,6 +58,7 @@ unsigned long totalhigh_pages __read_mos
unsigned long totalreserve_pages __read_mostly;
long nr_swap_pages;
int percpu_pagelist_fraction;
+int unaligned_zone;
static void __free_pages_ok(struct page *page, unsigned int order);
@@ -296,12 +297,12 @@ static inline void rmv_page_order(struct
*
* Assumption: *_mem_map is contigious at least up to MAX_ORDER
*/
-static inline struct page *
+static inline long
__page_find_buddy(struct page *page, unsigned long page_idx, unsigned int order)
{
unsigned long buddy_idx = page_idx ^ (1 << order);
- return page + (buddy_idx - page_idx);
+ return (buddy_idx - page_idx);
}
static inline unsigned long
@@ -310,6 +311,23 @@ __find_combined_index(unsigned long page
return (page_idx & ~(1 << order));
}
+static inline struct page *
+buddy_in_range(struct zone *zone, struct page *page, long buddy_idx)
+{
+ unsigned long pfn;
+ struct page *buddy;
+ if (!unaligned_zone)
+ return page + buddy_idx;
+ pfn = page_to_pfn(page);
+ if (!pfn_valid(pfn + buddy_idx))
+ return NULL;
+ buddy = page + buddy_idx;
+ if (page_zone_id(page) != page_zone_id(buddy))
+ return NULL;
+ return buddy;
+}
+
+
/*
* This function checks whether a page is free && is the buddy
* we can do coalesce a page and its buddy if
@@ -326,15 +344,10 @@ __find_combined_index(unsigned long page
static inline int page_is_buddy(struct page *page, struct page *buddy,
int order)
{
-#ifdef CONFIG_HOLES_IN_ZONE
+#if defined(CONFIG_HOLES_IN_ZONE)
if (!pfn_valid(page_to_pfn(buddy)))
return 0;
#endif
-#ifdef CONFIG_UNALIGNED_ZONE_BOUNDRIES
- if (page_zone_id(page) != page_zone_id(buddy))
- return 0;
-#endif
-
if (PageBuddy(buddy) && page_order(buddy) == order) {
BUG_ON(page_count(buddy) != 0);
return 1;
@@ -383,10 +396,14 @@ static inline void __free_one_page(struc
zone->free_pages += order_size;
while (order < MAX_ORDER-1) {
unsigned long combined_idx;
+ long buddy_idx;
struct free_area *area;
struct page *buddy;
- buddy = __page_find_buddy(page, page_idx, order);
+ buddy_idx = __page_find_buddy(page, page_idx, order);
+ buddy = buddy_in_range(zone, page, buddy_idx);
+ if (unlikely(!buddy))
+ break;
if (!page_is_buddy(page, buddy, order))
break; /* Move the buddy up one level. */
@@ -2434,10 +2451,9 @@ static void __meminit free_area_init_cor
struct zone *zone = pgdat->node_zones + j;
unsigned long size, realsize;
- if (zone_boundary_align_pfn(zone_start_pfn) != zone_start_pfn)
- printk(KERN_CRIT "node %d zone %s missaligned "
- "start pfn, enable UNALIGNED_ZONE_BOUNDRIES\n",
- nid, zone_names[j]);
+ if (zone_boundary_align_pfn(zone_start_pfn) != zone_start_pfn) {
+ unaligned_zone = 1;
+ }
size = zone_present_pages_in_node(nid, j, zones_size);
realsize = size - zone_absent_pages_in_node(nid, j,
Index: linux-2.6.17-rc4-mm1/include/linux/mmzone.h
===================================================================
--- linux-2.6.17-rc4-mm1.orig/include/linux/mmzone.h
+++ linux-2.6.17-rc4-mm1/include/linux/mmzone.h
@@ -399,11 +399,7 @@ static inline int is_dma(struct zone *zo
static inline unsigned long zone_boundary_align_pfn(unsigned long pfn)
{
-#ifdef CONFIG_UNALIGNED_ZONE_BOUNDRIES
- return pfn;
-#else
return pfn & ~((1 << MAX_ORDER) - 1);
-#endif
}
/* These two functions are used to setup the per zone pages min values */
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 10:29 ` KAMEZAWA Hiroyuki
2006-05-15 10:47 ` KAMEZAWA Hiroyuki
2006-05-15 11:02 ` Andy Whitcroft
@ 2006-05-16 0:31 ` Nick Piggin
2006-05-16 1:34 ` KAMEZAWA Hiroyuki
2 siblings, 1 reply; 27+ messages in thread
From: Nick Piggin @ 2006-05-16 0:31 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Andy Whitcroft, akpm, mel, davej, tony.luck, linux-kernel,
bob.picco, ak, linux-mm, linuxppc-dev
KAMEZAWA Hiroyuki wrote:
>On Mon, 15 May 2006 11:19:27 +0100
>Andy Whitcroft <apw@shadowen.org> wrote:
>
>>>
>>>Recently arrived? Over a year ago with the no-buddy-bitmap patches,
>>>right? Just checking because that's what I'm assuming broke it...
>>>
>>Yep, sorry, I forgot I was out of the game for 6 months! And yes, that
>>was when the requirements were altered.
>>
>>
>When the no-bitmap-buddy patches were included,
>
>1. bad_range() was not covered by CONFIG_VM_DEBUG, so it always ran.
>==
>static int bad_range(struct zone *zone, struct page *page)
>{
> if (page_to_pfn(page) >= zone->zone_start_pfn + zone->spanned_pages)
> return 1;
> if (page_to_pfn(page) < zone->zone_start_pfn)
> return 1;
>==
>And, this code
>==
> buddy = __page_find_buddy(page, page_idx, order);
>
> if (bad_range(zone, buddy))
> break;
>==
>
>checked whether the buddy was in the zone and guaranteed it had a page struct.
>
Ah, my mistake indeed. Sorry.
>But clean-up/speed-up patches removed these checks. (I don't know when this happened.)
>Sorry for missing these things.
>
I think if anything they should be moved into page_is_buddy; however,
page_to_pfn is expensive on some architectures, so it is something we
want to be able to opt out of if we do the correct alignment.
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 12:27 ` Mel Gorman
@ 2006-05-15 22:44 ` Mel Gorman
0 siblings, 0 replies; 27+ messages in thread
From: Mel Gorman @ 2006-05-15 22:44 UTC (permalink / raw)
To: Andrew Morton
Cc: Andy Whitcroft, davej, tony.luck, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
> diff -rup -X /usr/src/patchset-0.5/bin//dontdiff linux-2.6.17-rc4-mm4-clean/mm/page_alloc.c linux-2.6.17-rc4-mm4-ia64_force_alignment/mm/page_alloc.c
> --- linux-2.6.17-rc4-mm4-clean/mm/page_alloc.c 2006-05-15 10:37:55.000000000 +0100
> +++ linux-2.6.17-rc4-mm4-ia64_force_alignment/mm/page_alloc.c 2006-05-15 13:10:42.000000000 +0100
> @@ -2640,14 +2640,20 @@ void __init free_area_init_nodes(unsigne
> {
> unsigned long nid;
> int zone_index;
> + unsigned long lowest_pfn = find_min_pfn_with_active_regions();
> +
> + lowest_pfn = zone_boundary_align_pfn(lowest_pfn);
> + arch_max_dma_pfn = zone_boundary_align_pfn(arch_max_dma_pfn);
> + arch_max_dma32_pfn = zone_boundary_align_pfn(arch_max_dma32_pfn);
> + arch_max_low_pfn = zone_boundary_align_pfn(arch_max_low_pfn);
> + arch_max_high_pfn = zone_boundary_align_pfn(arch_max_high_pfn);
>
> /* Record where the zone boundaries are */
> memset(arch_zone_lowest_possible_pfn, 0,
> sizeof(arch_zone_lowest_possible_pfn));
> memset(arch_zone_highest_possible_pfn, 0,
> sizeof(arch_zone_highest_possible_pfn));
> - arch_zone_lowest_possible_pfn[ZONE_DMA] =
> - find_min_pfn_with_active_regions();
> + arch_zone_lowest_possible_pfn[ZONE_DMA] = lowest_pfn;
> arch_zone_highest_possible_pfn[ZONE_DMA] = arch_max_dma_pfn;
> arch_zone_highest_possible_pfn[ZONE_DMA32] = arch_max_dma32_pfn;
> arch_zone_highest_possible_pfn[ZONE_NORMAL] = arch_max_low_pfn;
>
Ok, this patch is broken in a number of ways. It doesn't help the IA64
problem at all, and two other machine configurations failed with the patch
applied during regression testing. Please drop it and I'll figure out what
the correct solution is for your IA64 machine not booting.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 3:31 ` Andrew Morton
2006-05-15 8:21 ` Andy Whitcroft
@ 2006-05-15 12:27 ` Mel Gorman
2006-05-15 22:44 ` Mel Gorman
2006-05-19 14:03 ` Mel Gorman
2 siblings, 1 reply; 27+ messages in thread
From: Mel Gorman @ 2006-05-15 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Andy Whitcroft, davej, tony.luck, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
On (14/05/06 20:31), Andrew Morton didst pronounce:
> Mel Gorman <mel@csn.ul.ie> wrote:
> >
> > Size zones and holes in an architecture independent manner for ia64.
> >
>
> This one makes my ia64 die very early in boot. The trace is pretty useless.
>
> config at http://www.zip.com.au/~akpm/linux/patches/stuff/config-ia64
>
> <log snipped>
Curses. When I tried to reproduce this, the machine booted with my default
config but died before initialising the console with your config. The machine
is far away so I can't see the screen or restart the machine remotely so
I can only assume it is dying for the same reasons yours did.
> Note the misaligned pfns.
>
> Andy's (misspelled) CONFIG_UNALIGNED_ZONE_BOUNDRIES patch didn't actually
> include an update to any Kconfig files. But hacking that in by hand didn't
> help.
It would not have helped in this case because the zone boundaries would still
be in the wrong place for ia64. Below is a patch that aligns the zones on
all architectures that use CONFIG_ARCH_POPULATES_NODE_MAP. That is currently
i386, x86_64, powerpc, ppc and ia64. It does *not* align pgdat->node_start_pfn,
but I don't believe that is necessary.
I can't test it on ia64 until I get someone to restart the machine. The patch
compiles and is currently boot-testing on a range of other machines. I hope
to know within 5-6 hours if everything is ok.
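(For reference, zone_boundary_align_pfn() is just a mask down to a MAX_ORDER
boundary. A standalone sketch of the rounding, using ia64's MAX_ORDER of 17
and a made-up pfn:)

#include <stdio.h>

#define MAX_ORDER 17

/* round a pfn down to the nearest MAX_ORDER-sized boundary */
static unsigned long zone_boundary_align_pfn(unsigned long pfn)
{
        return pfn & ~((1UL << MAX_ORDER) - 1);
}

int main(void)
{
        unsigned long pfn = 0x23041;    /* hypothetical unaligned boundary */

        /* prints 0x23041 -> 0x20000 */
        printf("0x%lx -> 0x%lx\n", pfn, zone_boundary_align_pfn(pfn));
        return 0;
}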
diff -rup -X /usr/src/patchset-0.5/bin//dontdiff linux-2.6.17-rc4-mm4-clean/mm/page_alloc.c linux-2.6.17-rc4-mm4-ia64_force_alignment/mm/page_alloc.c
--- linux-2.6.17-rc4-mm4-clean/mm/page_alloc.c 2006-05-15 10:37:55.000000000 +0100
+++ linux-2.6.17-rc4-mm4-ia64_force_alignment/mm/page_alloc.c 2006-05-15 13:10:42.000000000 +0100
@@ -2640,14 +2640,20 @@ void __init free_area_init_nodes(unsigne
{
unsigned long nid;
int zone_index;
+ unsigned long lowest_pfn = find_min_pfn_with_active_regions();
+
+ lowest_pfn = zone_boundary_align_pfn(lowest_pfn);
+ arch_max_dma_pfn = zone_boundary_align_pfn(arch_max_dma_pfn);
+ arch_max_dma32_pfn = zone_boundary_align_pfn(arch_max_dma32_pfn);
+ arch_max_low_pfn = zone_boundary_align_pfn(arch_max_low_pfn);
+ arch_max_high_pfn = zone_boundary_align_pfn(arch_max_high_pfn);
/* Record where the zone boundaries are */
memset(arch_zone_lowest_possible_pfn, 0,
sizeof(arch_zone_lowest_possible_pfn));
memset(arch_zone_highest_possible_pfn, 0,
sizeof(arch_zone_highest_possible_pfn));
- arch_zone_lowest_possible_pfn[ZONE_DMA] =
- find_min_pfn_with_active_regions();
+ arch_zone_lowest_possible_pfn[ZONE_DMA] = lowest_pfn;
arch_zone_highest_possible_pfn[ZONE_DMA] = arch_max_dma_pfn;
arch_zone_highest_possible_pfn[ZONE_DMA32] = arch_max_dma32_pfn;
arch_zone_highest_possible_pfn[ZONE_NORMAL] = arch_max_low_pfn;
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 10:29 ` KAMEZAWA Hiroyuki
2006-05-15 10:47 ` KAMEZAWA Hiroyuki
@ 2006-05-15 11:02 ` Andy Whitcroft
2006-05-16 0:31 ` Nick Piggin
2 siblings, 0 replies; 27+ messages in thread
From: Andy Whitcroft @ 2006-05-15 11:02 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: nickpiggin, akpm, mel, davej, tony.luck, linux-kernel, bob.picco,
ak, linux-mm, linuxppc-dev
KAMEZAWA Hiroyuki wrote:
> On Mon, 15 May 2006 11:19:27 +0100
> Andy Whitcroft <apw@shadowen.org> wrote:
>
>
>>Nick Piggin wrote:
>>
>>>Andy Whitcroft wrote:
>>>
>>>
>>>>Interesting. You are correct, there was no config component; at the time
>>>>I didn't have direct evidence that any architecture needed it, only that
>>>>we had an unchecked requirement on zones, a requirement that had only
>>>>recently arrived with the changes to free buddy detection. I note that
>>>
>>>
>>>Recently arrived? Over a year ago with the no-buddy-bitmap patches,
>>>right? Just checking because that's what I'm assuming broke it...
>>
>>Yep, sorry, I forgot I was out of the game for 6 months! And yes, that
>>was when the requirements were altered.
>>
>
> When the no-bitmap-buddy patches were included,
>
> 1. bad_range() was not covered by CONFIG_VM_DEBUG, so it always ran.
> ==
> static int bad_range(struct zone *zone, struct page *page)
> {
> if (page_to_pfn(page) >= zone->zone_start_pfn + zone->spanned_pages)
> return 1;
> if (page_to_pfn(page) < zone->zone_start_pfn)
> return 1;
> ==
> And, this code
> ==
> buddy = __page_find_buddy(page, page_idx, order);
>
> if (bad_range(zone, buddy))
> break;
> ==
>
> checked whether the buddy was in the zone and guaranteed it had a page struct.
>
>
> But clean-up/speed-up patches removed these checks. (I don't know when this happened.)
> Sorry for missing these things.
Heh, sorry to make it sound like it was you who was responsible.
-apw
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 10:29 ` KAMEZAWA Hiroyuki
@ 2006-05-15 10:47 ` KAMEZAWA Hiroyuki
2006-05-15 11:02 ` Andy Whitcroft
2006-05-16 0:31 ` Nick Piggin
2 siblings, 0 replies; 27+ messages in thread
From: KAMEZAWA Hiroyuki @ 2006-05-15 10:47 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: apw, nickpiggin, akpm, mel, davej, tony.luck, linux-kernel,
bob.picco, ak, linux-mm, linuxppc-dev
On Mon, 15 May 2006 19:29:18 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> On Mon, 15 May 2006 11:19:27 +0100
> Andy Whitcroft <apw@shadowen.org> wrote:
>
> > Nick Piggin wrote:
> > > Andy Whitcroft wrote:
> > >
> > >> Interesting. You are correct, there was no config component; at the time
> > >> I didn't have direct evidence that any architecture needed it, only that
> > >> we had an unchecked requirement on zones, a requirement that had only
> > >> recently arrived with the changes to free buddy detection. I note that
> > >
> > >
> > > Recently arrived? Over a year ago with the no-buddy-bitmap patches,
> > > right? Just checking because that's what I'm assuming broke it...
> >
> > Yep, sorry, I forgot I was out of the game for 6 months! And yes, that
> > was when the requirements were altered.
> >
> When the no-bitmap-buddy patches were included,
>
> 1. bad_range() was not covered by CONFIG_VM_DEBUG, so it always ran.
> ==
> static int bad_range(struct zone *zone, struct page *page)
> {
> if (page_to_pfn(page) >= zone->zone_start_pfn + zone->spanned_pages)
> return 1;
> if (page_to_pfn(page) < zone->zone_start_pfn)
> return 1;
> ==
> And, this code
> ==
> buddy = __page_find_buddy(page, page_idx, order);
>
> if (bad_range(zone, buddy))
> break;
> ==
>
> checked whether the buddy was in the zone and guaranteed it had a page struct.
>
>
> But clean-up/speed-up patches removed these checks. (I don't know when this happened.)
> Sorry for missing these things.
>
One more point:
when the above no-bitmap patches were included, the only user of unaligned
zones was ia64, I think. Because ia64 used a virtual mem_map, page_to_pfn(page)
on CONFIG_DISCONTIGMEM doesn't access the page struct itself:
#define page_to_pfn(page) (page - vmemmap)
So it didn't panic; ia64 with vmemmap was safe.
If other arches had used unaligned zones + CONFIG_DISCONTIGMEM, the
unaligned-zones problem would have come out earlier.
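(A toy userspace model of that property, with simplified types: the
conversion is pure pointer arithmetic, so the page struct itself is never
dereferenced, and holes behind the virtual mem_map are harmless as long as
only arithmetic is done:)

#include <stdio.h>

struct page { unsigned long flags; };

static struct page vmemmap_storage[16];
static struct page *vmemmap = vmemmap_storage;  /* base of virtual mem_map */

#define page_to_pfn(page)       ((unsigned long)((page) - vmemmap))
#define pfn_to_page(pfn)        (vmemmap + (pfn))

int main(void)
{
        printf("pfn of page 5: %lu\n", page_to_pfn(pfn_to_page(5)));
        return 0;
}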
-Kame
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 10:19 ` Andy Whitcroft
@ 2006-05-15 10:29 ` KAMEZAWA Hiroyuki
2006-05-15 10:47 ` KAMEZAWA Hiroyuki
` (2 more replies)
0 siblings, 3 replies; 27+ messages in thread
From: KAMEZAWA Hiroyuki @ 2006-05-15 10:29 UTC (permalink / raw)
To: Andy Whitcroft
Cc: nickpiggin, akpm, mel, davej, tony.luck, linux-kernel, bob.picco,
ak, linux-mm, linuxppc-dev
On Mon, 15 May 2006 11:19:27 +0100
Andy Whitcroft <apw@shadowen.org> wrote:
> Nick Piggin wrote:
> > Andy Whitcroft wrote:
> >
> >> Interesting. You are correct, there was no config component; at the time
> >> I didn't have direct evidence that any architecture needed it, only that
> >> we had an unchecked requirement on zones, a requirement that had only
> >> recently arrived with the changes to free buddy detection. I note that
> >
> >
> > Recently arrived? Over a year ago with the no-buddy-bitmap patches,
> > right? Just checking because that's what I'm assuming broke it...
>
> Yep, sorry, I forgot I was out of the game for 6 months! And yes, that
> was when the requirements were altered.
>
When the no-bitmap-buddy patches were included,
1. bad_range() was not covered by CONFIG_VM_DEBUG, so it always ran.
==
static int bad_range(struct zone *zone, struct page *page)
{
if (page_to_pfn(page) >= zone->zone_start_pfn + zone->spanned_pages)
return 1;
if (page_to_pfn(page) < zone->zone_start_pfn)
return 1;
==
And, this code
==
buddy = __page_find_buddy(page, page_idx, order);
if (bad_range(zone, buddy))
break;
==
checked whether the buddy was in the zone and guaranteed it had a page struct.
But clean-up/speed-up patches removed these checks. (I don't know when this happened.)
Sorry for missing these things.
-Kame
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 10:00 ` Nick Piggin
@ 2006-05-15 10:19 ` Andy Whitcroft
2006-05-15 10:29 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 27+ messages in thread
From: Andy Whitcroft @ 2006-05-15 10:19 UTC (permalink / raw)
To: Nick Piggin
Cc: Andrew Morton, Mel Gorman, davej, tony.luck, linux-kernel,
bob.picco, ak, linux-mm, linuxppc-dev
Nick Piggin wrote:
> Andy Whitcroft wrote:
>
>> Interesting. You are correct, there was no config component; at the time
>> I didn't have direct evidence that any architecture needed it, only that
>> we had an unchecked requirement on zones, a requirement that had only
>> recently arrived with the changes to free buddy detection. I note that
>
>
> Recently arrived? Over a year ago with the no-buddy-bitmap patches,
> right? Just checking because that's what I'm assuming broke it...
Yep, sorry, I forgot I was out of the game for 6 months! And yes, that
was when the requirements were altered.
-apw
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 8:21 ` Andy Whitcroft
@ 2006-05-15 10:00 ` Nick Piggin
2006-05-15 10:19 ` Andy Whitcroft
0 siblings, 1 reply; 27+ messages in thread
From: Nick Piggin @ 2006-05-15 10:00 UTC (permalink / raw)
To: Andy Whitcroft
Cc: Andrew Morton, Mel Gorman, davej, tony.luck, linux-kernel,
bob.picco, ak, linux-mm, linuxppc-dev
Andy Whitcroft wrote:
> Interesting. You are correct, there was no config component; at the time
> I didn't have direct evidence that any architecture needed it, only that
> we had an unchecked requirement on zones, a requirement that had only
> recently arrived with the changes to free buddy detection. I note that
Recently arrived? Over a year ago with the no-buddy-bitmap patches,
right? Just checking because that's what I'm assuming broke it...
> MAX_ORDER is 17 for ia64, so that probably accounts for the
> misalignment. It is clear that the reporting is slightly over-zealous,
> as I am reporting zero-sized zones. I'll get that fixed and a patch to
> you. I'll also have a look at the patch as added to -mm and try to get
> the rest of the spelling sorted :-/.
>
> I'll go see if we currently have a machine to test this config on.
--
SUSE Labs, Novell Inc.
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-15 3:31 ` Andrew Morton
@ 2006-05-15 8:21 ` Andy Whitcroft
2006-05-15 10:00 ` Nick Piggin
2006-05-15 12:27 ` Mel Gorman
2006-05-19 14:03 ` Mel Gorman
2 siblings, 1 reply; 27+ messages in thread
From: Andy Whitcroft @ 2006-05-15 8:21 UTC (permalink / raw)
To: Andrew Morton
Cc: Mel Gorman, davej, tony.luck, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
Andrew Morton wrote:
> Mel Gorman <mel@csn.ul.ie> wrote:
>
>>Size zones and holes in an architecture independent manner for ia64.
>>
>
>
> This one makes my ia64 die very early in boot. The trace is pretty useless.
>
> config at http://www.zip.com.au/~akpm/linux/patches/stuff/config-ia64
>
> EFI v1.10 by INTEL: SALsystab=0x3fe4c8c0 ACPI=0x3ff84000 ACPI 2.0=0x3ff83000 MP0
> Early serial console at I/O port 0x2f8 (options '9600n8')
> SAL 3.1: Intel Corp SR870BN4 vers0
> SAL Platform features: BusLock IRQ_Redirection
> SAL: AP wakeup using external interrupt vector 0xf0
> No logical to physical processor mapping available
> iosapic_system_init: Disabling PC-AT compatible 8259 interrupts
> ACPI: Local APIC address c0000000fee00000
> PLATFORM int CPEI (0x3): GSI 22 (level, low) -> CPU 0 (0xc618) vector 30
> register_intr: changing vector 39 from IO-SAPIC-edge to IO-SAPIC-level
> 4 CPUs available, 4 CPUs total
> MCA related initialization done
> node 0 zone DMA missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
> node 0 zone DMA32 missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
> node 0 zone Normal missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
> node 0 zone HighMem missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
> SMP: Allowing 4 CPUs, 0 hotplug CPUs
> Built 1 zonelists
> Kernel command line: BOOT_IMAGE=scsi0:\EFI\redhat\vmlinuz-2.6.17-rc4-mm1 root=/o
> PID hash table entries: 4096 (order: 12, 32768 bytes)
> Console: colour VGA+ 80x25
> Dentry cache hash table entries: 131072 (order: 6, 1048576 bytes)
> Inode-cache hash table entries: 65536 (order: 5, 524288 bytes)
> Placing software IO TLB between 0x4a30000 - 0x8a30000
> Unable to handle kernel NULL pointer dereference (address 0000000000000008)
> swapper[0]: Oops 8813272891392 [1]
> Modules linked in:
>
> Pid: 0, CPU 0, comm: swapper
> psr : 00001010084a6010 ifs : 800000000000060f ip : [<a0000001000e6750>] Notd
> ip is at __free_pages_ok+0x190/0x3c0
> unat: 0000000000000000 pfs : 000000000000060f rsc : 0000000000000003
> rnat: 0000000000ffffff bsps: 00000000000002f9 pr : 80000000afb5956b
> ldrs: 0000000000000000 ccv : 0000000000000000 fpsr: 0009804c8a70433f
> csd : 0930ffff00090000 ssd : 0930ffff00090000
> b0 : a0000001000e6660 b6 : e00000003fe52940 b7 : a000000100790120
> f6 : 1003e6db6db6db6db6db7 f7 : 1003e000000000006dec0
> f8 : 1003e000000000000fb80 f9 : 1003e000000000006e080
> f10 : 1003e000000000000fb40 f11 : 1003e000000000006dec0
> r1 : a000000100af2db0 r2 : 0000000000000001 r3 : 0000000000000000
> r8 : a0000001008f3d38 r9 : 0000000000004000 r10 : 0000000000370400
> r11 : 0000000000004000 r12 : a0000001007b7e10 r13 : a0000001007b0000
> r14 : 0000000000000001 r15 : 0000000100000001 r16 : 0000000100000001
> r17 : 0000000100000001 r18 : 0000000000001041 r19 : 0000000000000000
> r20 : e00000000149df00 r21 : 0000000100000000 r22 : 0000000055555155
> r23 : 00000000ffffffff r24 : e00000000149df08 r25 : 1555555555555155
> r26 : 0000000000000032 r27 : 0000000000000000 r28 : 0000000000000008
> r29 : 0000000000001041 r30 : 0000000000001041 r31 : 0000000000000001
> Unable to handle kernel NULL pointer dereference (address 0000000000000000)
> swapper[0]: Oops 8813272891392 [2]
> Modules linked in:
>
> Pid: 0, CPU 0, comm: swapper
> psr : 0000101008022018 ifs : 8000000000000287 ip : [<a0000001001236c0>] Notd
> ip is at kmem_cache_alloc+0x40/0x100
> unat: 0000000000000000 pfs : 0000000000000712 rsc : 0000000000000003
> rnat: 0000000000000000 bsps: 0000000000000000 pr : 80000000afb59967
> ldrs: 0000000000000000 ccv : 0000000000000000 fpsr: 0009804c8a70033f
> csd : 0930ffff00090000 ssd : 0930ffff00090000
> b0 : a00000010003e450 b6 : a000000100001b50 b7 : a00000010003f320
> f6 : 1003e9e3779b97f4a7c16 f7 : 0ffdb8000000000000000
> f8 : 1003e000000000000007f f9 : 1003e0000000000000379
> f10 : 1003e6db6db6db6db6db7 f11 : 1003e000000000000007f
> r1 : a000000100af2db0 r2 : 0000000000000000 r3 : 0000000000000000
> r8 : 0000000000000000 r9 : 0000000000000000 r10 : a0000001007b0f24
> r11 : 0000000000000000 r12 : a0000001007b7280 r13 : a0000001007b0000
> r14 : 0000000000000000 r15 : 0000000000000000 r16 : a0000001007b7310
> r17 : 0000000000000000 r18 : a0000001007b7478 r19 : 0000000000000000
> r20 : 0000000000000000 r21 : 0000000000000018 r22 : 0000000000000000
> r23 : 0000000000000000 r24 : 0000000000000000 r25 : a0000001007b7308
> r26 : 000000007fffffff r27 : a000000100825520 r28 : a0000001008f3c40
> r29 : a000000100816ca8 r30 : 0000000000000018 r31 : 0000000000000018
>
>
>
> (gdb) l *0xa0000001000e6750
> 0xa0000001000e6750 is in __free_pages_ok (mm.h:324).
> 319 extern void FASTCALL(__page_cache_release(struct page *));
> 320
> 321 static inline int page_count(struct page *page)
> 322 {
> 323 if (unlikely(PageCompound(page)))
> 324 page = (struct page *)page_private(page);
> 325 return atomic_read(&page->_count);
> 326 }
> 327
> 328 static inline void get_page(struct page *page)
>
>
> Note the misaligned pfns.
>
> Andy's (misspelled) CONFIG_UNALIGNED_ZONE_BOUNDRIES patch didn't actually
> include an update to any Kconfig files. But hacking that in by hand didn't
> help.
Interesting. You are correct, there was no config component; at the time
I didn't have direct evidence that any architecture needed it, only that
we had an unchecked requirement on zones, a requirement that had only
recently arrived with the changes to free buddy detection. I note that
MAX_ORDER is 17 for ia64, so that probably accounts for the
misalignment. It is clear that the reporting is slightly over-zealous,
as I am reporting zero-sized zones. I'll get that fixed and a patch to
you. I'll also have a look at the patch as added to -mm and try to get
the rest of the spelling sorted :-/.
I'll go see if we currently have a machine to test this config on.
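(The fix described amounts to not warning about empty zones. A userspace
sketch with made-up values; the 'spanned' guard is the proposed change, not
the in-tree code:)

#include <stdio.h>

#define MAX_ORDER 17

int main(void)
{
        struct { unsigned long start, spanned; const char *name; } zone[] = {
                { 0x1041, 0x3000, "DMA"    },
                { 0x4041, 0,      "Normal" },   /* zero-sized zone */
        };
        unsigned long mask = (1UL << MAX_ORDER) - 1;
        int i;

        for (i = 0; i < 2; i++) {
                if (!zone[i].spanned)
                        continue;       /* skip empty zones: nothing to warn about */
                if (zone[i].start & mask)
                        printf("zone %s misaligned start pfn 0x%lx\n",
                               zone[i].name, zone[i].start);
        }
        return 0;
}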
-apw
* Re: [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-08 14:12 ` [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes Mel Gorman
@ 2006-05-15 3:31 ` Andrew Morton
2006-05-15 8:21 ` Andy Whitcroft
` (2 more replies)
0 siblings, 3 replies; 27+ messages in thread
From: Andrew Morton @ 2006-05-15 3:31 UTC (permalink / raw)
To: Mel Gorman, Andy Whitcroft
Cc: davej, tony.luck, linux-kernel, bob.picco, ak, linux-mm, linuxppc-dev
Mel Gorman <mel@csn.ul.ie> wrote:
>
> Size zones and holes in an architecture independent manner for ia64.
>
This one makes my ia64 die very early in boot. The trace is pretty useless.
config at http://www.zip.com.au/~akpm/linux/patches/stuff/config-ia64
EFI v1.10 by INTEL: SALsystab=0x3fe4c8c0 ACPI=0x3ff84000 ACPI 2.0=0x3ff83000 MP0
Early serial console at I/O port 0x2f8 (options '9600n8')
SAL 3.1: Intel Corp SR870BN4 vers0
SAL Platform features: BusLock IRQ_Redirection
SAL: AP wakeup using external interrupt vector 0xf0
No logical to physical processor mapping available
iosapic_system_init: Disabling PC-AT compatible 8259 interrupts
ACPI: Local APIC address c0000000fee00000
PLATFORM int CPEI (0x3): GSI 22 (level, low) -> CPU 0 (0xc618) vector 30
register_intr: changing vector 39 from IO-SAPIC-edge to IO-SAPIC-level
4 CPUs available, 4 CPUs total
MCA related initialization done
node 0 zone DMA missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
node 0 zone DMA32 missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
node 0 zone Normal missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
node 0 zone HighMem missaligned start pfn, enable UNALIGNED_ZONE_BOUNDRIES
SMP: Allowing 4 CPUs, 0 hotplug CPUs
Built 1 zonelists
Kernel command line: BOOT_IMAGE=scsi0:\EFI\redhat\vmlinuz-2.6.17-rc4-mm1 root=/o
PID hash table entries: 4096 (order: 12, 32768 bytes)
Console: colour VGA+ 80x25
Dentry cache hash table entries: 131072 (order: 6, 1048576 bytes)
Inode-cache hash table entries: 65536 (order: 5, 524288 bytes)
Placing software IO TLB between 0x4a30000 - 0x8a30000
Unable to handle kernel NULL pointer dereference (address 0000000000000008)
swapper[0]: Oops 8813272891392 [1]
Modules linked in:
Pid: 0, CPU 0, comm: swapper
psr : 00001010084a6010 ifs : 800000000000060f ip : [<a0000001000e6750>] Notd
ip is at __free_pages_ok+0x190/0x3c0
unat: 0000000000000000 pfs : 000000000000060f rsc : 0000000000000003
rnat: 0000000000ffffff bsps: 00000000000002f9 pr : 80000000afb5956b
ldrs: 0000000000000000 ccv : 0000000000000000 fpsr: 0009804c8a70433f
csd : 0930ffff00090000 ssd : 0930ffff00090000
b0 : a0000001000e6660 b6 : e00000003fe52940 b7 : a000000100790120
f6 : 1003e6db6db6db6db6db7 f7 : 1003e000000000006dec0
f8 : 1003e000000000000fb80 f9 : 1003e000000000006e080
f10 : 1003e000000000000fb40 f11 : 1003e000000000006dec0
r1 : a000000100af2db0 r2 : 0000000000000001 r3 : 0000000000000000
r8 : a0000001008f3d38 r9 : 0000000000004000 r10 : 0000000000370400
r11 : 0000000000004000 r12 : a0000001007b7e10 r13 : a0000001007b0000
r14 : 0000000000000001 r15 : 0000000100000001 r16 : 0000000100000001
r17 : 0000000100000001 r18 : 0000000000001041 r19 : 0000000000000000
r20 : e00000000149df00 r21 : 0000000100000000 r22 : 0000000055555155
r23 : 00000000ffffffff r24 : e00000000149df08 r25 : 1555555555555155
r26 : 0000000000000032 r27 : 0000000000000000 r28 : 0000000000000008
r29 : 0000000000001041 r30 : 0000000000001041 r31 : 0000000000000001
Unable to handle kernel NULL pointer dereference (address 0000000000000000)
swapper[0]: Oops 8813272891392 [2]
Modules linked in:
Pid: 0, CPU 0, comm: swapper
psr : 0000101008022018 ifs : 8000000000000287 ip : [<a0000001001236c0>] Notd
ip is at kmem_cache_alloc+0x40/0x100
unat: 0000000000000000 pfs : 0000000000000712 rsc : 0000000000000003
rnat: 0000000000000000 bsps: 0000000000000000 pr : 80000000afb59967
ldrs: 0000000000000000 ccv : 0000000000000000 fpsr: 0009804c8a70033f
csd : 0930ffff00090000 ssd : 0930ffff00090000
b0 : a00000010003e450 b6 : a000000100001b50 b7 : a00000010003f320
f6 : 1003e9e3779b97f4a7c16 f7 : 0ffdb8000000000000000
f8 : 1003e000000000000007f f9 : 1003e0000000000000379
f10 : 1003e6db6db6db6db6db7 f11 : 1003e000000000000007f
r1 : a000000100af2db0 r2 : 0000000000000000 r3 : 0000000000000000
r8 : 0000000000000000 r9 : 0000000000000000 r10 : a0000001007b0f24
r11 : 0000000000000000 r12 : a0000001007b7280 r13 : a0000001007b0000
r14 : 0000000000000000 r15 : 0000000000000000 r16 : a0000001007b7310
r17 : 0000000000000000 r18 : a0000001007b7478 r19 : 0000000000000000
r20 : 0000000000000000 r21 : 0000000000000018 r22 : 0000000000000000
r23 : 0000000000000000 r24 : 0000000000000000 r25 : a0000001007b7308
r26 : 000000007fffffff r27 : a000000100825520 r28 : a0000001008f3c40
r29 : a000000100816ca8 r30 : 0000000000000018 r31 : 0000000000000018
(gdb) l *0xa0000001000e6750
0xa0000001000e6750 is in __free_pages_ok (mm.h:324).
319 extern void FASTCALL(__page_cache_release(struct page *));
320
321 static inline int page_count(struct page *page)
322 {
323 if (unlikely(PageCompound(page)))
324 page = (struct page *)page_private(page);
325 return atomic_read(&page->_count);
326 }
327
328 static inline void get_page(struct page *page)
Note the misaligned pfns.
Andy's (misspelled) CONFIG_UNALIGNED_ZONE_BOUNDRIES patch didn't actually
include an update to any Kconfig files. But hacking that in by hand didn't
help.
* [PATCH 5/6] Have ia64 use add_active_range() and free_area_init_nodes
2006-05-08 14:10 [PATCH 0/6] Sizing zones and holes in an architecture independent manner V6 Mel Gorman
@ 2006-05-08 14:12 ` Mel Gorman
2006-05-15 3:31 ` Andrew Morton
0 siblings, 1 reply; 27+ messages in thread
From: Mel Gorman @ 2006-05-08 14:12 UTC (permalink / raw)
To: akpm
Cc: davej, tony.luck, Mel Gorman, linux-kernel, bob.picco, ak,
linux-mm, linuxppc-dev
Size zones and holes in an architecture independent manner for ia64.
arch/ia64/Kconfig | 3 ++
arch/ia64/mm/contig.c | 60 +++++-----------------------------------
arch/ia64/mm/discontig.c | 41 ++++-----------------------
arch/ia64/mm/init.c | 12 ++++++++
include/asm-ia64/meminit.h | 1
5 files changed, 30 insertions(+), 87 deletions(-)
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
diff -rup -X /usr/src/patchset-0.5/bin//dontdiff linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/Kconfig linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/Kconfig
--- linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/Kconfig 2006-05-01 11:36:54.000000000 +0100
+++ linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/Kconfig 2006-05-08 09:21:02.000000000 +0100
@@ -353,6 +353,9 @@ config NODES_SHIFT
MAX_NUMNODES will be 2^(This value).
If in doubt, use the default.
+config ARCH_POPULATES_NODE_MAP
+ def_bool y
+
# VIRTUAL_MEM_MAP and FLAT_NODE_MEM_MAP are functionally equivalent.
# VIRTUAL_MEM_MAP has been retained for historical reasons.
config VIRTUAL_MEM_MAP
diff -rup -X /usr/src/patchset-0.5/bin//dontdiff linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/mm/contig.c linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/mm/contig.c
--- linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/mm/contig.c 2006-04-27 03:19:25.000000000 +0100
+++ linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/mm/contig.c 2006-05-08 09:21:02.000000000 +0100
@@ -26,10 +26,6 @@
#include <asm/sections.h>
#include <asm/mca.h>
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-static unsigned long num_dma_physpages;
-#endif
-
/**
* show_mem - display a memory statistics summary
*
@@ -212,18 +208,6 @@ count_pages (u64 start, u64 end, void *a
return 0;
}
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-static int
-count_dma_pages (u64 start, u64 end, void *arg)
-{
- unsigned long *count = arg;
-
- if (start < MAX_DMA_ADDRESS)
- *count += (min(end, MAX_DMA_ADDRESS) - start) >> PAGE_SHIFT;
- return 0;
-}
-#endif
-
/*
* Set up the page tables.
*/
@@ -232,47 +216,24 @@ void __init
paging_init (void)
{
unsigned long max_dma;
- unsigned long zones_size[MAX_NR_ZONES];
#ifdef CONFIG_VIRTUAL_MEM_MAP
- unsigned long zholes_size[MAX_NR_ZONES];
+ unsigned long nid = 0;
unsigned long max_gap;
#endif
- /* initialize mem_map[] */
-
- memset(zones_size, 0, sizeof(zones_size));
-
num_physpages = 0;
efi_memmap_walk(count_pages, &num_physpages);
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
#ifdef CONFIG_VIRTUAL_MEM_MAP
- memset(zholes_size, 0, sizeof(zholes_size));
-
- num_dma_physpages = 0;
- efi_memmap_walk(count_dma_pages, &num_dma_physpages);
-
- if (max_low_pfn < max_dma) {
- zones_size[ZONE_DMA] = max_low_pfn;
- zholes_size[ZONE_DMA] = max_low_pfn - num_dma_physpages;
- } else {
- zones_size[ZONE_DMA] = max_dma;
- zholes_size[ZONE_DMA] = max_dma - num_dma_physpages;
- if (num_physpages > num_dma_physpages) {
- zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
- zholes_size[ZONE_NORMAL] =
- ((max_low_pfn - max_dma) -
- (num_physpages - num_dma_physpages));
- }
- }
-
max_gap = 0;
+ efi_memmap_walk(register_active_ranges, &nid);
efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
if (max_gap < LARGE_GAP) {
vmem_map = (struct page *) 0;
- free_area_init_node(0, NODE_DATA(0), zones_size, 0,
- zholes_size);
+ free_area_init_nodes(max_dma, max_dma,
+ max_low_pfn, max_low_pfn);
} else {
unsigned long map_size;
@@ -284,19 +245,14 @@ paging_init (void)
efi_memmap_walk(create_mem_map_page_table, NULL);
NODE_DATA(0)->node_mem_map = vmem_map;
- free_area_init_node(0, NODE_DATA(0), zones_size,
- 0, zholes_size);
+ free_area_init_nodes(max_dma, max_dma,
+ max_low_pfn, max_low_pfn);
printk("Virtual mem_map starts at 0x%p\n", mem_map);
}
#else /* !CONFIG_VIRTUAL_MEM_MAP */
- if (max_low_pfn < max_dma)
- zones_size[ZONE_DMA] = max_low_pfn;
- else {
- zones_size[ZONE_DMA] = max_dma;
- zones_size[ZONE_NORMAL] = max_low_pfn - max_dma;
- }
- free_area_init(zones_size);
+ add_active_range(0, 0, max_low_pfn);
+ free_area_init_nodes(max_dma, max_dma, max_low_pfn, max_low_pfn);
#endif /* !CONFIG_VIRTUAL_MEM_MAP */
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
diff -rup -X /usr/src/patchset-0.5/bin//dontdiff linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/mm/discontig.c linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/mm/discontig.c
--- linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/mm/discontig.c 2006-04-27 03:19:25.000000000 +0100
+++ linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/mm/discontig.c 2006-05-08 09:21:02.000000000 +0100
@@ -700,6 +700,7 @@ static __init int count_node_pages(unsig
{
unsigned long end = start + len;
+ add_active_range(node, start >> PAGE_SHIFT, end >> PAGE_SHIFT);
mem_data[node].num_physpages += len >> PAGE_SHIFT;
if (start <= __pa(MAX_DMA_ADDRESS))
mem_data[node].num_dma_physpages +=
@@ -724,9 +725,8 @@ static __init int count_node_pages(unsig
void __init paging_init(void)
{
unsigned long max_dma;
- unsigned long zones_size[MAX_NR_ZONES];
- unsigned long zholes_size[MAX_NR_ZONES];
unsigned long pfn_offset = 0;
+ unsigned long max_pfn = 0;
int node;
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
@@ -743,46 +743,17 @@ void __init paging_init(void)
#endif
for_each_online_node(node) {
- memset(zones_size, 0, sizeof(zones_size));
- memset(zholes_size, 0, sizeof(zholes_size));
-
num_physpages += mem_data[node].num_physpages;
-
- if (mem_data[node].min_pfn >= max_dma) {
- /* All of this node's memory is above ZONE_DMA */
- zones_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- mem_data[node].min_pfn;
- zholes_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- mem_data[node].min_pfn -
- mem_data[node].num_physpages;
- } else if (mem_data[node].max_pfn < max_dma) {
- /* All of this node's memory is in ZONE_DMA */
- zones_size[ZONE_DMA] = mem_data[node].max_pfn -
- mem_data[node].min_pfn;
- zholes_size[ZONE_DMA] = mem_data[node].max_pfn -
- mem_data[node].min_pfn -
- mem_data[node].num_dma_physpages;
- } else {
- /* This node has memory in both zones */
- zones_size[ZONE_DMA] = max_dma -
- mem_data[node].min_pfn;
- zholes_size[ZONE_DMA] = zones_size[ZONE_DMA] -
- mem_data[node].num_dma_physpages;
- zones_size[ZONE_NORMAL] = mem_data[node].max_pfn -
- max_dma;
- zholes_size[ZONE_NORMAL] = zones_size[ZONE_NORMAL] -
- (mem_data[node].num_physpages -
- mem_data[node].num_dma_physpages);
- }
-
pfn_offset = mem_data[node].min_pfn;
#ifdef CONFIG_VIRTUAL_MEM_MAP
NODE_DATA(node)->node_mem_map = vmem_map + pfn_offset;
#endif
- free_area_init_node(node, NODE_DATA(node), zones_size,
- pfn_offset, zholes_size);
+ if (mem_data[node].max_pfn > max_pfn)
+ max_pfn = mem_data[node].max_pfn;
}
+ free_area_init_nodes(max_dma, max_dma, max_pfn, max_pfn);
+
zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
}
diff -rup -X /usr/src/patchset-0.5/bin//dontdiff linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/mm/init.c linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/mm/init.c
--- linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/arch/ia64/mm/init.c 2006-05-01 11:36:54.000000000 +0100
+++ linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/arch/ia64/mm/init.c 2006-05-08 09:21:02.000000000 +0100
@@ -539,6 +539,18 @@ find_largest_hole (u64 start, u64 end, v
last_end = end;
return 0;
}
+
+int __init
+register_active_ranges(u64 start, u64 end, void *nid)
+{
+ BUG_ON(nid == NULL);
+ BUG_ON(*(unsigned long *)nid >= MAX_NUMNODES);
+
+ add_active_range(*(unsigned long *)nid,
+ __pa(start) >> PAGE_SHIFT,
+ __pa(end) >> PAGE_SHIFT);
+ return 0;
+}
#endif /* CONFIG_VIRTUAL_MEM_MAP */
static int __init
diff -rup -X /usr/src/patchset-0.5/bin//dontdiff linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/include/asm-ia64/meminit.h linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/include/asm-ia64/meminit.h
--- linux-2.6.17-rc3-mm1-104-x86_64_use_init_nodes/include/asm-ia64/meminit.h 2006-04-27 03:19:25.000000000 +0100
+++ linux-2.6.17-rc3-mm1-105-ia64_use_init_nodes/include/asm-ia64/meminit.h 2006-05-08 09:21:02.000000000 +0100
@@ -56,6 +56,7 @@ extern void efi_memmap_init(unsigned lon
extern unsigned long vmalloc_end;
extern struct page *vmem_map;
extern int find_largest_hole (u64 start, u64 end, void *arg);
+ extern int register_active_ranges (u64 start, u64 end, void *arg);
extern int create_mem_map_page_table (u64 start, u64 end, void *arg);
#endif
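(To summarise the flow this patch converts ia64 to, condensed from the hunks
above; this is kernel-context pseudocode, not a compilable unit:)

        /* once per contiguous range of present memory on each node */
        add_active_range(nid, start_pfn, end_pfn);

        /* then once for the whole system, passing the maximum pfn of each
         * zone; ia64 has no separate DMA32 or highmem, so the boundaries
         * repeat, as in the calls above */
        free_area_init_nodes(max_dma, max_dma, max_low_pfn, max_low_pfn);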