linux-mm.kvack.org archive mirror
* [PATCH 0/4] Verification and debugging of memory initialisation V4
@ 2008-04-28 19:28 Mel Gorman
  2008-04-28 19:28 ` [PATCH 1/4] Add a basic debugging framework for memory initialisation Mel Gorman
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Mel Gorman @ 2008-04-28 19:28 UTC (permalink / raw)
  To: akpm; +Cc: Mel Gorman, linux-kernel, linux-mm, apw, mingo, clameter

Boot initialisation is very complex, with a large number of
architecture-specific routines, hooks and ordering constraints. While much
of the initialisation is architecture-independent, that code trusts the data
it receives from the architecture layer. This is a mistake, and it has
resulted in a number of difficult-to-diagnose bugs. This patchset adds some
validation and tracing to memory initialisation. It also introduces a few
basic defensive measures. The validation code can be explicitly disabled
for embedded systems.

I believe it's ready for a round of testing in -mm. The patches are based
against 2.6.25-mm1.

Changelog since V3
  o (Andrew) Only allow disabling of verification checks on CONFIG_EMBEDDED
  o (Andy Whitcroft) Documentation and leader fixups
  o (Andy) Rename mminit_debug_printk to mminit_dprintk for consistency
  o (Andy) Rename mminit_verify_pageflags to mminit_verify_pageflags_layout
  o (Andy) Rename mminit_validate_physlimits to mminit_validate_memmodel_limits
  o (Andy) Fix page->flags bitmap overlap checks
  o (Andy) Fix argument type for level in mminit_dprintk()
  o (Mel) Add WARNING error level that is the default logging level for
  	  mminit_loglevel=. Messages printed at this or lower levels will use
	  KERN_WARNING for the printk loglevel. Otherwise KERN_DEBUG is used.

Changelog since V2
  o (Mel) Rebase to 2.6.25-mm1 and rewrite zonelist dump
  o (Mel) Depend on DEBUG_VM instead of DEBUG_KERNEL
  o (Mel) Use __meminitdata instead of __initdata for logging level
  o (Christoph) Get rid of FLAGS_RESERVED references
  o (Christoph) Print out flag usage information
  o (Ingo) Do the verifications by default on DEBUG_VM and control the level
           of verbose logging with mminit_loglevel= instead of
           mminit_debug_level=
  o (Anon) Log at KERN_DEBUG level
  o (Anon) Optimisation to the mminit_debug_printk macro

Changelog since V1
  o (Ingo) Make memory initialisation verification a DEBUG option depending on
    the DEBUG_KERNEL option. By default it will then verify structures but
    tracing can be enabled via the command-line. Without the CONFIG option,
    checks will still be made on PFN ranges passed by the architecture-specific
    code and a warning printed once if a problem is encountered
  o (Ingo) WARN_ON_ONCE when PFNs from the architecture violate SPARSEMEM
    limitations. The warning should be "harmless" as the system will boot
    regardless but it acts as a reminder that bad input is being used.
  o (Anon) Convert mminit_debug_printk() to a macro
  o (Anon) Spelling mistake corrections
  o (Anon) Use of KERN_CONT properly for multiple printks
  o (Mel) Reshuffle the patches so that the zonelist printing is at the
    end of the patchset. This is because -mm requires a different patch to
    print zonelists and this allows the end patch to be temporarily dropped
    when testing against -mm
  o (Mel) Rebase on top of Ingo's sparsemem fix for easier testing
  o (Mel) Document mminit_debug_level=
  o (Mel) Fix check on pageflags where the masks were not being shifted
  o (Mel) The zone ID check should have used page_zonenum, not page_zone_id
  o (Mel) Iterate all zonelists correctly
  o (Mel) Correct typo of SECTIONS_SHIFT

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

* [PATCH 1/4] Add a basic debugging framework for memory initialisation
  2008-04-28 19:28 [PATCH 0/4] Verification and debugging of memory initialisation V4 Mel Gorman
@ 2008-04-28 19:28 ` Mel Gorman
  2008-04-28 19:29 ` [PATCH 2/4] Verify the page links and memory model Mel Gorman
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2008-04-28 19:28 UTC (permalink / raw)
  To: akpm; +Cc: Mel Gorman, linux-kernel, linux-mm, apw, mingo, clameter

This patch adds additional debugging and verification code for memory
initialisation. Once enabled, the verification checks are always run, and
additional debugging information can be requested via the mminit_loglevel=
command-line parameter. The verification code is placed in a new file,
mm/mm_init.c. Ideally, other mm initialisation code will be moved there
over time.
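
As an illustration of the intended usage (not part of the patch; the
"nodeinit" prefix, message and variables below are hypothetical), callers in
the architecture-independent init code log through the new macro like this:

	/* Hypothetical caller: a trace-level message with a "nodeinit" prefix */
	mminit_dprintk(MMINIT_TRACE, "nodeinit",
			"Registering node %d pfns %lu -> %lu\n",
			nid, start_pfn, end_pfn);

With the default mminit_loglevel of 0, nothing is printed. Booting with
mminit_loglevel=4 enables all of the messages, and loglevel=8 may also be
needed so that the KERN_DEBUG output reaches the console (see the
Documentation/kernel-parameters.txt hunk below).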

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---

 Documentation/kernel-parameters.txt |    8 ++++++++
 lib/Kconfig.debug                   |   12 ++++++++++++
 mm/Makefile                         |    1 +
 mm/internal.h                       |   27 +++++++++++++++++++++++++++
 mm/mm_init.c                        |   18 ++++++++++++++++++
 mm/page_alloc.c                     |   16 ++++++++++------
 6 files changed, 76 insertions(+), 6 deletions(-)

diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-clean/Documentation/kernel-parameters.txt linux-2.6.25-mm1-0010_mminit_debug_framework/Documentation/kernel-parameters.txt
--- linux-2.6.25-mm1-clean/Documentation/kernel-parameters.txt	2008-04-22 10:29:56.000000000 +0100
+++ linux-2.6.25-mm1-0010_mminit_debug_framework/Documentation/kernel-parameters.txt	2008-04-28 14:39:59.000000000 +0100
@@ -1185,6 +1185,14 @@ and is between 256 and 4096 characters. 
 
 	mga=		[HW,DRM]
 
+	mminit_loglevel=
+			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
+			parameter allows control of the logging verbosity for
+			the additional memory initialisation checks. A value
+			of 0 disables mminit logging and a level of 4 will
+			log everything. Information is printed at KERN_DEBUG
+			so loglevel=8 may also need to be specified.
+
 	mousedev.tap_time=
 			[MOUSE] Maximum time between finger touching and
 			leaving touchpad surface for touch to be considered
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-clean/lib/Kconfig.debug linux-2.6.25-mm1-0010_mminit_debug_framework/lib/Kconfig.debug
--- linux-2.6.25-mm1-clean/lib/Kconfig.debug	2008-04-22 10:30:04.000000000 +0100
+++ linux-2.6.25-mm1-0010_mminit_debug_framework/lib/Kconfig.debug	2008-04-28 14:39:59.000000000 +0100
@@ -482,6 +482,18 @@ config DEBUG_WRITECOUNT
 
 	  If unsure, say N.
 
+config DEBUG_MEMORY_INIT
+	bool "Debug memory initialisation" if EMBEDDED
+	default !EMBEDDED
+	help
+	  Enable this for additional checks during memory initialisation.
+	  The sanity checks verify aspects of the VM such as the memory model
+	  and other information provided by the architecture. Verbose
+	  information will be printed at KERN_DEBUG loglevel depending 
+	  on the mminit_loglevel= command-line option.
+
+	  If unsure, say Y
+
 config DEBUG_LIST
 	bool "Debug linked list manipulation"
 	depends on DEBUG_KERNEL
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-clean/mm/internal.h linux-2.6.25-mm1-0010_mminit_debug_framework/mm/internal.h
--- linux-2.6.25-mm1-clean/mm/internal.h	2008-04-22 10:30:04.000000000 +0100
+++ linux-2.6.25-mm1-0010_mminit_debug_framework/mm/internal.h	2008-04-28 14:39:59.000000000 +0100
@@ -59,4 +59,31 @@ static inline unsigned long page_order(s
 #define __paginginit __init
 #endif
 
+/* Memory initialisation debug and verification */
+enum mminit_level {
+	MMINIT_WARNING,
+	MMINIT_VERIFY,
+	MMINIT_TRACE
+};
+
+#ifdef CONFIG_DEBUG_MEMORY_INIT
+
+extern int mminit_loglevel;
+
+#define mminit_dprintk(level, prefix, fmt, arg...) \
+do { \
+	if (level < mminit_loglevel) { \
+		printk(level <= MMINIT_WARNING ? KERN_WARNING : KERN_DEBUG \
+			"mminit:: " prefix " " fmt, ##arg); \
+	} \
+} while (0)
+
+#else
+
+static inline void mminit_dprintk(enum mminit_level level,
+				const char *prefix, const char *fmt, ...)
+{
+}
+
+#endif /* CONFIG_DEBUG_MEMORY_INIT */
 #endif
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-clean/mm/Makefile linux-2.6.25-mm1-0010_mminit_debug_framework/mm/Makefile
--- linux-2.6.25-mm1-clean/mm/Makefile	2008-04-22 10:30:04.000000000 +0100
+++ linux-2.6.25-mm1-0010_mminit_debug_framework/mm/Makefile	2008-04-28 14:39:59.000000000 +0100
@@ -33,4 +33,5 @@ obj-$(CONFIG_MIGRATION) += migrate.o
 obj-$(CONFIG_SMP) += allocpercpu.o
 obj-$(CONFIG_QUICKLIST) += quicklist.o
 obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o
+obj-$(CONFIG_DEBUG_MEMORY_INIT) += mm_init.o
 
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-clean/mm/mm_init.c linux-2.6.25-mm1-0010_mminit_debug_framework/mm/mm_init.c
--- linux-2.6.25-mm1-clean/mm/mm_init.c	2008-04-22 12:29:06.000000000 +0100
+++ linux-2.6.25-mm1-0010_mminit_debug_framework/mm/mm_init.c	2008-04-28 14:39:59.000000000 +0100
@@ -0,0 +1,18 @@
+/*
+ * mm_init.c - Memory initialisation verification and debugging
+ *
+ * Copyright 2008 IBM Corporation, 2008
+ * Author Mel Gorman <mel@csn.ul.ie>
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+
+int __meminitdata mminit_loglevel;
+
+static __init int set_mminit_loglevel(char *str)
+{
+	get_option(&str, &mminit_loglevel);
+	return 0;
+}
+early_param("mminit_loglevel", set_mminit_loglevel);
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-clean/mm/page_alloc.c linux-2.6.25-mm1-0010_mminit_debug_framework/mm/page_alloc.c
--- linux-2.6.25-mm1-clean/mm/page_alloc.c	2008-04-22 10:30:04.000000000 +0100
+++ linux-2.6.25-mm1-0010_mminit_debug_framework/mm/page_alloc.c	2008-04-28 14:39:59.000000000 +0100
@@ -3068,7 +3068,8 @@ void __init sparse_memory_present_with_a
 void __init push_node_boundaries(unsigned int nid,
 		unsigned long start_pfn, unsigned long end_pfn)
 {
-	printk(KERN_DEBUG "Entering push_node_boundaries(%u, %lu, %lu)\n",
+	mminit_dprintk(MMINIT_TRACE, "zoneboundary",
+			"Entering push_node_boundaries(%u, %lu, %lu)\n",
 			nid, start_pfn, end_pfn);
 
 	/* Initialise the boundary for this node if necessary */
@@ -3086,7 +3087,8 @@ void __init push_node_boundaries(unsigne
 static void __meminit account_node_boundary(unsigned int nid,
 		unsigned long *start_pfn, unsigned long *end_pfn)
 {
-	printk(KERN_DEBUG "Entering account_node_boundary(%u, %lu, %lu)\n",
+	mminit_dprintk(MMINIT_TRACE, "zoneboundary",
+			"Entering account_node_boundary(%u, %lu, %lu)\n",
 			nid, *start_pfn, *end_pfn);
 
 	/* Return if boundary information has not been provided */
@@ -3460,8 +3462,8 @@ static void __paginginit free_area_init_
 		memmap_pages = (size * sizeof(struct page)) >> PAGE_SHIFT;
 		if (realsize >= memmap_pages) {
 			realsize -= memmap_pages;
-			printk(KERN_DEBUG
-				"  %s zone: %lu pages used for memmap\n",
+			mminit_dprintk(MMINIT_TRACE, "memmap_init",
+				"%s zone: %lu pages used for memmap\n",
 				zone_names[j], memmap_pages);
 		} else
 			printk(KERN_WARNING
@@ -3471,7 +3473,8 @@ static void __paginginit free_area_init_
 		/* Account for reserved pages */
 		if (j == 0 && realsize > dma_reserve) {
 			realsize -= dma_reserve;
-			printk(KERN_DEBUG "  %s zone: %lu pages reserved\n",
+			mminit_dprintk(MMINIT_TRACE, "memmap_init",
+					"%s zone: %lu pages reserved\n",
 					zone_names[0], dma_reserve);
 		}
 
@@ -3609,7 +3612,8 @@ void __init add_active_range(unsigned in
 {
 	int i;
 
-	printk(KERN_DEBUG "Entering add_active_range(%d, %lu, %lu) "
+	mminit_dprintk(MMINIT_TRACE, "memory_register",
+			"Entering add_active_range(%d, %lu, %lu) "
 			  "%d entries of %d used\n",
 			  nid, start_pfn, end_pfn,
 			  nr_nodemap_entries, MAX_ACTIVE_REGIONS);

* [PATCH 2/4] Verify the page links and memory model
  2008-04-28 19:28 [PATCH 0/4] Verification and debugging of memory initialisation V4 Mel Gorman
  2008-04-28 19:28 ` [PATCH 1/4] Add a basic debugging framework for memory initialisation Mel Gorman
@ 2008-04-28 19:29 ` Mel Gorman
  2008-04-28 19:29 ` [PATCH 3/4] Make defensive checks around PFN values registered for memory usage Mel Gorman
  2008-04-28 19:29 ` [PATCH 4/4] Print out the zonelists on request for manual verification Mel Gorman
  3 siblings, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2008-04-28 19:29 UTC (permalink / raw)
  To: akpm; +Cc: Mel Gorman, linux-kernel, linux-mm, apw, mingo, clameter

This patch prints out information on how the page flags are being used if
mminit_loglevel is MMINIT_VERIFY or higher, and it unconditionally performs
sanity checks on the flags regardless of loglevel. When the page flags are
updated with section, node and zone information, a check is made to ensure
the values can be retrieved correctly. Finally, we confirm that pfn_to_page
and page_to_pfn are correct inverses of each other.
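
Condensed from the hunks below, the invariant asserted for every page during
memmap initialisation is roughly the following sketch:

	/* Sketch only: after set_page_links(), the node, zone and pfn must be
	 * recoverable from the struct page, and pfn_to_page()/page_to_pfn()
	 * must be inverses of each other. */
	page = pfn_to_page(pfn);
	set_page_links(page, zone, nid, pfn);
	BUG_ON(page_to_nid(page) != nid);
	BUG_ON(page_zonenum(page) != zone);
	BUG_ON(page_to_pfn(page) != pfn);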

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---

 mm/internal.h   |   12 ++++++++
 mm/mm_init.c    |   70 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c |    6 ++++
 3 files changed, 88 insertions(+)

diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0010_mminit_debug_framework/mm/internal.h linux-2.6.25-mm1-0020_memmap_init_debug/mm/internal.h
--- linux-2.6.25-mm1-0010_mminit_debug_framework/mm/internal.h	2008-04-28 14:39:59.000000000 +0100
+++ linux-2.6.25-mm1-0020_memmap_init_debug/mm/internal.h	2008-04-28 14:41:48.000000000 +0100
@@ -78,6 +78,10 @@ do { \
 	} \
 } while (0)
 
+extern void mminit_verify_pageflags_layout(void);
+extern void mminit_verify_page_links(struct page *page,
+		enum zone_type zone, unsigned long nid, unsigned long pfn);
+
 #else
 
 static inline void mminit_dprintk(enum mminit_level level,
@@ -85,5 +89,13 @@ static inline void mminit_dprintk(unsign
 {
 }
 
+static inline void mminit_verify_pageflags_layout(void)
+{
+}
+
+static inline void mminit_verify_page_links(struct page *page,
+		enum zone_type zone, unsigned long nid, unsigned long pfn)
+{
+}
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
 #endif
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0010_mminit_debug_framework/mm/mm_init.c linux-2.6.25-mm1-0020_memmap_init_debug/mm/mm_init.c
--- linux-2.6.25-mm1-0010_mminit_debug_framework/mm/mm_init.c	2008-04-28 14:39:59.000000000 +0100
+++ linux-2.6.25-mm1-0020_memmap_init_debug/mm/mm_init.c	2008-04-28 14:41:48.000000000 +0100
@@ -7,9 +7,79 @@
  */
 #include <linux/kernel.h>
 #include <linux/init.h>
+#include "internal.h"
 
 int __meminitdata mminit_loglevel;
 
+void __init mminit_verify_pageflags_layout(void)
+{
+	int shift, width;
+	unsigned long or_mask, add_mask;
+
+	shift = 8 * sizeof(unsigned long);
+	width = shift - SECTIONS_WIDTH - NODES_WIDTH - ZONES_WIDTH;
+	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_widths",
+		"Section %d Node %d Zone %d Flags %d\n",
+		SECTIONS_WIDTH,
+		NODES_WIDTH,
+		ZONES_WIDTH,
+		NR_PAGEFLAGS);
+	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_shifts",
+		"Section %d Node %d Zone %d\n",
+#ifdef SECTIONS_SHIFT
+		SECTIONS_SHIFT,
+#else
+		0,
+#endif
+		NODES_SHIFT,
+		ZONES_SHIFT);
+	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_offsets",
+		"Section %d Node %d Zone %d\n",
+		SECTIONS_PGSHIFT,
+		NODES_PGSHIFT,
+		ZONES_PGSHIFT);
+	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_zoneid",
+		"Zone ID: %d -> %d\n",
+		ZONEID_PGOFF, ZONEID_PGOFF + ZONEID_SHIFT);
+	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_usage",
+		"location: %d -> %d unused %d -> %d flags %d -> %d\n",
+		shift, width, width, NR_PAGEFLAGS, NR_PAGEFLAGS, 0);
+#ifdef NODE_NOT_IN_PAGE_FLAGS
+	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_nodeflags",
+		"Node not in page flags");
+#endif
+
+	if (SECTIONS_WIDTH) {
+		shift -= SECTIONS_WIDTH;
+		BUG_ON(shift != SECTIONS_PGSHIFT);
+	}
+	if (NODES_WIDTH) {
+		shift -= NODES_WIDTH;
+		BUG_ON(shift != NODES_PGSHIFT);
+	}
+	if (ZONES_WIDTH) {
+		shift -= ZONES_WIDTH;
+		BUG_ON(shift != ZONES_PGSHIFT);
+	}
+
+	/* Check for bitmask overlaps */
+	or_mask = (ZONES_MASK << ZONES_PGSHIFT) |
+			(NODES_MASK << NODES_PGSHIFT) |
+			(SECTIONS_MASK << SECTIONS_PGSHIFT);
+	add_mask = (ZONES_MASK << ZONES_PGSHIFT) +
+			(NODES_MASK << NODES_PGSHIFT) +
+			(SECTIONS_MASK << SECTIONS_PGSHIFT);
+	BUG_ON(or_mask != add_mask);
+}
+
+void __meminit mminit_verify_page_links(struct page *page, enum zone_type zone,
+			unsigned long nid, unsigned long pfn)
+{
+	BUG_ON(page_to_nid(page) != nid);
+	BUG_ON(page_zonenum(page) != zone);
+	BUG_ON(page_to_pfn(page) != pfn);
+}
+
 static __init int set_mminit_loglevel(char *str)
 {
 	get_option(&str, &mminit_loglevel);
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0010_mminit_debug_framework/mm/page_alloc.c linux-2.6.25-mm1-0020_memmap_init_debug/mm/page_alloc.c
--- linux-2.6.25-mm1-0010_mminit_debug_framework/mm/page_alloc.c	2008-04-28 14:39:59.000000000 +0100
+++ linux-2.6.25-mm1-0020_memmap_init_debug/mm/page_alloc.c	2008-04-28 14:41:48.000000000 +0100
@@ -2637,6 +2637,7 @@ void __meminit memmap_init_zone(unsigned
 		}
 		page = pfn_to_page(pfn);
 		set_page_links(page, zone, nid, pfn);
+		mminit_verify_page_links(page, zone, nid, pfn);
 		init_page_count(page);
 		reset_page_mapcount(page);
 		pc = page_get_page_cgroup(page);
@@ -2939,6 +2940,10 @@ __meminit int init_currently_empty_zone(
 
 	zone->zone_start_pfn = zone_start_pfn;
 
+	mminit_dprintk(MMINIT_TRACE, "memmap_init",
+			"Initialising map node %d zone %d pfns %lu -> %lu\n",
+			pgdat->node_id, zone_idx(zone),
+			zone_start_pfn, (zone_start_pfn + size));
 	memmap_init(size, pgdat->node_id, zone_idx(zone), zone_start_pfn);
 
 	zone_init_free_lists(zone);
@@ -4012,6 +4017,7 @@ void __init free_area_init_nodes(unsigne
 						early_node_map[i].end_pfn);
 
 	/* Initialise every node */
+	mminit_verify_pageflags_layout();
 	setup_nr_node_ids();
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);

* [PATCH 3/4] Make defensive checks around PFN values registered for memory usage
  2008-04-28 19:28 [PATCH 0/4] Verification and debugging of memory initialisation V4 Mel Gorman
  2008-04-28 19:28 ` [PATCH 1/4] Add a basic debugging framework for memory initialisation Mel Gorman
  2008-04-28 19:29 ` [PATCH 2/4] Verify the page links and memory model Mel Gorman
@ 2008-04-28 19:29 ` Mel Gorman
  2008-04-28 19:29 ` [PATCH 4/4] Print out the zonelists on request for manual verification Mel Gorman
  3 siblings, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2008-04-28 19:29 UTC (permalink / raw)
  To: akpm; +Cc: Mel Gorman, linux-kernel, linux-mm, apw, mingo, clameter

There are a number of different views of how much memory is currently
active: the arch-independent zone-sizing view, the bootmem allocator's view
and the memory model's view. Architectures register this information at
different times, and the views are not necessarily in sync, particularly
with respect to some SPARSEMEM limitations. This patch introduces
mminit_validate_memmodel_limits(), which is able to validate and correct PFN
ranges with respect to the memory model. Currently only SPARSEMEM validates
itself.
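
The clamping itself amounts to the following sketch, condensed from the
mm/sparse.c hunk below (the real code also warns once via WARN_ON_ONCE() and
logs through mminit_dprintk()):

	/* Clamp a registered PFN range to what SPARSEMEM can address */
	unsigned long max_sparsemem_pfn = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

	if (*start_pfn > max_sparsemem_pfn)		/* whole range out of scope */
		*start_pfn = *end_pfn = max_sparsemem_pfn;
	else if (*end_pfn > max_sparsemem_pfn)		/* trim the tail of the range */
		*end_pfn = max_sparsemem_pfn;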

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---

 mm/bootmem.c    |    1 +
 mm/internal.h   |   12 ++++++++++++
 mm/page_alloc.c |    2 ++
 mm/sparse.c     |   37 +++++++++++++++++++++++++++++--------
 4 files changed, 44 insertions(+), 8 deletions(-)

diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0020_memmap_init_debug/mm/bootmem.c linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/bootmem.c
--- linux-2.6.25-mm1-0020_memmap_init_debug/mm/bootmem.c	2008-04-22 10:30:04.000000000 +0100
+++ linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/bootmem.c	2008-04-28 14:41:59.000000000 +0100
@@ -91,6 +91,7 @@ static unsigned long __init init_bootmem
 	bootmem_data_t *bdata = pgdat->bdata;
 	unsigned long mapsize;
 
+	mminit_validate_memmodel_limits(&start, &end);
 	bdata->node_bootmem_map = phys_to_virt(PFN_PHYS(mapstart));
 	bdata->node_boot_start = PFN_PHYS(start);
 	bdata->node_low_pfn = end;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0020_memmap_init_debug/mm/internal.h linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/internal.h
--- linux-2.6.25-mm1-0020_memmap_init_debug/mm/internal.h	2008-04-28 14:41:48.000000000 +0100
+++ linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/internal.h	2008-04-28 14:41:59.000000000 +0100
@@ -98,4 +98,16 @@ static inline void mminit_verify_page_li
 {
 }
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
+
+/* mminit_validate_memmodel_limits is independent of CONFIG_DEBUG_MEMORY_INIT */
+#if defined(CONFIG_SPARSEMEM)
+extern void mminit_validate_memmodel_limits(unsigned long *start_pfn,
+				unsigned long *end_pfn);
+#else
+static inline void mminit_validate_memmodel_limits(unsigned long *start_pfn,
+				unsigned long *end_pfn)
+{
+}
+#endif /* CONFIG_SPARSEMEM */
+
 #endif
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0020_memmap_init_debug/mm/page_alloc.c linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/page_alloc.c
--- linux-2.6.25-mm1-0020_memmap_init_debug/mm/page_alloc.c	2008-04-28 14:41:48.000000000 +0100
+++ linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/page_alloc.c	2008-04-28 14:41:59.000000000 +0100
@@ -3623,6 +3623,8 @@ void __init add_active_range(unsigned in
 			  nid, start_pfn, end_pfn,
 			  nr_nodemap_entries, MAX_ACTIVE_REGIONS);
 
+	mminit_validate_memmodel_limits(&start_pfn, &end_pfn);
+
 	/* Merge with existing active regions if possible */
 	for (i = 0; i < nr_nodemap_entries; i++) {
 		if (early_node_map[i].nid != nid)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0020_memmap_init_debug/mm/sparse.c linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/sparse.c
--- linux-2.6.25-mm1-0020_memmap_init_debug/mm/sparse.c	2008-04-22 10:30:04.000000000 +0100
+++ linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/sparse.c	2008-04-28 14:41:59.000000000 +0100
@@ -12,6 +12,7 @@
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
+#include "internal.h"
 
 /*
  * Permanent SPARSEMEM data:
@@ -147,22 +148,41 @@ static inline int sparse_early_nid(struc
 	return (section->section_mem_map >> SECTION_NID_SHIFT);
 }
 
-/* Record a memory area against a node. */
-void __init memory_present(int nid, unsigned long start, unsigned long end)
+/* Validate the physical addressing limitations of the model */
+void __meminit mminit_validate_memmodel_limits(unsigned long *start_pfn,
+						unsigned long *end_pfn)
 {
-	unsigned long max_arch_pfn = 1UL << (MAX_PHYSMEM_BITS-PAGE_SHIFT);
-	unsigned long pfn;
+	unsigned long max_sparsemem_pfn = 1UL << (MAX_PHYSMEM_BITS-PAGE_SHIFT);
 
 	/*
 	 * Sanity checks - do not allow an architecture to pass
 	 * in larger pfns than the maximum scope of sparsemem:
 	 */
-	if (start >= max_arch_pfn)
-		return;
-	if (end >= max_arch_pfn)
-		end = max_arch_pfn;
+	if (*start_pfn > max_sparsemem_pfn) {
+		mminit_dprintk(MMINIT_WARNING, "pfnvalidation",
+			"Start of range %lu -> %lu exceeds SPARSEMEM max %lu\n",
+			*start_pfn, *end_pfn, max_sparsemem_pfn);
+		WARN_ON_ONCE(1);
+		*start_pfn = max_sparsemem_pfn;
+		*end_pfn = max_sparsemem_pfn;
+	}
+
+	if (*end_pfn > max_sparsemem_pfn) {
+		mminit_dprintk(MMINIT_WARNING, "pfnvalidation",
+			"End of range %lu -> %lu exceeds SPARSEMEM max %lu\n",
+			*start_pfn, *end_pfn, max_sparsemem_pfn);
+		WARN_ON_ONCE(1);
+		*end_pfn = max_sparsemem_pfn;
+	}
+}
+
+/* Record a memory area against a node. */
+void __init memory_present(int nid, unsigned long start, unsigned long end)
+{
+	unsigned long pfn;
 
 	start &= PAGE_SECTION_MASK;
+	mminit_validate_memmodel_limits(&start, &end);
 	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
 		unsigned long section = pfn_to_section_nr(pfn);
 		struct mem_section *ms;
@@ -187,6 +207,7 @@ unsigned long __init node_memmap_size_by
 	unsigned long pfn;
 	unsigned long nr_pages = 0;
 
+	mminit_validate_memmodel_limits(&start_pfn, &end_pfn);
 	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
 		if (nid != early_pfn_to_nid(pfn))
 			continue;

* [PATCH 4/4] Print out the zonelists on request for manual verification
  2008-04-28 19:28 [PATCH 0/4] Verification and debugging of memory initialisation V4 Mel Gorman
                   ` (2 preceding siblings ...)
  2008-04-28 19:29 ` [PATCH 3/4] Make defensive checks around PFN values registered for memory usage Mel Gorman
@ 2008-04-28 19:29 ` Mel Gorman
  3 siblings, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2008-04-28 19:29 UTC (permalink / raw)
  To: akpm; +Cc: Mel Gorman, linux-kernel, linux-mm, apw, mingo, clameter

This patch prints out the zonelists during boot for manual verification by
the user if mminit_loglevel is MMINIT_VERIFY or higher.
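
The output is purely informational. On a hypothetical single-node x86_64
machine, the resulting dmesg lines would look something like the following
(the exact zones and ordering depend on the configuration; NUMA kernels
additionally print a "thisnode" list per node):

	mminit::zonelist general 0:Normal = 0:Normal 0:DMA32 0:DMA
	mminit::zonelist general 0:DMA32 = 0:DMA32 0:DMA
	mminit::zonelist general 0:DMA = 0:DMA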

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---

 mm/internal.h   |    5 +++++
 mm/mm_init.c    |   45 +++++++++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c |    1 +
 3 files changed, 51 insertions(+)

diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/internal.h linux-2.6.25-mm1-0030_display_zonelist/mm/internal.h
--- linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/internal.h	2008-04-28 14:41:59.000000000 +0100
+++ linux-2.6.25-mm1-0030_display_zonelist/mm/internal.h	2008-04-28 14:47:09.000000000 +0100
@@ -81,6 +81,7 @@ do { \
 extern void mminit_verify_pageflags_layout(void);
 extern void mminit_verify_page_links(struct page *page,
 		enum zone_type zone, unsigned long nid, unsigned long pfn);
+extern void mminit_verify_zonelist(void);
 
 #else
 
@@ -97,6 +98,10 @@ static inline void mminit_verify_page_li
 		enum zone_type zone, unsigned long nid, unsigned long pfn)
 {
 }
+
+static inline void mminit_verify_zonelist(void)
+{
+}
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
 
 /* mminit_validate_memmodel_limits is independent of CONFIG_DEBUG_MEMORY_INIT */
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/mm_init.c linux-2.6.25-mm1-0030_display_zonelist/mm/mm_init.c
--- linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/mm_init.c	2008-04-28 14:41:48.000000000 +0100
+++ linux-2.6.25-mm1-0030_display_zonelist/mm/mm_init.c	2008-04-28 14:47:09.000000000 +0100
@@ -11,6 +11,51 @@
 
 int __meminitdata mminit_loglevel;
 
+/* The zonelists are simply reported, validation is manual. */
+void mminit_verify_zonelist(void)
+{
+	int nid;
+
+	if (mminit_loglevel < MMINIT_VERIFY)
+		return;
+
+	for_each_online_node(nid) {
+		pg_data_t *pgdat = NODE_DATA(nid);
+		struct zone *zone;
+		struct zoneref *z;
+		struct zonelist *zonelist;
+		int i, listid, zoneid;
+
+		BUG_ON(MAX_ZONELISTS > 2);
+		for (i = 0; i < MAX_ZONELISTS * MAX_NR_ZONES; i++) {
+
+			/* Identify the zone and nodelist */
+			zoneid = i % MAX_NR_ZONES;
+			listid = i / MAX_NR_ZONES;
+			zonelist = &pgdat->node_zonelists[listid];
+			zone = &pgdat->node_zones[zoneid];
+			if (!populated_zone(zone))
+				continue;
+
+			/* Print information about the zonelist */
+			printk(KERN_DEBUG "mminit::zonelist %s %d:%s = ",
+				listid > 0 ? "thisnode" : "general", nid,
+				zone->name);
+
+			/* Iterate the zonelist */
+			for_each_zone_zonelist(zone, z, zonelist, zoneid) {
+#ifdef CONFIG_NUMA
+				printk(KERN_CONT "%d:%s ",
+					zone->node, zone->name);
+#else
+				printk(KERN_CONT "0:%s ", zone->name);
+#endif /* CONFIG_NUMA */
+			}
+			printk(KERN_CONT "\n");
+		}
+	}
+}
+
 void __init mminit_verify_pageflags_layout(void)
 {
 	int shift, width;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/page_alloc.c linux-2.6.25-mm1-0030_display_zonelist/mm/page_alloc.c
--- linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/page_alloc.c	2008-04-28 14:41:59.000000000 +0100
+++ linux-2.6.25-mm1-0030_display_zonelist/mm/page_alloc.c	2008-04-28 14:47:09.000000000 +0100
@@ -2456,6 +2456,7 @@ void build_all_zonelists(void)
 
 	if (system_state == SYSTEM_BOOTING) {
 		__build_all_zonelists(NULL);
+		mminit_verify_zonelist();
 		cpuset_init_current_mems_allowed();
 	} else {
 		/* we have to stop all cpus to guarantee there is no user

* [PATCH 4/4] Print out the zonelists on request for manual verification
  2008-04-22 18:31 [PATCH 0/4] Verification and debugging of memory initialisation V3 Mel Gorman
@ 2008-04-22 18:32 ` Mel Gorman
  0 siblings, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2008-04-22 18:32 UTC (permalink / raw)
  To: linux-mm; +Cc: Mel Gorman, mingo, linux-kernel, clameter

This patch prints out the zonelists during boot for manual verification by
the user if mminit_loglevel is MMINIT_VERIFY or higher. This is useful for
checking whether the zonelists were somehow corrupted during initialisation.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---

 mm/internal.h   |    5 +++++
 mm/mm_init.c    |   45 +++++++++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c |    1 +
 3 files changed, 51 insertions(+)

diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/internal.h linux-2.6.25-mm1-0030_display_zonelist/mm/internal.h
--- linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/internal.h	2008-04-22 17:49:48.000000000 +0100
+++ linux-2.6.25-mm1-0030_display_zonelist/mm/internal.h	2008-04-22 17:50:06.000000000 +0100
@@ -80,6 +80,7 @@ do { \
 extern void mminit_verify_pageflags(void);
 extern void mminit_verify_page_links(struct page *page,
 		enum zone_type zone, unsigned long nid, unsigned long pfn);
+extern void mminit_verify_zonelist(void);
 
 #else
 
@@ -96,6 +97,10 @@ static inline void mminit_verify_page_li
 		enum zone_type zone, unsigned long nid, unsigned long pfn)
 {
 }
+
+static inline void mminit_verify_zonelist(void)
+{
+}
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
 
 /* mminit_validate_physlimits is independent of CONFIG_DEBUG_MEMORY_INIT */
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/mm_init.c linux-2.6.25-mm1-0030_display_zonelist/mm/mm_init.c
--- linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/mm_init.c	2008-04-22 17:49:33.000000000 +0100
+++ linux-2.6.25-mm1-0030_display_zonelist/mm/mm_init.c	2008-04-22 17:50:06.000000000 +0100
@@ -11,6 +11,51 @@
 
 int __meminitdata mminit_loglevel;
 
+/* Note that the verification of correctness is required from the user */
+void mminit_verify_zonelist(void)
+{
+	int nid;
+
+	if (mminit_loglevel < MMINIT_VERIFY)
+		return;
+
+	for_each_online_node(nid) {
+		pg_data_t *pgdat = NODE_DATA(nid);
+		struct zone *zone;
+		struct zoneref *z;
+		struct zonelist *zonelist;
+		int i, listid, zoneid;
+
+		BUG_ON(MAX_ZONELISTS > 2);
+		for (i = 0; i < MAX_ZONELISTS * MAX_NR_ZONES; i++) {
+
+			/* Identify the zone and nodelist */
+			zoneid = i % MAX_NR_ZONES;
+			listid = i / MAX_NR_ZONES;
+			zonelist = &pgdat->node_zonelists[listid];
+			zone = &pgdat->node_zones[zoneid];
+			if (!populated_zone(zone))
+				continue;
+
+			/* Print information about the zonelist */
+			printk(KERN_DEBUG "mminit::zonelist %s %d:%s = ",
+				listid > 0 ? "thisnode" : "general", nid,
+				zone->name);
+
+			/* Iterate the zonelist */
+			for_each_zone_zonelist(zone, z, zonelist, zoneid) {
+#ifdef CONFIG_NUMA
+				printk(KERN_CONT "%d:%s ",
+						zone->node, zone->name);
+#else
+				printk(KERN_CONT "0:%s ", zone->name);
+#endif
+			}
+			printk(KERN_CONT "\n");
+		}
+	}
+}
+
 void __init mminit_verify_pageflags(void)
 {
 	int shift, width;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/page_alloc.c linux-2.6.25-mm1-0030_display_zonelist/mm/page_alloc.c
--- linux-2.6.25-mm1-0025_defensive_pfn_checks/mm/page_alloc.c	2008-04-22 17:49:48.000000000 +0100
+++ linux-2.6.25-mm1-0030_display_zonelist/mm/page_alloc.c	2008-04-22 17:50:06.000000000 +0100
@@ -2456,6 +2456,7 @@ void build_all_zonelists(void)
 
 	if (system_state == SYSTEM_BOOTING) {
 		__build_all_zonelists(NULL);
+		mminit_verify_zonelist();
 		cpuset_init_current_mems_allowed();
 	} else {
 		/* we have to stop all cpus to guarantee there is no user

* [PATCH 4/4] Print out the zonelists on request for manual verification
  2008-04-17  0:06 [PATCH 0/4] Verification and debugging of memory initialisation V2 Mel Gorman
@ 2008-04-17  0:07 ` Mel Gorman
  0 siblings, 0 replies; 7+ messages in thread
From: Mel Gorman @ 2008-04-17  0:07 UTC (permalink / raw)
  To: linux-mm; +Cc: Mel Gorman, mingo, linux-kernel

This patch prints out the zonelists during boot for manual verification
by the user. This is useful for checking whether the zonelists were somehow
corrupted during initialisation. Note that this patch will not work in -mm
due to differences in how zonelists are used.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---

 mm/internal.h   |    5 +++++
 mm/mm_init.c    |   40 ++++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c |    1 +
 3 files changed, 46 insertions(+)

diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-rc9-0025_defensive_pfn_checks/mm/internal.h linux-2.6.25-rc9-0030_display_zonelist/mm/internal.h
--- linux-2.6.25-rc9-0025_defensive_pfn_checks/mm/internal.h	2008-04-17 00:20:47.000000000 +0100
+++ linux-2.6.25-rc9-0030_display_zonelist/mm/internal.h	2008-04-17 00:21:07.000000000 +0100
@@ -81,6 +81,7 @@ do { \
 extern void mminit_verify_pageflags(void);
 extern void mminit_verify_page_links(struct page *page,
 		enum zone_type zone, unsigned long nid, unsigned long pfn);
+extern void mminit_verify_zonelist(void);
 
 #else
 
@@ -97,6 +98,10 @@ static inline void mminit_verify_page_li
 		enum zone_type zone, unsigned long nid, unsigned long pfn)
 {
 }
+
+static inline void mminit_verify_zonelist(void)
+{
+}
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
 
 /* mminit_validate_physlimits is independent of CONFIG_DEBUG_MEMORY_INIT */
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-rc9-0025_defensive_pfn_checks/mm/mm_init.c linux-2.6.25-rc9-0030_display_zonelist/mm/mm_init.c
--- linux-2.6.25-rc9-0025_defensive_pfn_checks/mm/mm_init.c	2008-04-17 00:20:33.000000000 +0100
+++ linux-2.6.25-rc9-0030_display_zonelist/mm/mm_init.c	2008-04-17 00:21:07.000000000 +0100
@@ -11,6 +11,46 @@
 
 int __initdata mminit_debug_level;
 
+/* Note that the verification of correctness is required from the user */
+void mminit_verify_zonelist(void)
+{
+	int nid;
+
+	if (mminit_debug_level < MMINIT_VERIFY)
+		return;
+
+	for_each_online_node(nid) {
+		pg_data_t *pgdat = NODE_DATA(nid);
+		struct zone *zone;
+		struct zone **z;
+		int listid;
+
+		for (listid = 0; listid < MAX_ZONELISTS; listid++) {
+			zone = &pgdat->node_zones[listid % MAX_NR_ZONES];
+
+			if (!populated_zone(zone))
+				continue;
+
+			printk(KERN_INFO "mminit::zonelist %s %d:%s = ",
+				listid >= MAX_NR_ZONES ? "thisnode" : "general",
+				nid,
+				zone->name);
+			z = pgdat->node_zonelists[listid].zones;
+
+			while (*z != NULL) {
+#ifdef CONFIG_NUMA
+				printk(KERN_CONT "%d:%s ",
+						(*z)->node, (*z)->name);
+#else
+				printk(KERN_CONT "0:%s ", (*z)->name);
+#endif
+				z++;
+			}
+			printk(KERN_CONT "\n");
+		}
+	}
+}
+
 void __init mminit_verify_pageflags(void)
 {
 	unsigned long shift;
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff linux-2.6.25-rc9-0025_defensive_pfn_checks/mm/page_alloc.c linux-2.6.25-rc9-0030_display_zonelist/mm/page_alloc.c
--- linux-2.6.25-rc9-0025_defensive_pfn_checks/mm/page_alloc.c	2008-04-17 00:20:47.000000000 +0100
+++ linux-2.6.25-rc9-0030_display_zonelist/mm/page_alloc.c	2008-04-17 00:21:07.000000000 +0100
@@ -2353,6 +2353,7 @@ void build_all_zonelists(void)
 
 	if (system_state == SYSTEM_BOOTING) {
 		__build_all_zonelists(NULL);
+		mminit_verify_zonelist();
 		cpuset_init_current_mems_allowed();
 	} else {
 		/* we have to stop all cpus to guarantee there is no user
