* [RFC] memory-layout-free zones (for review) [3/3] fix for_each_page_in_zone
From: KAMEZAWA Hiroyuki @ 2006-02-23 9:00 UTC
To: linux-mm
To remove zone_start_pfn/zone_spanned_pages, for_each_page_in_zone()
must be modified. This patch uses the pgdat instead of the zone and calls
page_zone() to check whether a page is in the zone.
Maybe slower (>_<......
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Index: node-hot-add2/include/linux/mm.h
===================================================================
--- node-hot-add2.orig/include/linux/mm.h
+++ node-hot-add2/include/linux/mm.h
@@ -549,6 +549,64 @@ void page_address_init(void);
#define page_address_init() do { } while(0)
#endif
+
+/*
+ * These inline function for for_each_page_in_zone can work
+ * even if CONFIG_SPARSEMEM=y.
+ */
+static inline struct page *first_page_in_zone(struct zone *zone)
+{
+ struct pglist_data *pgdat;
+ unsigned long start_pfn;
+ unsigned long i = 0;
+
+ if (!populated_zone(zone))
+ return NULL;
+
+ pgdat = zone->zone_pgdat;
+ zone = pgdat->node_start_pfn;
+
+ for (i = 0; i < pgdat->zone_spanned_pages; i++) {
+ if (pfn_valid(start_pfn + i) && page_zone(page) == zone)
+ break;
+ }
+ BUG_ON(i == pgdat->node_spanned_pages); /* zone is populated */
+ return pfn_to_page(start_pfn + i);
+}
+
+static inline struct page *next_page_in_zone(struct page *page,
+ struct zone *zone)
+{
+ struct pglist_data *pgdat;
+ unsigned long start_pfn;
+ unsigned long i;
+
+ if (!populated_zone(zone))
+ return NULL;
+ pgdat = zone->zone_pgdat;
+ start_pfn = pgdat->node_start_pfn;
+ i = page_to_pfn(page) - start_pfn;
+
+ for (i = i + 1; i < pgdat->node_spanned_pages; i++) {
+ if (pfn_vlaid(start_pfn + i) && page_zone(page) == zone)
+ break;
+ }
+ if (i == pgdat->node_spanned_pages)
+ return NULL;
+ return pfn_to_page(start_pfn + i);
+}
+
+/**
+ * for_each_page_in_zone -- helper macro to iterate over all pages in a zone.
+ * @page - pointer to page
+ * @zone - pointer to zone
+ *
+ */
+#define for_each_page_in_zone(page, zone) \
+ for (page = (first_page_in_zone((zone))); \
+ page; \
+ page = next_page_in_zone(page, (zone)));
+
/*
* On an anonymous page mapped into a user virtual memory area,
* page->mapping points to its anon_vma, not to a struct address_space;
Index: node-hot-add2/include/linux/mmzone.h
===================================================================
--- node-hot-add2.orig/include/linux/mmzone.h
+++ node-hot-add2/include/linux/mmzone.h
@@ -457,53 +457,6 @@ static inline struct zone *next_zone(str
zone; \
zone = next_zone(zone))
-/*
- * These inline function for for_each_page_in_zone can work
- * even if CONFIG_SPARSEMEM=y.
- */
-static inline struct page *first_page_in_zone(struct zone *zone)
-{
- unsigned long start_pfn = zone->zone_start_pfn;
- unsigned long i = 0;
-
- if (!populated_zone(zone))
- return NULL;
-
- for (i = 0; i < zone->zone_spanned_pages; i++) {
- if (pfn_valid(start_pfn + i))
- break;
- }
- return pfn_to_page(start_pfn + i);
-}
-
-static inline struct page *next_page_in_zone(struct page *page,
- struct zone *zone)
-{
- unsigned long start_pfn = zone->zone_start_pfn;
- unsigned long i = page_to_pfn(page) - start_pfn;
-
- if (!populated_zone(zone))
- return NULL;
-
- for (i = i + 1; i < zone->zone_spanned_pages; i++) {
- if (pfn_vlaid(start_pfn + i))
- break;
- }
- if (i == zone->zone_spanned_pages)
- return NULL;
- return pfn_to_page(start_pfn + i);
-}
-
-/**
- * for_each_page_in_zone -- helper macro to iterate over all pages in a zone.
- * @page - pointer to page
- * @zone - pointer to zone
- *
- */
-#define for_each_page_in_zone(page, zone) \
- for (page = (first_page_in_zone((zone))); \
- page; \
- page = next_page_in_zone(page, (zone)));
#ifdef CONFIG_SPARSEMEM
#include <asm/sparsemem.h>
* Re: [RFC] memory-layout-free zones (for review) [3/3] fix for_each_page_in_zone
From: Dave Hansen @ 2006-02-23 18:12 UTC
To: KAMEZAWA Hiroyuki; +Cc: linux-mm
On Thu, 2006-02-23 at 18:00 +0900, KAMEZAWA Hiroyuki wrote:
> +static inline struct page *first_page_in_zone(struct zone *zone)
> +{
> + struct pglist_data *pgdat;
> + unsigned long start_pfn;
> + unsigned long i = 0;
> +
> + if (!populated_zone(zone))
> + return NULL;
> +
> + pgdat = zone->zone_pgdat;
> + zone = pgdat->node_start_pfn;
> +
> + for (i = 0; i < pgdat->zone_spanned_pages; i++) {
> + if (pfn_valid(start_pfn + i) && page_zone(page) == zone)
> + break;
> + }
> + BUG_ON(i == pgdat->node_spanned_pages); /* zone is populated */
> + return pfn_to_page(start_pfn + i);
> +}
I know we don't use this function _too_ much, but it would probably be
nice to make it a little smarter than "i++". We can be pretty sure, at
least with SPARSEMEM, that the granularity is larger than that. We can
probably leave it until it gets to be a real problem.
I was also trying to think about whether a binary search is appropriate here. I
guess it depends on whether we allow the zones to have overlapping pfn
ranges, which I _think_ is one of the goals of these patches. Any
thoughts?
Oh, and I noticed the "pgdat->zone_spanned_pages" bit. Did you compile
this? ;)
> +static inline struct page *next_page_in_zone(struct page *page,
> + struct zone *zone)
> +{
> + struct pglist_data *pgdat;
> + unsigned long start_pfn;
> + unsigned long i;
> +
> + if (!populated_zone(zone))
> + return NULL;
> + pgdat = zone->zone_pgdat;
> + start_pfn = pgdat->node_start_pfn;
> + i = page_to_pfn(page) - start_pfn;
> +
> + for (i = i + 1; i < pgdat->node_spanned_pages; i++) {
> + if (pfn_vlaid(start_pfn + i) && page_zone(page) == zone)
> + break;
> + }
> + if (i == pgdat->node_spanned_pages)
> + return NULL;
> + return pfn_to_page(start_pfn + i);
> +}
Same comment, BTW, about code sharing. Is it something we want to or
can do with these?
-- Dave
* Re: [RFC] memory-layout-free zones (for review) [3/3] fix for_each_page_in_zone
From: KAMEZAWA Hiroyuki @ 2006-02-24 0:03 UTC
To: Dave Hansen; +Cc: linux-mm
Dave Hansen wrote:
> I know we don't use this function _too_ much, but it would probably be
> nice to make it a little smarter than "i++". We can be pretty sure, at
> least with SPARSEMEM, that the granularity is larger than that. We can
> probably leave it until it gets to be a real problem.
Yes, with SPARSEMEM we can skip PAGES_PER_SECTION pages at a time when !pfn_valid().
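Roughly, the scan could jump to the next section boundary whenever pfn_valid()
fails. Something like this, maybe (untested sketch, SPARSEMEM-only; the helper
name is made up, and it leans on PAGES_PER_SECTION and the ALIGN() macro):

/*
 * Untested sketch: find the next page of 'zone' in [pfn, end_pfn),
 * hopping a whole section at a time over !pfn_valid() holes.
 */
static struct page *scan_zone_from(unsigned long pfn, unsigned long end_pfn,
                                   struct zone *zone)
{
        while (pfn < end_pfn) {
                if (!pfn_valid(pfn)) {
                        /* under SPARSEMEM the whole section has no mem_map */
                        pfn = ALIGN(pfn + 1, PAGES_PER_SECTION);
                        continue;
                }
                if (page_zone(pfn_to_page(pfn)) == zone)
                        return pfn_to_page(pfn);
                pfn++;
        }
        return NULL;
}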
>
> I was also trying to think about whether a binary search is appropriate here. I
> guess it depends on whether we allow the zones to have overlapping pfn
> ranges, which I _think_ is one of the goals of these patches. Any
> thoughts?
>
What I'm thinking of is to allow zones to have overlapping pfn ranges.
Showing the benefit of that (by patch) is difficult right now, but I think it's a
sane direction.
> Oh, and I noticed the "pgdat->zone_spanned_pages" bit. Did you compile
> this? ;)
>
No (>_<
>> +static inline struct page *next_page_in_zone(struct page *page,
>> + struct zone *zone)
>> +{
>> + struct pglist_data *pgdat;
>> + unsigned long start_pfn;
>> + unsigned long i;
>> +
>> + if (!populated_zone(zone))
>> + return NULL;
>> + pgdat = zone->zone_pgdat;
>> + start_pfn = pgdat->node_start_pfn;
>> + i = page_to_pfn(page) - start_pfn;
>> +
>> + for (i = i + 1; i < pgdat->node_spanned_pages; i++) {
>> + if (pfn_vlaid(start_pfn + i) && page_zone(page) == zone)
>> + break;
>> + }
>> + if (i == pgdat->node_spanned_pages)
>> + return NULL;
>> + return pfn_to_page(start_pfn + i);
>> +}
>
> Same comment, BTW, about code sharing. Is it something we want to or
> can do with these?
>
Hmm...I can't find it. I'll rewrite this code as an out-of-line function,
add optimization based on the memory model, and do more cleanup.
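A first rough cut might look something like this (untested sketch; where it
would live, say mm/page_alloc.c, is just a guess, and the section hopping
sketched above would slot into the pfn_valid() test):

/* Untested sketch: out-of-line next_page_in_zone(), walking the pgdat's span. */
struct page *next_page_in_zone(struct page *page, struct zone *zone)
{
        struct pglist_data *pgdat = zone->zone_pgdat;
        unsigned long end_pfn = pgdat->node_start_pfn + pgdat->node_spanned_pages;
        unsigned long pfn;

        if (!populated_zone(zone))
                return NULL;

        for (pfn = page_to_pfn(page) + 1; pfn < end_pfn; pfn++) {
                if (!pfn_valid(pfn))
                        continue;       /* memory-model-specific skipping goes here */
                if (page_zone(pfn_to_page(pfn)) == zone)
                        return pfn_to_page(pfn);
        }
        return NULL;
}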
I'll post these again to -mm before going to lkml, and I will actually compile
them next time....
-- Kame