* [PATCH v2 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone
2023-08-02 9:37 [PATCH v2 0/8] Fixes and cleanups to compaction Kemeng Shi
@ 2023-08-02 9:37 ` Kemeng Shi
2023-08-02 11:20 ` Baolin Wang
2023-08-02 9:37 ` [PATCH v2 4/8] mm/compaction: correct comment of fast_find_migrateblock in isolate_migratepages Kemeng Shi
` (4 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Kemeng Shi @ 2023-08-02 9:37 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, baolin.wang, mgorman, david; +Cc: shikemeng
We record the start pfn of the last isolated pageblock in
last_migrated_pfn. And then:
1. We check whether we marked the pageblock skip for exclusive access in
isolate_migratepages_block by testing if the next migrate pfn is still in
the last isolated pageblock. If so, we set finish_pageblock to do a rescan.
2. We check whether a full cc->order block was scanned by testing if the
last scan range passes the cc->order block boundary. If so, we flush the
pages that were freed.

We treat cc->migrate_pfn before isolate_migratepages as the start pfn of
the last isolated page range. However, migrate_pfn is always aligned to a
pageblock or moved to another pageblock, either in fast_find_migrateblock
or in the linear forward scan in isolate_migratepages, before the actual
page isolation in isolate_migratepages_block.

Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1)
after the scan to correctly record the start pfn of the last isolated page
range. This avoids:
1. Missing a rescan with finish_pageblock set because last_migrated_pfn
does not point to the right pageblock, so the next migrate pfn is not in
the pageblock of last_migrated_pfn as it should be.
2. Wrongly issuing a flush when testing the cc->order block boundary
against a wrong last_migrated_pfn.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
mm/compaction.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index a8cea916df9d..ec3a96b7afce 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2487,7 +2487,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
goto check_drain;
case ISOLATE_SUCCESS:
update_cached = false;
- last_migrated_pfn = iteration_start_pfn;
+ last_migrated_pfn = max(cc->zone->zone_start_pfn,
+ pageblock_start_pfn(cc->migrate_pfn - 1));
}
err = migrate_pages(&cc->migratepages, compaction_alloc,
--
2.30.0
* Re: [PATCH v2 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone
2023-08-02 9:37 ` [PATCH v2 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone Kemeng Shi
@ 2023-08-02 11:20 ` Baolin Wang
0 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2023-08-02 11:20 UTC (permalink / raw)
To: Kemeng Shi, linux-mm, linux-kernel, akpm, mgorman, david
On 8/2/2023 5:37 PM, Kemeng Shi wrote:
> We record the start pfn of the last isolated pageblock in
> last_migrated_pfn. And then:
> 1. We check whether we marked the pageblock skip for exclusive access in
> isolate_migratepages_block by testing if the next migrate pfn is still in
> the last isolated pageblock. If so, we set finish_pageblock to do a rescan.
> 2. We check whether a full cc->order block was scanned by testing if the
> last scan range passes the cc->order block boundary. If so, we flush the
> pages that were freed.
>
> We treat cc->migrate_pfn before isolate_migratepages as the start pfn of
> the last isolated page range. However, migrate_pfn is always aligned to a
> pageblock or moved to another pageblock, either in fast_find_migrateblock
> or in the linear forward scan in isolate_migratepages, before the actual
> page isolation in isolate_migratepages_block.
>
> Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1)
> after the scan to correctly record the start pfn of the last isolated page
> range. This avoids:
> 1. Missing a rescan with finish_pageblock set because last_migrated_pfn
> does not point to the right pageblock, so the next migrate pfn is not in
> the pageblock of last_migrated_pfn as it should be.
> 2. Wrongly issuing a flush when testing the cc->order block boundary
> against a wrong last_migrated_pfn.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
LGTM.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/compaction.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index a8cea916df9d..ec3a96b7afce 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2487,7 +2487,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
> goto check_drain;
> case ISOLATE_SUCCESS:
> update_cached = false;
> - last_migrated_pfn = iteration_start_pfn;
> + last_migrated_pfn = max(cc->zone->zone_start_pfn,
> + pageblock_start_pfn(cc->migrate_pfn - 1));
> }
>
> err = migrate_pages(&cc->migratepages, compaction_alloc,
* [PATCH v2 4/8] mm/compaction: correct comment of fast_find_migrateblock in isolate_migratepages
2023-08-02 9:37 [PATCH v2 0/8] Fixes and cleanups to compaction Kemeng Shi
2023-08-02 9:37 ` [PATCH v2 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone Kemeng Shi
@ 2023-08-02 9:37 ` Kemeng Shi
2023-08-02 11:31 ` Baolin Wang
2023-08-02 9:37 ` [PATCH v2 5/8] mm/compaction: correct comment of cached migrate pfn update Kemeng Shi
` (3 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Kemeng Shi @ 2023-08-02 9:37 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, baolin.wang, mgorman, david; +Cc: shikemeng
After commit 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
fast_find_migrateblock"""), the skip flag is no longer set in
fast_find_migrateblock.
Correct the comment claiming that fast_find_block is used to avoid the
isolation_suitable check for a pageblock returned from
fast_find_migrateblock because fast_find_migrateblock marks the found
pageblock skipped.
Instead, note that fast_find_block is used to avoid a redundant check of a
fast-found pageblock, whose skip flag was already checked inside
fast_find_migrateblock.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
mm/compaction.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 984c17a5c8fd..5c9dc4049e8e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1966,8 +1966,8 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
block_start_pfn = cc->zone->zone_start_pfn;
/*
- * fast_find_migrateblock marks a pageblock skipped so to avoid
- * the isolation_suitable check below, check whether the fast
+ * fast_find_migrateblock will ignore pageblock skipped, so to avoid
+ * the isolation_suitable check below again, check whether the fast
* search was successful.
*/
fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
--
2.30.0
* Re: [PATCH v2 4/8] mm/compaction: correct comment of fast_find_migrateblock in isolate_migratepages
2023-08-02 9:37 ` [PATCH v2 4/8] mm/compaction: correct comment of fast_find_migrateblock in isolate_migratepages Kemeng Shi
@ 2023-08-02 11:31 ` Baolin Wang
2023-08-03 1:50 ` Kemeng Shi
0 siblings, 1 reply; 11+ messages in thread
From: Baolin Wang @ 2023-08-02 11:31 UTC (permalink / raw)
To: Kemeng Shi, linux-mm, linux-kernel, akpm, mgorman, david
On 8/2/2023 5:37 PM, Kemeng Shi wrote:
> After commit 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
> fast_find_migrateblock"""), the skip flag is no longer set in
> fast_find_migrateblock.
> Correct the comment claiming that fast_find_block is used to avoid the
> isolation_suitable check for a pageblock returned from
> fast_find_migrateblock because fast_find_migrateblock marks the found
> pageblock skipped.
> Instead, note that fast_find_block is used to avoid a redundant check of a
> fast-found pageblock, whose skip flag was already checked inside
> fast_find_migrateblock.
>
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
> mm/compaction.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 984c17a5c8fd..5c9dc4049e8e 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1966,8 +1966,8 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
> block_start_pfn = cc->zone->zone_start_pfn;
>
> /*
> - * fast_find_migrateblock marks a pageblock skipped so to avoid
> - * the isolation_suitable check below, check whether the fast
> + * fast_find_migrateblock will ignore pageblock skipped, so to avoid
These seem confusing to me, since fast_find_migrateblock() does not
ignore the skip flag check. So how about the wording below?
"fast_find_migrateblock() has already ensured the pageblock is not set
with a skipped flag, so to avoid the isolation_suitable check below
again ..."
> + * the isolation_suitable check below again, check whether the fast
> * search was successful.
> */
> fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
* Re: [PATCH v2 4/8] mm/compaction: correct comment of fast_find_migrateblock in isolate_migratepages
2023-08-02 11:31 ` Baolin Wang
@ 2023-08-03 1:50 ` Kemeng Shi
0 siblings, 0 replies; 11+ messages in thread
From: Kemeng Shi @ 2023-08-03 1:50 UTC (permalink / raw)
To: Baolin Wang, linux-mm, linux-kernel, akpm, mgorman, david
On 8/2/2023 7:31 PM, Baolin Wang wrote:
>
>
> On 8/2/2023 5:37 PM, Kemeng Shi wrote:
>> After commit 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>> fast_find_migrateblock"""), the skip flag is no longer set in
>> fast_find_migrateblock.
>> Correct the comment claiming that fast_find_block is used to avoid the
>> isolation_suitable check for a pageblock returned from
>> fast_find_migrateblock because fast_find_migrateblock marks the found
>> pageblock skipped.
>> Instead, note that fast_find_block is used to avoid a redundant check of a
>> fast-found pageblock, whose skip flag was already checked inside
>> fast_find_migrateblock.
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>> mm/compaction.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 984c17a5c8fd..5c9dc4049e8e 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1966,8 +1966,8 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>> block_start_pfn = cc->zone->zone_start_pfn;
>> /*
>> - * fast_find_migrateblock marks a pageblock skipped so to avoid
>> - * the isolation_suitable check below, check whether the fast
>> + * fast_find_migrateblock will ignore pageblock skipped, so to avoid
>
> These seem confusing to me, since fast_find_migrateblock() does not ignore the skip flag check. So how about the wording below?
>
> "fast_find_migrateblock() has already ensured the pageblock is not set with a skipped flag, so to avoid the isolation_suitable check below again ..."
>
Thanks for the advice. This looks good to me. I will do this in the next version.
>> + * the isolation_suitable check below again, check whether the fast
>> * search was successful.
>> */
>> fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>
* [PATCH v2 5/8] mm/compaction: correct comment of cached migrate pfn update
2023-08-02 9:37 [PATCH v2 0/8] Fixes and cleanups to compaction Kemeng Shi
2023-08-02 9:37 ` [PATCH v2 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone Kemeng Shi
2023-08-02 9:37 ` [PATCH v2 4/8] mm/compaction: correct comment of fast_find_migrateblock in isolate_migratepages Kemeng Shi
@ 2023-08-02 9:37 ` Kemeng Shi
2023-08-02 9:37 ` [PATCH v2 6/8] mm/compaction: correct comment to complete migration failure Kemeng Shi
` (2 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Kemeng Shi @ 2023-08-02 9:37 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, baolin.wang, mgorman, david; +Cc: shikemeng
Commit e380bebe47715 ("mm, compaction: keep migration source private to
a single compaction instance") moved the update of the async and sync
compact_cached_migrate_pfn from update_pageblock_skip to
update_cached_migrate but left the comment behind.
Move the comment along to correct this.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/compaction.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 5c9dc4049e8e..7f01fbeb3084 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -469,6 +469,7 @@ static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
pfn = pageblock_end_pfn(pfn);
+ /* Update where async and sync compaction should restart */
if (pfn > zone->compact_cached_migrate_pfn[0])
zone->compact_cached_migrate_pfn[0] = pfn;
if (cc->mode != MIGRATE_ASYNC &&
@@ -490,7 +491,6 @@ static void update_pageblock_skip(struct compact_control *cc,
set_pageblock_skip(page);
- /* Update where async and sync compaction should restart */
if (pfn < zone->compact_cached_free_pfn)
zone->compact_cached_free_pfn = pfn;
}
--
2.30.0
* [PATCH v2 6/8] mm/compaction: correct comment to complete migration failure
2023-08-02 9:37 [PATCH v2 0/8] Fixes and cleanups to compaction Kemeng Shi
` (2 preceding siblings ...)
2023-08-02 9:37 ` [PATCH v2 5/8] mm/compaction: correct comment of cached migrate pfn update Kemeng Shi
@ 2023-08-02 9:37 ` Kemeng Shi
2023-08-02 9:37 ` [PATCH v2 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false Kemeng Shi
[not found] ` <20230802093741.2333325-2-shikemeng@huaweicloud.com>
5 siblings, 0 replies; 11+ messages in thread
From: Kemeng Shi @ 2023-08-02 9:37 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, baolin.wang, mgorman, david; +Cc: shikemeng
Commit cfccd2e63e7e0 ("mm, compaction: finish pageblocks on complete
migration failure") converted the cc->order aligned check to a
pageblock order aligned check. Correct the comment accordingly.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
mm/compaction.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 7f01fbeb3084..5581e4cccac5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2512,7 +2512,7 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
}
/*
* If an ASYNC or SYNC_LIGHT fails to migrate a page
- * within the current order-aligned block and
+ * within the pageblock_order-aligned block and
* fast_find_migrateblock may be used then scan the
* remainder of the pageblock. This will mark the
* pageblock "skip" to avoid rescanning in the near
--
2.30.0
* [PATCH v2 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false
2023-08-02 9:37 [PATCH v2 0/8] Fixes and cleanups to compaction Kemeng Shi
` (3 preceding siblings ...)
2023-08-02 9:37 ` [PATCH v2 6/8] mm/compaction: correct comment to complete migration failure Kemeng Shi
@ 2023-08-02 9:37 ` Kemeng Shi
[not found] ` <20230802093741.2333325-2-shikemeng@huaweicloud.com>
5 siblings, 0 replies; 11+ messages in thread
From: Kemeng Shi @ 2023-08-02 9:37 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, baolin.wang, mgorman, david; +Cc: shikemeng
Keep the same logic as update_pageblock_skip: only set the skip flag if
no_set_skip_hint is false, which is more reasonable.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
mm/compaction.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index a1cc327d1b32..afc31d27f1ba 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1421,7 +1421,7 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
/* Skip this pageblock in the future as it's full or nearly full */
- if (start_pfn >= end_pfn)
+ if (start_pfn >= end_pfn && !cc->no_set_skip_hint)
set_pageblock_skip(page);
}
--
2.30.0
[parent not found: <20230802093741.2333325-2-shikemeng@huaweicloud.com>]
* Re: [PATCH v2 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
[not found] ` <20230802093741.2333325-2-shikemeng@huaweicloud.com>
@ 2023-08-02 8:24 ` David Hildenbrand
2023-08-02 11:10 ` Baolin Wang
1 sibling, 0 replies; 11+ messages in thread
From: David Hildenbrand @ 2023-08-02 8:24 UTC (permalink / raw)
To: Kemeng Shi, linux-mm, linux-kernel, akpm, baolin.wang, mgorman
On 02.08.23 11:37, Kemeng Shi wrote:
> skip_offline_sections_reverse returns the last pfn of the found online
> section. We then set block_start_pfn to the start of the pageblock that
> contains that last pfn. Then we continue, move one pageblock forward,
> and thereby skip the last pageblock of the online section.
> Make block_start_pfn point to the first pageblock after the online
> section to fix this:
> 1. make skip_offline_sections_reverse return the end pfn of the online
> section, i.e. the pfn of the first pageblock after it.
> 2. assign next_pfn to block_start_pfn directly.
>
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
> mm/compaction.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cd23da4d2a5b..a8cea916df9d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -250,6 +250,11 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
> return 0;
> }
>
> +/*
> + * If the PFN falls into an offline section, return the end PFN of the
> + * next online section in reverse. If the PFN falls into an online section
> + * or if there is no next online section in reverse, return 0.
> + */
> static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
> {
> unsigned long start_nr = pfn_to_section_nr(start_pfn);
> @@ -259,7 +264,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>
> while (start_nr-- > 0) {
> if (online_section_nr(start_nr))
> - return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> + return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
> }
>
> return 0;
> @@ -1668,8 +1673,7 @@ static void isolate_freepages(struct compact_control *cc)
>
> next_pfn = skip_offline_sections_reverse(block_start_pfn);
> if (next_pfn)
> - block_start_pfn = max(pageblock_start_pfn(next_pfn),
> - low_pfn);
> + block_start_pfn = max(next_pfn, low_pfn);
>
> continue;
> }
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v2 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
[not found] ` <20230802093741.2333325-2-shikemeng@huaweicloud.com>
2023-08-02 8:24 ` [PATCH v2 1/8] mm/compaction: avoid missing last page block in section after skip offline sections David Hildenbrand
@ 2023-08-02 11:10 ` Baolin Wang
1 sibling, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2023-08-02 11:10 UTC (permalink / raw)
To: Kemeng Shi, linux-mm, linux-kernel, akpm, mgorman, david
On 8/2/2023 5:37 PM, Kemeng Shi wrote:
> skip_offline_sections_reverse returns the last pfn of the found online
> section. We then set block_start_pfn to the start of the pageblock that
> contains that last pfn. Then we continue, move one pageblock forward,
> and thereby skip the last pageblock of the online section.
> Make block_start_pfn point to the first pageblock after the online
> section to fix this:
> 1. make skip_offline_sections_reverse return the end pfn of the online
> section, i.e. the pfn of the first pageblock after it.
> 2. assign next_pfn to block_start_pfn directly.
>
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
The changes look good to me.
But the commit id is not stable, since it has not been merged into the
mm-stable branch yet. I'm not sure how to handle this patch; squash it
into the original patch? Andrew, what do you prefer?
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
> mm/compaction.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cd23da4d2a5b..a8cea916df9d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -250,6 +250,11 @@ static unsigned long skip_offline_sections(unsigned long start_pfn)
> return 0;
> }
>
> +/*
> + * If the PFN falls into an offline section, return the end PFN of the
> + * next online section in reverse. If the PFN falls into an online section
> + * or if there is no next online section in reverse, return 0.
> + */
> static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
> {
> unsigned long start_nr = pfn_to_section_nr(start_pfn);
> @@ -259,7 +264,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>
> while (start_nr-- > 0) {
> if (online_section_nr(start_nr))
> - return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> + return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION;
> }
>
> return 0;
> @@ -1668,8 +1673,7 @@ static void isolate_freepages(struct compact_control *cc)
>
> next_pfn = skip_offline_sections_reverse(block_start_pfn);
> if (next_pfn)
> - block_start_pfn = max(pageblock_start_pfn(next_pfn),
> - low_pfn);
> + block_start_pfn = max(next_pfn, low_pfn);
>
> continue;
> }