From: Zi Yan <ziy@nvidia.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: linux-mm@kvack.org, David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>,
Vlastimil Babka <vbabka@suse.cz>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Mel Gorman <mgorman@suse.de>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/4] mm/page_isolation: remove migratetype from move_freepages_block_isolate()
Date: Fri, 14 Feb 2025 13:04:20 -0500 [thread overview]
Message-ID: <2DBBFC75-07A5-4D31-A6CE-887095AC6C75@nvidia.com> (raw)
In-Reply-To: <20250214172032.GA241035@cmpxchg.org>
On 14 Feb 2025, at 12:20, Johannes Weiner wrote:
> On Fri, Feb 14, 2025 at 10:42:13AM -0500, Zi Yan wrote:
>> Since migratetype is no longer overwritten during pageblock isolation,
>> moving pageblocks to and from MIGRATE_ISOLATE does not need a migratetype.
>>
>> Rename move_freepages_block_isolate() to share common code and add
>> pageblock_isolate_and_move_free_pages() and
>> pageblock_unisolate_and_move_free_pages() to be explicit about the page
>> isolation operations.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>> include/linux/page-isolation.h | 4 +--
>> mm/page_alloc.c | 48 +++++++++++++++++++++++++++-------
>> mm/page_isolation.c | 21 +++++++--------
>> 3 files changed, 51 insertions(+), 22 deletions(-)
>>
>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>> index 51797dc39cbc..28c56f423e34 100644
>> --- a/include/linux/page-isolation.h
>> +++ b/include/linux/page-isolation.h
>> @@ -27,8 +27,8 @@ static inline bool is_migrate_isolate(int migratetype)
>>
>> void set_pageblock_migratetype(struct page *page, int migratetype);
>>
>> -bool move_freepages_block_isolate(struct zone *zone, struct page *page,
>> - int migratetype);
>> +bool pageblock_isolate_and_move_free_pages(struct zone *zone, struct page *page);
>> +bool pageblock_unisolate_and_move_free_pages(struct zone *zone, struct page *page);
>>
>> int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
>> int migratetype, int flags);
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index f17f4acc38c6..9bba5b1c4f1d 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1848,10 +1848,10 @@ static unsigned long find_large_buddy(unsigned long start_pfn)
>> }
>>
>> /**
>> - * move_freepages_block_isolate - move free pages in block for page isolation
>> + * __move_freepages_for_page_isolation - move free pages in block for page isolation
>> * @zone: the zone
>> * @page: the pageblock page
>> - * @migratetype: migratetype to set on the pageblock
>> + * @isolate_pageblock: whether to isolate or unisolate the given pageblock
>> *
>> * This is similar to move_freepages_block(), but handles the special
>> * case encountered in page isolation, where the block of interest
>> @@ -1866,10 +1866,15 @@ static unsigned long find_large_buddy(unsigned long start_pfn)
>> *
>> * Returns %true if pages could be moved, %false otherwise.
>> */
>> -bool move_freepages_block_isolate(struct zone *zone, struct page *page,
>> - int migratetype)
>> +static bool __move_freepages_for_page_isolation(struct zone *zone,
>> + struct page *page, bool isolate_pageblock)
>
> I'm biased, but I think the old name is fine.
>
> bool isolate?
OK. Will use the old name and bool isolate.
>
>> {
>> unsigned long start_pfn, pfn;
>> + int from_mt;
>> + int to_mt;
>> +
>> + if (isolate_pageblock == get_pageblock_isolate(page))
>> + return false;
>>
>> if (!prep_move_freepages_block(zone, page, &start_pfn, NULL, NULL))
>> return false;
>> @@ -1886,7 +1891,10 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
>>
>> del_page_from_free_list(buddy, zone, order,
>> get_pfnblock_migratetype(buddy, pfn));
>> - set_pageblock_migratetype(page, migratetype);
>> + if (isolate_pageblock)
>> + set_pageblock_isolate(page);
>> + else
>> + clear_pageblock_isolate(page);
>
> Since this pattern repeats, maybe create a toggle function that does this?
>
> set_pfnblock_flags_mask(page, (isolate << PB_migrate_isolate),
> page_to_pfn(page),
> (1 << PB_migrate_isolate));
Sure.
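A self-contained userspace sketch of such a toggle (the flag layout, bit
positions, and helper names here are simplified stand-ins for the kernel's
pageblock bitmap, not the actual mm/ code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified flag layout: bits 0-2 hold the migratetype,
 * bit 3 is the standalone isolate flag. */
#define MIGRATETYPE_MASK      0x7UL
#define PB_migrate_isolate    3

/* One pageblock's flag word, standing in for the per-zone bitmap. */
static unsigned long pageblock_flags;

/* Mimic set_pfnblock_flags_mask(): replace only the masked bits. */
static void set_flags_mask(unsigned long flags, unsigned long mask)
{
	pageblock_flags = (pageblock_flags & ~mask) | (flags & mask);
}

/* The suggested toggle: one call either sets or clears the isolate
 * bit, replacing the repeated if/else over set/clear helpers. */
static void toggle_pageblock_isolate(bool isolate)
{
	set_flags_mask((unsigned long)isolate << PB_migrate_isolate,
		       1UL << PB_migrate_isolate);
}
```

The migratetype bits are untouched by the toggle because they fall outside
the mask, which is the point of the single-call form.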
>
>> split_large_buddy(zone, buddy, pfn, order, FPI_NONE);
>> return true;
>> }
>> @@ -1897,16 +1905,38 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
>>
>> del_page_from_free_list(page, zone, order,
>> get_pfnblock_migratetype(page, pfn));
>> - set_pageblock_migratetype(page, migratetype);
>> + if (isolate_pageblock)
>> + set_pageblock_isolate(page);
>> + else
>> + clear_pageblock_isolate(page);
>> split_large_buddy(zone, page, pfn, order, FPI_NONE);
>> return true;
>> }
>> move:
>> - __move_freepages_block(zone, start_pfn,
>> - get_pfnblock_migratetype(page, start_pfn),
>> - migratetype);
>> + if (isolate_pageblock) {
>> + from_mt = __get_pfnblock_flags_mask(page, page_to_pfn(page),
>> + MIGRATETYPE_MASK);
>> + to_mt = MIGRATE_ISOLATE;
>> + } else {
>> + from_mt = MIGRATE_ISOLATE;
>> + to_mt = __get_pfnblock_flags_mask(page, page_to_pfn(page),
>> + MIGRATETYPE_MASK);
>> + }
>> +
>> + __move_freepages_block(zone, start_pfn, from_mt, to_mt);
>> return true;
>
> Keeping both MIGRATE_ISOLATE and PB_migrate_isolate encoding the same
> state is fragile. At least in the pfnblock bitmask, there should only
> be one bit encoding this.
>
> Obviously, there are many places in the allocator that care about
> freelist destination: they want MIGRATE_ISOLATE if the bit is set, and
> the "actual" type otherwise.
>
> I think what should work is decoupling enum migratetype from the
> pageblock bits, and then have a parsing layer on top like this:
>
> enum migratetype {
> MIGRATE_UNMOVABLE,
> ...
> MIGRATE_TYPE_BITS,
> MIGRATE_ISOLATE = MIGRATE_TYPE_BITS,
> MIGRATE_TYPES
> };
>
> #define PB_migratetype_bits MIGRATE_TYPE_BITS
>
> static enum migratetype get_pageblock_migratetype()
> {
> flags = get_pfnblock_flags_mask(..., MIGRATETYPE_MASK | (1 << PB_migrate_isolate));
> if (flags & (1 << PB_migrate_isolate))
> return MIGRATE_ISOLATE;
> return flags;
> }
I had something similar in the RFC and changed to the current implementation
to hide the details. But that is fragile, like you said. I will try
your approach in the next version.
Thank you for the reviews. :)
Best Regards,
Yan, Zi