linux-mm.kvack.org archive mirror
From: Zi Yan <ziy@nvidia.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Hildenbrand <david@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 0/6] mm: page_alloc: freelist migratetype hygiene
Date: Thu, 12 Oct 2023 20:06:59 -0400
Message-ID: <D6A142AB-08F6-4335-8D08-1743DFAAD10C@nvidia.com>
In-Reply-To: <20231010211200.GA129823@cmpxchg.org>


On 10 Oct 2023, at 17:12, Johannes Weiner wrote:

> Hello!
>
> On Mon, Oct 02, 2023 at 10:26:44PM -0400, Zi Yan wrote:
>> On 27 Sep 2023, at 22:51, Zi Yan wrote:
>> I attached my revised patch 2 and 3 (with all the suggestions above).
>
> Thanks! It took me a bit to read through them. It's a tricky codebase!
>
> Some comments below.
>
>> From 1c8f99cff5f469ee89adc33e9c9499254cad13f2 Mon Sep 17 00:00:00 2001
>> From: Zi Yan <ziy@nvidia.com>
>> Date: Mon, 25 Sep 2023 16:27:14 -0400
>> Subject: [PATCH v2 1/2] mm: set migratetype after free pages are moved between
>>  free lists
>>
>> This avoids changing the migratetype after move_freepages() or
>> move_freepages_block(), which is error prone. It also prepares for upcoming
>> changes that fix move_freepages() failing to move free pages that only
>> partially overlap the requested range.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>
> This is great and indeed makes the callsites much simpler. Thanks,
> I'll fold this into the series.
>
>> @@ -1597,9 +1615,29 @@ static int move_freepages(struct zone *zone, unsigned long start_pfn,
>>  			  unsigned long end_pfn, int old_mt, int new_mt)
>>  {
>>  	struct page *page;
>> -	unsigned long pfn;
>> +	unsigned long pfn, pfn2;
>>  	unsigned int order;
>>  	int pages_moved = 0;
>> +	unsigned long mt_changed_pfn = start_pfn - pageblock_nr_pages;
>> +	unsigned long new_start_pfn = get_freepage_start_pfn(start_pfn);
>> +
>> +	/* split at start_pfn if it is in the middle of a free page */
>> +	if (new_start_pfn != start_pfn && PageBuddy(pfn_to_page(new_start_pfn))) {
>> +		struct page *new_page = pfn_to_page(new_start_pfn);
>> +		int new_page_order = buddy_order(new_page);
>
> get_freepage_start_pfn() returns start_pfn if it didn't find a large
> buddy, so the buddy check shouldn't be necessary, right?
>
>> +		if (new_start_pfn + (1 << new_page_order) > start_pfn) {
>
> This *should* be implied according to the comments on
> get_freepage_start_pfn(), but it currently isn't. Doing so would help
> here, and seemingly also in alloc_contig_range().
>
> How about this version of get_freepage_start_pfn()?
>
> /*
>  * Scan the range before this pfn for a buddy that straddles it
>  */
> static unsigned long find_straddling_buddy(unsigned long start_pfn)
> {
> 	int order = 0;
> 	struct page *page;
> 	unsigned long pfn = start_pfn;
>
> 	while (!PageBuddy(page = pfn_to_page(pfn))) {
> 		/* Nothing found */
> 		if (++order > MAX_ORDER)
> 			return start_pfn;
> 		pfn &= ~0UL << order;
> 	}
>
> 	/*
> 	 * Found a preceding buddy, but does it straddle?
> 	 */
> 	if (pfn + (1 << buddy_order(page)) > start_pfn)
> 		return pfn;
>
> 	/* Nothing found */
> 	return start_pfn;
> }
>
>> @@ -1614,10 +1652,43 @@ static int move_freepages(struct zone *zone, unsigned long start_pfn,
>>
>>  		order = buddy_order(page);
>>  		move_to_free_list(page, zone, order, old_mt, new_mt);
>> +		/*
>> +		 * set page migratetype 1) only after we move all free pages in
>> +		 * one pageblock and 2) for all pageblocks within the page.
>> +		 *
>> +		 * for 1), since move_to_free_list() checks the page migratetype
>> +		 * against old_mt, and changing one page's migratetype affects all
>> +		 * pages within the same pageblock, if we are moving more than
>> +		 * one free page in the same pageblock, setting the migratetype
>> +		 * right after the first move_to_free_list() triggers the warning
>> +		 * in the following move_to_free_list().
>> +		 *
>> +		 * for 2), when a free page order is greater than pageblock_order,
>> +		 * all pageblocks within the free page need to be changed after
>> +		 * move_to_free_list().
>
> I think this can be somewhat simplified.
>
> There are two assumptions we can make. Buddies always consist of 2^n
> pages. And buddies and pageblocks are naturally aligned. This means
> that if this pageblock has the start of a buddy that straddles into
> the next pageblock(s), it must be the first page in the block. That in
> turn means we can move the handling before the loop.
>
> If we split first, it also makes the loop a little simpler, because we
> know that any buddies that start inside this block cannot extend
> beyond it (due to the alignment). The loop as it was originally
> written can remain untouched.
>
>> +		 */
>> +		if (pfn + (1 << order) > pageblock_end_pfn(pfn)) {
>> +			for (pfn2 = pfn;
>> +			     pfn2 < min_t(unsigned long,
>> +					  pfn + (1 << order),
>> +					  end_pfn + 1);
>> +			     pfn2 += pageblock_nr_pages) {
>> +				set_pageblock_migratetype(pfn_to_page(pfn2),
>> +							  new_mt);
>> +				mt_changed_pfn = pfn2;
>
> Hm, this seems to assume that the range from start_pfn to end_pfn can span
> more than one block. Why is that? This function is only used on single blocks.

You are right. I made unnecessary assumptions when I wrote the code.

>
>> +			}
>> +			/* split the free page if it goes beyond the specified range */
>> +			if (pfn + (1 << order) > (end_pfn + 1))
>> +				split_free_page(page, order, end_pfn + 1 - pfn);
>> +		}
>>  		pfn += 1 << order;
>>  		pages_moved += 1 << order;
>>  	}
>> -	set_pageblock_migratetype(pfn_to_page(start_pfn), new_mt);
>> +	/* set migratetype for the remaining pageblocks */
>> +	for (pfn2 = mt_changed_pfn + pageblock_nr_pages;
>> +	     pfn2 <= end_pfn;
>> +	     pfn2 += pageblock_nr_pages)
>> +		set_pageblock_migratetype(pfn_to_page(pfn2), new_mt);
>
> If I rework the code on the above, I'm arriving at the following:
>
> static int move_freepages(struct zone *zone, unsigned long start_pfn,
> 			  unsigned long end_pfn, int old_mt, int new_mt)
> {
> 	struct page *start_page = pfn_to_page(start_pfn);
> 	int pages_moved = 0;
> 	unsigned long pfn;
>
> 	VM_WARN_ON(start_pfn & (pageblock_nr_pages - 1));
> 	VM_WARN_ON(start_pfn + pageblock_nr_pages - 1 != end_pfn);
>
> 	/*
> 	 * A free page may be comprised of 2^n blocks, which means our
> 	 * block of interest could be head or tail in such a page.
> 	 *
> 	 * If we're a tail, update the type of our block, then split
> 	 * the page into pageblocks. The splitting will do the leg
> 	 * work of sorting the blocks into the right freelists.
> 	 *
> 	 * If we're a head, split the page into pageblocks first. This
> 	 * ensures the migratetypes still match up during the freelist
> 	 * removal. Then do the regular scan for buddies in the block
> 	 * of interest, which will handle the rest.
> 	 *
> 	 * In theory, we could try to preserve 2^1 and larger blocks
> 	 * that lie outside our range. In practice, MAX_ORDER is
> 	 * usually one or two pageblocks anyway, so don't bother.
> 	 *
> 	 * Note that this only applies to page isolation, which calls
> 	 * this on random blocks in the pfn range! When we move stuff
> 	 * from inside the page allocator, the pages are coming off
> 	 * the freelist (can't be tail) and multi-block pages are
> 	 * handled directly in the stealing code (can't be a head).
> 	 */
>
> 	/* We're a tail */
> 	pfn = find_straddling_buddy(start_pfn);
> 	if (pfn != start_pfn) {
> 		struct page *free_page = pfn_to_page(pfn);
>
> 		set_pageblock_migratetype(start_page, new_mt);
> 		split_free_page(free_page, buddy_order(free_page),
> 				pageblock_nr_pages);
> 		return pageblock_nr_pages;
> 	}
>
> 	/* We're a head */
> 	if (PageBuddy(start_page) && buddy_order(start_page) > pageblock_order)
> 		split_free_page(start_page, buddy_order(start_page),
> 				pageblock_nr_pages);

This actually can be:

/* We're a head */
if (PageBuddy(start_page) && buddy_order(start_page) > pageblock_order) {
        set_pageblock_migratetype(start_page, new_mt);
        split_free_page(start_page, buddy_order(start_page),
                        pageblock_nr_pages);
        return pageblock_nr_pages;
}


>
> 	/* Move buddies within the block */
> 	while (pfn <= end_pfn) {
> 		struct page *page = pfn_to_page(pfn);
> 		int order, nr_pages;
>
> 		if (!PageBuddy(page)) {
> 			pfn++;
> 			continue;
> 		}
>
> 		/* Make sure we are not inadvertently changing nodes */
> 		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
> 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
>
> 		order = buddy_order(page);
> 		nr_pages = 1 << order;
>
> 		move_to_free_list(page, zone, order, old_mt, new_mt);
>
> 		pfn += nr_pages;
> 		pages_moved += nr_pages;
> 	}
>
> 	set_pageblock_migratetype(start_page, new_mt);
>
> 	return pages_moved;
> }
>
> Does this look reasonable to you?

Looks good to me. Thanks.

>
> Note that the page isolation specific stuff comes first. If this code
> holds up, we should be able to move it to page-isolation.c and keep it
> out of the regular allocator path.

You mean moving the tail and head parts into set_migratetype_isolate()?
And splitting move_freepages_block() into prep_move_freepages_block(),
the tail and head code, and move_freepages()? That should work, and it
follows a similar code pattern to steal_suitable_fallback().


--
Best Regards,
Yan, Zi

