From: Zi Yan <ziy@nvidia.com>
To: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 0/6] mm: page_alloc: freelist migratetype hygiene
Date: Mon, 02 Oct 2023 22:35:10 -0400	[thread overview]
Message-ID: <6EA38E52-1E76-4A25-BC14-B5D5FC46298B@nvidia.com> (raw)
In-Reply-To: <ac73d772-c585-1d9e-c8ee-36c51b608906@redhat.com>

On 2 Oct 2023, at 7:43, David Hildenbrand wrote:

>>>> I can do it after I fix this. That change might or might not help, and only if we
>>>> redesign how migratetype is managed. If MIGRATE_ISOLATE does not overwrite the
>>>> existing migratetype, the code might not need to split a page and move it to the
>>>> MIGRATE_ISOLATE freelist?
>>>
>>> Did someone test how memory offlining plays along with that? (I can try myself
>>> within the next 1-2 weeks)
>>>
>>> There [mm/memory_hotplug.c:offline_pages] we always cover full MAX_ORDER ranges,
>>> though.
>>>
>>> ret = start_isolate_page_range(start_pfn, end_pfn,
>>> 			       MIGRATE_MOVABLE,
>>> 			       MEMORY_OFFLINE | REPORT_FAILURE,
>>> 			       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>>
>> Since a full MAX_ORDER range is passed, no free page split will happen.
>
> Okay, thanks for verifying that it should not be affected!
>
>>
>>>
>>>>
>>>> The fundamental issue in alloc_contig_range() is that, to work at
>>>> pageblock level, a free page (>pageblock_order) can have one part isolated while
>>>> the rest has a different migratetype. {add_to,move_to,del_page_from}_free_list()
>>>> now checks the migratetype of the first pageblock, so such a page needs to be
>>>> removed from its free_list, have MIGRATE_ISOLATE set on one of its pageblocks,
>>>> be split, and finally be put back onto multiple free lists. This needs to be done
>>>> at the isolation stage, before free pages are removed from their free lists (which
>>>> happens at the stage after isolation).
>>>
>>> One idea was to always isolate larger chunks and handle movability checks/splits/etc.
>>> at a later stage. Once isolation was decoupled from the actual/original migratetype,
>>> that could have been easier to handle (especially some corner cases I had in mind back then).
>>
>> I think it is a good idea. When I coded alloc_contig_range() up, I tried to
>> accommodate existing set_migratetype_isolate(), which calls has_unmovable_pages().
>> If these two are decoupled, set_migratetype_isolate() can work on MAX_ORDER-aligned
>> ranges and has_unmovable_pages() can still work on pageblock-aligned ranges.
>> Let me give this a try.
>>
>
> But again, just some thought I had back then; maybe it doesn't help at all, as I never found more time to look into the whole thing in more detail.

Sure. The devil is in the details, but I will only know the details and what works
after I code it up. :)
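
To make the split-at-isolation idea a bit more concrete, here is a rough,
userspace-style sketch of what I have in mind (not real kernel code; the toy
types, free_area[], block_mt() and chunks[] are all made up for illustration,
with chunks[] standing in for the struct page array covering the range): a
free page larger than a pageblock is taken off its list, one of its pageblocks
is marked isolated, and each pageblock-sized piece is requeued on the list
matching its own migratetype, so the "first pageblock decides" rule of the
free-list helpers stays correct.

#define PAGEBLOCK_ORDER	9
#define MAX_ORDER	10
#define NR_MT		6		/* toy migratetype count, ISOLATE last */
#define MT_ISOLATE	5

struct toy_page {
	unsigned long pfn;
	int order;
	struct toy_page *next;
};

/* toy free lists: free_area[order][migratetype] */
static struct toy_page *free_area[MAX_ORDER + 1][NR_MT];

/* toy per-pageblock migratetype map */
static int pageblock_mt[1 << 10];

static int block_mt(unsigned long pfn)
{
	return pageblock_mt[pfn >> PAGEBLOCK_ORDER];
}

static void add_to_free_list(struct toy_page *p, int order, int mt)
{
	p->order = order;
	p->next = free_area[order][mt];
	free_area[order][mt] = p;
}

/*
 * @p is a free page of order > PAGEBLOCK_ORDER that has already been taken
 * off its free list, and one of its pageblocks has just been set to
 * MT_ISOLATE in pageblock_mt[].  Split it into pageblock-sized pieces and
 * requeue each piece on the free list of its own (possibly isolated)
 * migratetype.
 */
static void split_isolated_free_page(struct toy_page *p, struct toy_page *chunks)
{
	unsigned long nr = 1UL << p->order;
	unsigned long step = 1UL << PAGEBLOCK_ORDER;

	for (unsigned long off = 0; off < nr; off += step) {
		chunks[off].pfn = p->pfn + off;
		add_to_free_list(&chunks[off], PAGEBLOCK_ORDER,
				 block_mt(chunks[off].pfn));
	}
}

The real version of course has to keep the buddy invariants and zone lock in
mind; this only shows where the split would sit relative to isolation.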

>>>
>>>> If MIGRATE_ISOLATE is a separate flag and we are OK with leaving isolated pages
>>>> in their original migratetype and checking the migratetype before allocating a page,
>>>> that might help. But that might add extra work (e.g., splitting a partially
>>>> isolated free page before allocation) in the really hot code path, which is not
>>>> desirable.
>>>
>>> With MIGRATE_ISOLATE being a separate flag, one idea was to have not a single
>>> separate isolate list, but one per "proper migratetype". But again, just some random
>>> thoughts I had back then; I never had sufficient time to think it all through.
>>
>> Got it. I will think about it.
>>
>> One question on separate MIGRATE_ISOLATE:
>>
>> the implementation I have in mind is that MIGRATE_ISOLATE will need a dedicated flag
>> bit instead of being one of the migratetypes. But right now there are 5 migratetypes +
>
> Exactly what I was concerned about back then ...
>
>> MIGRATE_ISOLATE and PB_migratetype_bits is 3, so an extra migratetype bit is needed.
>> But the current migratetype implementation is word-based, requiring
>> NR_PAGEBLOCK_BITS to be a divisor of BITS_PER_LONG. This means NR_PAGEBLOCK_BITS
>> would need to be increased from 4 to 8 to meet that requirement, wasting a lot of space.
>
> ... until I did the math. Let's assume a pageblock is 2 MiB.
>
> 4 / (2 * 1024 * 1024 * 8) = 0.00002384185791016 %
>
> 8 / (2 * 1024 * 1024 * 8) -> 1 / (2 * 1024 * 1024) = 0.00004768371582031 %
>
> For a 1 TiB machine that means 256 KiB vs. 512 KiB
>
> I concluded that "wasting a lot of space" is not really the right word to describe that :)
>
> Just to put it into perspective, the memmap (64/4096) for a 1 TiB machine is ... 16 GiB.

You are right. I should have done the math. The absolute increase is not much.
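
Just to write the arithmetic down, here is a small self-contained sketch
(assuming 64-bit longs and 2 MiB pageblocks). The enum only mirrors the shape
of include/linux/pageblock-flags.h; the PB_migrate_isolate bit and the
rounding helper are hypothetical, not existing kernel code:

#include <stdio.h>

#define BITS_PER_LONG 64

enum pageblock_bits_sketch {
	PB_migrate,				/* 3 migratetype bits today */
	PB_migrate_end = PB_migrate + 3 - 1,
	PB_migrate_skip,			/* compaction skip hint */
	PB_migrate_isolate,			/* hypothetical standalone isolate bit */
	NR_PAGEBLOCK_BITS_RAW			/* 5 raw bits with the isolate bit */
};

static unsigned int round_to_divisor_of_long(unsigned int bits)
{
	/* word-based accessors need the per-block width to divide BITS_PER_LONG */
	unsigned int b = 1;

	while (b < bits || BITS_PER_LONG % b)
		b <<= 1;
	return b;				/* 5 -> 8 */
}

int main(void)
{
	unsigned long mem = 1UL << 40;		/* 1 TiB */
	unsigned long blocks = mem >> 21;	/* 2 MiB pageblocks */
	unsigned int bits = round_to_divisor_of_long(NR_PAGEBLOCK_BITS_RAW);

	/* 4 bits/block -> 256 KiB; 8 bits/block -> 512 KiB */
	printf("%u bits/block -> %lu KiB of pageblock flags\n",
	       bits, blocks * bits / 8 / 1024);
	return 0;
}

With the extra isolate bit the raw count becomes 5, the word-based accessors
round that up to 8, and that rounding is exactly where the 256 KiB vs. 512 KiB
per TiB above comes from.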

>> An alternative is to have a separate array for MIGRATE_ISOLATE, which requires
>> additional changes. Let me know if you have a better idea. Thanks.
>
> It would probably be cleanest to just use one byte per pageblock. That would clean up the whole machinery eventually as well.

Let me give this a try and see if it cleans things up.
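
In case it helps the discussion, a toy sketch of how I read the "one byte per
pageblock" suggestion (plain userspace C, all names made up): the low bits
keep the real migratetype and a separate bit flags isolation, so isolating a
block never loses what it originally was:

#include <stdint.h>

#define PB_MT_MASK	0x07	/* room for up to 8 "proper" migratetypes */
#define PB_ISOLATED	0x08	/* independent isolate flag */

struct block_map {
	uint8_t *flags;		/* one byte per pageblock */
	unsigned long nr_blocks;
};

static inline int block_migratetype(struct block_map *m, unsigned long blk)
{
	return m->flags[blk] & PB_MT_MASK;
}

static inline int block_isolated(struct block_map *m, unsigned long blk)
{
	return m->flags[blk] & PB_ISOLATED;
}

static inline void block_set_isolated(struct block_map *m, unsigned long blk,
				      int isolate)
{
	if (isolate)
		m->flags[blk] |= PB_ISOLATED;	/* original type preserved */
	else
		m->flags[blk] &= ~PB_ISOLATED;
}

A byte per pageblock would also drop the "divisor of BITS_PER_LONG" constraint
entirely, since each block's flags can be read and written with a plain byte
access.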


--
Best Regards,
Yan, Zi
