From: Ge Yang <yangge1116@126.com>
To: David Hildenbrand <david@redhat.com>, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
stable@vger.kernel.org, 21cnbao@gmail.com,
baolin.wang@linux.alibaba.com, muchun.song@linux.dev,
liuzixing@hygon.cn
Subject: Re: [PATCH] replace free hugepage folios after migration
Date: Sun, 22 Dec 2024 16:13:43 +0800 [thread overview]
Message-ID: <d6d92a36-4ed7-4ae8-8b74-48f79a502a36@126.com> (raw)
In-Reply-To: <0ca35fe5-9799-4518-9fb1-701c88501a8d@redhat.com>
On 2024/12/21 22:35, David Hildenbrand wrote:
> On 18.12.24 07:33, yangge1116@126.com wrote:
>> From: yangge <yangge1116@126.com>
>>
>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>> hugetlb pages. The allocation of contiguous memory via the
>> cma_alloc() function can fail probabilistically.
>>
>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>> within the allocation range, even if that page has already been
>> migrated. When in-use hugetlb pages are migrated, they may simply
>> be released back into the free hugepage pool instead of being
>> returned to the buddy system. This can cause the
>> test_pages_isolated() function check to fail, ultimately leading
>> to the failure of the cma_alloc() function:
>> cma_alloc()
>> __alloc_contig_migrate_range() // migrate in-use hugepage
>> test_pages_isolated()
>> __test_page_isolated_in_pageblock()
>> PageBuddy(page) // check if the page is in buddy
>>
>> To address this issue, we will add a function named
>> replace_free_hugepage_folios(). This function will replace the
>> hugepage in the free hugepage pool with a new one and release the
>> old one to the buddy system. After the migration of in-use hugetlb
>> pages is completed, we will invoke the replace_free_hugepage_folios()
>> function to ensure that these hugepages are properly released to
>> the buddy system. Following this step, when the test_pages_isolated()
>> function is executed for inspection, it will successfully pass.
>>
>> Signed-off-by: yangge <yangge1116@126.com>
>> ---
>> include/linux/hugetlb.h | 6 ++++++
>> mm/hugetlb.c | 37 +++++++++++++++++++++++++++++++++++++
>> mm/page_alloc.c | 13 ++++++++++++-
>> 3 files changed, 55 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index ae4fe86..7d36ac8 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -681,6 +681,7 @@ struct huge_bootmem_page {
>> };
>> int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
>> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr, int avoid_reserve);
>> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
>> @@ -1059,6 +1060,11 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
>> return -ENOMEM;
>> }
>> +static inline int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
>> +{
>> + return 0;
>> +}
>> +
>> static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr,
>> int avoid_reserve)
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 8e1db80..a099c54 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2975,6 +2975,43 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
>> return ret;
>> }
>> +/*
>> + * replace_free_hugepage_folios - Replace free hugepage folios in a given pfn
>> + * range with new folios.
>> + * @start_pfn: start pfn of the given pfn range
>> + * @end_pfn: end pfn of the given pfn range
>> + * Returns 0 on success, otherwise negated error.
>> + */
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
>> +{
>> + struct hstate *h;
>> + struct folio *folio;
>> + int ret = 0;
>> +
>> + LIST_HEAD(isolate_list);
>> +
>> + while (start_pfn < end_pfn) {
>> + folio = pfn_folio(start_pfn);
>> + if (folio_test_hugetlb(folio)) {
>> + h = folio_hstate(folio);
>> + } else {
>> + start_pfn++;
>> + continue;
>> + }
>> +
>> + if (!folio_ref_count(folio)) {
>> + ret = alloc_and_dissolve_hugetlb_folio(h, folio,
>> &isolate_list);
>> + if (ret)
>> + break;
>> +
>> + putback_movable_pages(&isolate_list);
>> + }
>> + start_pfn++;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr, int avoid_reserve)
>> {
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index dde19db..1dcea28 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6504,7 +6504,18 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>> ret = __alloc_contig_migrate_range(&cc, start, end, migratetype);
>> if (ret && ret != -EBUSY)
>> goto done;
>> - ret = 0;
>> +
>> + /*
>> + * When in-use hugetlb pages are migrated, they may simply be
>> + * released back into the free hugepage pool instead of being
>> + * returned to the buddy system. After the migration of in-use
>> + * huge pages is completed, we will invoke the
>> + * replace_free_hugepage_folios() function to ensure that
>> + * these hugepages are properly released to the buddy system.
>> + */
>
> As mentioned in my other mail, what I don't like about this is, IIUC,
> the pages can get reallocated anytime after we successfully migrated
> them, or is there anything that prevents that?
>
Yes, the pages can get reallocated at any time after we have successfully
migrated them. So far I haven't thought of a good way to prevent that.
> Did you ever try allocating a larger range with a single
> alloc_contig_range() call, that possibly has to migrate multiple hugetlb
> folios in one go (and maybe just allocates one of the just-freed hugetlb
> folios as migration target)?
>
I have tried using a single alloc_contig_range() call to allocate a
larger contiguous range, and it works properly. This is because, during
the window between __alloc_contig_migrate_range() and
isolate_freepages_range(), nothing allocates a hugetlb folio from the
free hugetlb pool.
>
Thread overview: 13+ messages
2024-12-18 6:33 yangge1116
2024-12-19 16:40 ` David Hildenbrand
2024-12-20 8:56 ` Ge Yang
2024-12-20 16:30 ` David Hildenbrand
2024-12-21 12:04 ` Ge Yang
2024-12-21 14:32 ` David Hildenbrand
2024-12-22 11:50 ` Ge Yang
2024-12-19 18:43 ` SeongJae Park
2024-12-20 9:03 ` Ge Yang
2024-12-21 14:35 ` David Hildenbrand
2024-12-22 8:13 ` Ge Yang [this message]
2025-01-08 21:05 ` David Hildenbrand
2025-01-09 9:50 ` Ge Yang