From: Naoya Horiguchi <nao.horiguchi@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Mike Kravetz <mike.kravetz@oracle.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Liu Shixin <liushixin2@huawei.com>,
Yang Shi <shy828301@gmail.com>,
Oscar Salvador <osalvador@suse.de>,
Muchun Song <songmuchun@bytedance.com>,
Naoya Horiguchi <naoya.horiguchi@nec.com>,
linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/9] mm/hugetlb: remove checking hstate_is_gigantic() in return_unused_surplus_pages()
Date: Fri, 24 Jun 2022 08:51:45 +0900
Message-ID: <20220623235153.2623702-2-naoya.horiguchi@linux.dev>
In-Reply-To: <20220623235153.2623702-1-naoya.horiguchi@linux.dev>
From: Naoya Horiguchi <naoya.horiguchi@nec.com>
I found a weird state of the 1GB hugepage pool, caused by the following
procedure:

- run a process reserving all free 1GB hugepages,
- shrink the free 1GB hugepage pool to zero (i.e. write 0 to
  /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages), then
- kill the reserving process.

After this, all the hugepages are free *and* surplus at the same time.
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
3
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages
3
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/resv_hugepages
0
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/surplus_hugepages
3
This state is resolved by reserving and allocating the pages, then
freeing them again, so it does not seem to cause a serious problem.
But it is a little surprising (shrinking the pool suddenly fails).
This behavior is caused by the hstate_is_gigantic() check in
return_unused_surplus_pages(). The check was introduced back in 2008 by
commit aa888a74977a ("hugetlb: support larger than MAX_ORDER"), and it
seems to me that it is no longer necessary. Let's remove it.
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
---
mm/hugetlb.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a57e1be41401..c538278170a2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2432,10 +2432,6 @@ static void return_unused_surplus_pages(struct hstate *h,
/* Uncommit the reservation */
h->resv_huge_pages -= unused_resv_pages;
- /* Cannot return gigantic pages currently */
- if (hstate_is_gigantic(h))
- goto out;
-
/*
* Part (or even all) of the reservation could have been backed
* by pre-allocated pages. Only free surplus pages.
--
2.25.1