* [PATCH] mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages
From: Rik van Riel @ 2022-10-18 0:25 UTC
To: linux-kernel
Cc: linux-mm, stable, Naoya Horiguchi, Glen McCready, Mike Kravetz,
Muchun Song, Andrew Morton, kernel-team
The h->*_huge_pages counters are protected by the hugetlb_lock, but
alloc_huge_page has a corner case where it can decrement the counter
outside of the lock.

This could lead to a corrupted value of h->resv_huge_pages, which we
have observed on our systems.

Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid
a potential race.
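
To make the window concrete, here is a simplified sketch of the
pre-patch ordering in alloc_huge_page (trimmed to just the lines
touched by the hunk below):

	page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
	if (!page)
		goto out_uncharge_cgroup;
	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
		SetHPageRestoreReserve(page);
		h->resv_huge_pages--;	/* hugetlb_lock is NOT held here */
	}
	spin_lock_irq(&hugetlb_lock);	/* lock taken only after the decrement */

Any other path that updates h->resv_huge_pages under hugetlb_lock can
race with that unlocked read-modify-write and leave the counter with a
bogus value; taking the lock before the reserve check closes the window.
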
Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
Cc: stable@kernel.org
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Glen McCready <gkmccready@meta.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b586cdd75930..dede0337c07c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
 		if (!page)
 			goto out_uncharge_cgroup;
+		spin_lock_irq(&hugetlb_lock);
 		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
 			SetHPageRestoreReserve(page);
 			h->resv_huge_pages--;
 		}
-		spin_lock_irq(&hugetlb_lock);
 		list_add(&page->lru, &h->hugepage_activelist);
 		set_page_refcounted(page);
 		/* Fall through */
--
2.37.2
* Re: [PATCH] mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages
From: Mike Kravetz @ 2022-10-18 2:05 UTC
To: Rik van Riel
Cc: linux-kernel, linux-mm, stable, Naoya Horiguchi, Glen McCready,
Muchun Song, Andrew Morton, kernel-team
On 10/17/22 20:25, Rik van Riel wrote:
> The h->*_huge_pages counters are protected by the hugetlb_lock, but
> alloc_huge_page has a corner case where it can decrement the counter
> outside of the lock.
>
> This could lead to a corrupted value of h->resv_huge_pages, which we
> have observed on our systems.
>
> Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid
> a potential race.
>
> Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
> Cc: stable@kernel.org
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: Glen McCready <gkmccready@meta.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Muchun Song <songmuchun@bytedance.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Thanks Rik! That case did slip through the cracks.
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
--
Mike Kravetz
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b586cdd75930..dede0337c07c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
>  		if (!page)
>  			goto out_uncharge_cgroup;
> +		spin_lock_irq(&hugetlb_lock);
>  		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
>  			SetHPageRestoreReserve(page);
>  			h->resv_huge_pages--;
>  		}
> -		spin_lock_irq(&hugetlb_lock);
>  		list_add(&page->lru, &h->hugepage_activelist);
>  		set_page_refcounted(page);
>  		/* Fall through */
> --
> 2.37.2
>
>