From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH] hugetlb: freeze allocated pages before creating hugetlb pages
From: Miaohe Lin
To: Mike Kravetz
CC: Muchun Song, Joao Martins, Matthew Wilcox, Michal Hocko, Peter Xu,
 Andrew Morton, Linux-MM, linux-kernel
Date: Tue, 9 Aug 2022 10:48:53 +0800
Message-ID: <119542cd-939f-3185-1d51-a177d4da1dff@huawei.com>
In-Reply-To: <20220808212836.111749-1-mike.kravetz@oracle.com>
References: <20220808212836.111749-1-mike.kravetz@oracle.com>
On 2022/8/9 5:28, Mike Kravetz wrote:
> When creating hugetlb pages, the hugetlb code must first allocate
> contiguous pages from a low level allocator such as buddy, cma or
> memblock.  The pages returned from these low level allocators are
> ref counted.  This creates potential issues with other code taking
> speculative references on these pages before they can be transformed to
> a hugetlb page.  This issue has been addressed with methods and code
> such as that provided in [1].
>
> Recent discussions about vmemmap freeing [2] have indicated that it
> would be beneficial to freeze all sub pages, including the head page
> of pages returned from low level allocators, before converting to a
> hugetlb page.  This helps avoid races if we want to replace the page
> containing vmemmap for the head page.
>
> There have been proposals to change at least the buddy allocator to
> return frozen pages as described at [3].  If such a change is made, it
> can be employed by the hugetlb code.  However, as mentioned above,
> hugetlb uses several low level allocators so each would need to be
> modified to return frozen pages.  For now, we can manually freeze the
> returned pages.  This is done in two places:
> 1) alloc_buddy_huge_page, only the returned head page is ref counted.
>    We freeze the head page, retrying once in the VERY rare case where
>    there may be an inflated ref count.
> 2) prep_compound_gigantic_page, for gigantic pages the current code
>    freezes all pages except the head page.  New code will simply freeze
>    the head page as well.
>
> In a few other places, code checks for inflated ref counts on newly
> allocated hugetlb pages.  With the modifications to freeze after
> allocating, this code can be removed.
>
> After hugetlb pages are freshly allocated, they are often added to the
> hugetlb free lists.  Since these pages were previously ref counted, this
> was done via put_page() which would end up calling the hugetlb
> destructor: free_huge_page.  With changes to freeze pages, we simply
> call free_huge_page directly to add the pages to the free list.
>
> In a few other places, freshly allocated hugetlb pages were immediately
> put into use, and the expectation was they were already ref counted.  In
> these cases, we must manually ref count the page.
>
> [1] https://lore.kernel.org/linux-mm/20210622021423.154662-3-mike.kravetz@oracle.com/
> [2] https://lore.kernel.org/linux-mm/20220802180309.19340-1-joao.m.martins@oracle.com/
> [3] https://lore.kernel.org/linux-mm/20220531150611.1303156-1-willy@infradead.org/
>
> Signed-off-by: Mike Kravetz
> ---
>  mm/hugetlb.c | 97 +++++++++++++++++++---------------------------------
>  1 file changed, 35 insertions(+), 62 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 28516881a1b2..6b90d85d545b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1769,13 +1769,12 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
>  {
>  	int i, j;
>  	int nr_pages = 1 << order;
> -	struct page *p = page + 1;
> +	struct page *p = page;
>
>  	/* we rely on prep_new_huge_page to set the destructor */
>  	set_compound_order(page, order);
> -	__ClearPageReserved(page);
>  	__SetPageHead(page);
> -	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
> +	for (i = 0; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
>  		/*
>  		 * For gigantic hugepages allocated through bootmem at
>  		 * boot, it's safer to be consistent with the not-gigantic
> @@ -1814,7 +1813,8 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
>  		} else {
>  			VM_BUG_ON_PAGE(page_count(p), p);
>  		}
> -		set_compound_head(p, page);
> +		if (i != 0)
> +			set_compound_head(p, page);

It seems we forgot to unfreeze the head page in the out_error path. If an
unexpected inflated ref count occurs there, won't the ref count of the head
page go negative in free_gigantic_page?

Thanks for your patch, Mike. I hope this will help solve the races with
memory failure. ;) I will take a closer look when I have time.
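
Maybe the unwind in out_error needs to cover the head page as well, so that
its ref count is restored before free_gigantic_page drops it. A rough,
untested sketch of what I mean, reusing the helpers this function already
uses:

out_error:
	/* undo page modifications made above */
	p = page;
	for (j = 0; j < i; j++, p = mem_map_next(p, page, j)) {
		if (j != 0)
			clear_compound_head(p);
		/* restore the frozen ref count of the head page too */
		set_page_refcounted(p);
	}
	...

But maybe I am missing something here.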
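
BTW, just to check my understanding of point 1) in the log: the idea in
alloc_buddy_huge_page would be roughly the below? Hand-written from the
description with the allocator call simplified, not copied from your patch:

	page = alloc_pages_node(nid, gfp_mask, order);
	if (page && !page_ref_freeze(page, 1)) {
		/* very rare: someone took a speculative ref; retry once */
		__free_pages(page, order);
		page = alloc_pages_node(nid, gfp_mask, order);
		if (page && !page_ref_freeze(page, 1)) {
			__free_pages(page, order);
			page = NULL;
		}
	}

Please correct me if that is not what the patch does.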