Subject: Re: [PATCH v3] hugetlb: freeze allocated pages before creating hugetlb pages
From: Miaohe Lin <linmiaohe@huawei.com>
To: Mike Kravetz, linux-mm@kvack.org
Cc: Muchun Song, Joao Martins, Matthew Wilcox, Michal Hocko, Peter Xu, Oscar Salvador, Naoya Horiguchi, Vlastimil Babka, Andrew Morton
Date: Sat, 24 Sep 2022 19:20:04 +0800
In-Reply-To: <20220921202702.106069-1-mike.kravetz@oracle.com>
References: <20220921202702.106069-1-mike.kravetz@oracle.com>

On 2022/9/22 4:27, Mike Kravetz wrote:
> When creating hugetlb pages, the hugetlb code must first allocate
> contiguous pages from a low-level allocator such as buddy, CMA, or
> memblock. The pages returned from these low-level allocators are
> ref counted. This creates potential issues with other code taking
> speculative references on these pages before they can be transformed
> into a hugetlb page. This issue has been addressed with methods and
> code such as that provided in [1].
>
> Recent discussions about vmemmap freeing [2] have indicated that it
> would be beneficial to freeze all sub pages, including the head page,
> of pages returned from low-level allocators before converting them to
> a hugetlb page. This helps avoid races if we want to replace the page
> containing vmemmap for the head page.
>
> There have been proposals to change at least the buddy allocator to
> return frozen pages, as described at [3]. If such a change is made, it
> can be employed by the hugetlb code. However, as mentioned above,
> hugetlb uses several low-level allocators, so each would need to be
> modified to return frozen pages. For now, we can manually freeze the
> returned pages. This is done in two places:
> 1) alloc_buddy_huge_page: only the returned head page is ref counted.
>    We freeze the head page, retrying once in the VERY rare case where
>    there may be an inflated ref count.
> 2) prep_compound_gigantic_page: for gigantic pages, the current code
>    freezes all pages except the head page. New code will simply freeze
>    the head page as well.
>
> In a few other places, code checks for inflated ref counts on newly
> allocated hugetlb pages. With the modifications to freeze after
> allocating, this code can be removed.
>
> After hugetlb pages are freshly allocated, they are often added to the
> hugetlb free lists. Since these pages were previously ref counted, this
> was done via put_page(), which would end up calling the hugetlb
> destructor: free_huge_page. With the changes to freeze pages, we simply
> call free_huge_page directly to add the pages to the free list.
>
> In a few other places, freshly allocated hugetlb pages were immediately
> put into use, and the expectation was that they were already ref
> counted. In these cases, we must manually ref count the page.
>
> [1] https://lore.kernel.org/linux-mm/20210622021423.154662-3-mike.kravetz@oracle.com/
> [2] https://lore.kernel.org/linux-mm/20220802180309.19340-1-joao.m.martins@oracle.com/
> [3] https://lore.kernel.org/linux-mm/20220809171854.3725722-1-willy@infradead.org/
>
> Signed-off-by: Mike Kravetz

Thanks Mike. The code looks simpler now.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks,
Miaohe Lin
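
P.S. For anyone skimming the thread, the retry-once logic in point 1)
above boils down to the pattern below. This is only a sketch of the idea
as described in the commit message, not the literal hunk from the patch:
page_ref_freeze(), __alloc_pages(), and __free_pages() are the real
kernel helpers, but the function name alloc_and_freeze() and the exact
control flow are paraphrased for illustration.

#include <linux/gfp.h>		/* __alloc_pages(), __free_pages() */
#include <linux/page_ref.h>	/* page_ref_freeze() */

/*
 * Illustrative sketch only -- see the patch itself for the real code in
 * alloc_buddy_huge_page(). page_ref_freeze(page, 1) atomically drops
 * the refcount from 1 to 0, and fails if a speculative reference has
 * inflated the count.
 */
static struct page *alloc_and_freeze(gfp_t gfp_mask, unsigned int order,
				     int nid, nodemask_t *nmask)
{
	bool retry = true;
	struct page *page;

retry:
	page = __alloc_pages(gfp_mask, order, nid, nmask);
	if (!page)
		return NULL;

	if (!page_ref_freeze(page, 1)) {
		/* Someone holds a speculative ref: free and retry once. */
		__free_pages(page, order);
		if (retry) {
			retry = false;
			goto retry;
		}
		/* An inflated ref count twice in a row is VERY rare. */
		return NULL;
	}

	/* Refcount is now 0; safe to convert into a hugetlb head page. */
	return page;
}

Since freezing succeeds only when no one else holds a reference, a
single retry is enough in practice: speculative references are
short-lived, and hitting one twice in a row on a freshly allocated page
should essentially never happen.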