Subject: Re: [PATCH 1/9] mm: Always initialise folio->_deferred_list
To: Matthew Wilcox
CC: David Hildenbrand, Vlastimil Babka, Muchun Song, Oscar Salvador, Andrew Morton
References: <20240321142448.1645400-1-willy@infradead.org> <20240321142448.1645400-2-willy@infradead.org>
 <41ea6bcc-f40c-a651-2aae-e68fdaeb1021@huawei.com>
From: Miaohe Lin
Date: Mon, 1 Apr 2024 11:14:42 +0800

On 2024/3/22 21:00, Matthew Wilcox wrote:
> On Fri, Mar 22, 2024 at 04:23:59PM +0800, Miaohe Lin wrote:
>>> +++ b/mm/hugetlb.c
>>> @@ -1796,7 +1796,8 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>>>  		destroy_compound_gigantic_folio(folio, huge_page_order(h));
>>>  		free_gigantic_folio(folio, huge_page_order(h));
>>>  	} else {
>>> -		__free_pages(&folio->page, huge_page_order(h));
>>> +		INIT_LIST_HEAD(&folio->_deferred_list);
>>
>> Will it be better to add a comment to explain why INIT_LIST_HEAD is needed?

Sorry for the late reply, I was on off-the-job training last week. It was really tiring. :(

> Maybe? Something like
> 	/* We reused this space for our own purposes */

This one looks good to me.
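For reference, with that comment the else branch of __update_and_free_hugetlb_folio would read roughly as below (a sketch reconstructed from the hunk above plus the suggested comment, not necessarily the final wording):

	} else {
		/* We reused this space for our own purposes */
		INIT_LIST_HEAD(&folio->_deferred_list);
		folio_put(folio);
	}

The comment documents that hugetlb reuses the space overlapping _deferred_list while it owns the folio, so the list head needs to be re-initialised before the folio is handed back to the page allocator via folio_put().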

>
>>> +		folio_put(folio);
>>
>> Can all __free_pages be replaced with folio_put in mm/hugetlb.c?
>
> There's only one left, and indeed it can!
>
> I'll drop this into my tree and send it as a proper patch later.
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 333f6278ef63..43cc7e6bc374 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2177,13 +2177,13 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
>  		nodemask_t *node_alloc_noretry)
>  {
>  	int order = huge_page_order(h);
> -	struct page *page;
> +	struct folio *folio;
>  	bool alloc_try_hard = true;
>  	bool retry = true;
> 
>  	/*
> -	 * By default we always try hard to allocate the page with
> -	 * __GFP_RETRY_MAYFAIL flag. However, if we are allocating pages in
> +	 * By default we always try hard to allocate the folio with
> +	 * __GFP_RETRY_MAYFAIL flag. However, if we are allocating folios in
>  	 * a loop (to adjust global huge page counts) and previous allocation
>  	 * failed, do not continue to try hard on the same node. Use the
>  	 * node_alloc_noretry bitmap to manage this state information.
> @@ -2196,43 +2196,42 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
>  	if (nid == NUMA_NO_NODE)
>  		nid = numa_mem_id();
> retry:
> -	page = __alloc_pages(gfp_mask, order, nid, nmask);
> +	folio = __folio_alloc(gfp_mask, order, nid, nmask);
> 
> -	/* Freeze head page */
> -	if (page && !page_ref_freeze(page, 1)) {
> -		__free_pages(page, order);
> +	if (folio && !folio_ref_freeze(folio, 1)) {
> +		folio_put(folio);
>  		if (retry) {	/* retry once */
>  			retry = false;
>  			goto retry;
>  		}
>  		/* WOW! twice in a row. */
> -		pr_warn("HugeTLB head page unexpected inflated ref count\n");
> -		page = NULL;
> +		pr_warn("HugeTLB unexpected inflated folio ref count\n");
> +		folio = NULL;
>  	}
> 
>  	/*
> -	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a page this
> -	 * indicates an overall state change. Clear bit so that we resume
> -	 * normal 'try hard' allocations.
> +	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a
> +	 * folio this indicates an overall state change. Clear bit so
> +	 * that we resume normal 'try hard' allocations.
>  	 */
> -	if (node_alloc_noretry && page && !alloc_try_hard)
> +	if (node_alloc_noretry && folio && !alloc_try_hard)
>  		node_clear(nid, *node_alloc_noretry);
> 
>  	/*
> -	 * If we tried hard to get a page but failed, set bit so that
> +	 * If we tried hard to get a folio but failed, set bit so that
>  	 * subsequent attempts will not try as hard until there is an
>  	 * overall state change.
>  	 */
> -	if (node_alloc_noretry && !page && alloc_try_hard)
> +	if (node_alloc_noretry && !folio && alloc_try_hard)
>  		node_set(nid, *node_alloc_noretry);
> 
> -	if (!page) {
> +	if (!folio) {
>  		__count_vm_event(HTLB_BUDDY_PGALLOC_FAIL);
>  		return NULL;
>  	}
> 
>  	__count_vm_event(HTLB_BUDDY_PGALLOC);
> -	return page_folio(page);
> +	return folio;
>  }
> 
>  static struct folio *__alloc_fresh_hugetlb_folio(struct hstate *h,
> .

This also looks good to me. Thanks for your work.