Date: Fri, 22 Mar 2024 13:00:04 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Miaohe Lin
Cc: linux-mm@kvack.org, David Hildenbrand, Vlastimil Babka, Muchun Song,
 Oscar Salvador, Andrew Morton
Subject: Re: [PATCH 1/9] mm: Always initialise folio->_deferred_list
Message-ID:
References: <20240321142448.1645400-1-willy@infradead.org>
 <20240321142448.1645400-2-willy@infradead.org>
 <41ea6bcc-f40c-a651-2aae-e68fdaeb1021@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <41ea6bcc-f40c-a651-2aae-e68fdaeb1021@huawei.com>

On Fri, Mar 22, 2024 at 04:23:59PM +0800, Miaohe Lin wrote:
> > +++ b/mm/hugetlb.c
> > @@ -1796,7 +1796,8 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
> >  		destroy_compound_gigantic_folio(folio, huge_page_order(h));
> >  		free_gigantic_folio(folio, huge_page_order(h));
> >  	} else {
> > -		__free_pages(&folio->page, huge_page_order(h));
> > +		INIT_LIST_HEAD(&folio->_deferred_list);
> 
> Will it be better to add a comment to explain why INIT_LIST_HEAD is needed?

Maybe?  Something like

	/* We reused this space for our own purposes */

> > +		folio_put(folio);
> 
> Can all __free_pages be replaced with folio_put in mm/hugetlb.c?

There's only one left, and indeed it can!  I'll drop this into my tree
and send it as a proper patch later.

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 333f6278ef63..43cc7e6bc374 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2177,13 +2177,13 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 		nodemask_t *node_alloc_noretry)
 {
 	int order = huge_page_order(h);
-	struct page *page;
+	struct folio *folio;
 	bool alloc_try_hard = true;
 	bool retry = true;
 
 	/*
-	 * By default we always try hard to allocate the page with
-	 * __GFP_RETRY_MAYFAIL flag.  However, if we are allocating pages in
+	 * By default we always try hard to allocate the folio with
+	 * __GFP_RETRY_MAYFAIL flag.  However, if we are allocating folios in
 	 * a loop (to adjust global huge page counts) and previous allocation
 	 * failed, do not continue to try hard on the same node.  Use the
 	 * node_alloc_noretry bitmap to manage this state information.
@@ -2196,43 +2196,42 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 retry:
-	page = __alloc_pages(gfp_mask, order, nid, nmask);
+	folio = __folio_alloc(gfp_mask, order, nid, nmask);
 
-	/* Freeze head page */
-	if (page && !page_ref_freeze(page, 1)) {
-		__free_pages(page, order);
+	if (folio && !folio_ref_freeze(folio, 1)) {
+		folio_put(folio);
 		if (retry) {	/* retry once */
 			retry = false;
 			goto retry;
 		}
 		/* WOW!  twice in a row.  */
-		pr_warn("HugeTLB head page unexpected inflated ref count\n");
-		page = NULL;
+		pr_warn("HugeTLB unexpected inflated folio ref count\n");
+		folio = NULL;
 	}
 
 	/*
-	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a page this
-	 * indicates an overall state change.  Clear bit so that we resume
-	 * normal 'try hard' allocations.
+	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a
+	 * folio this indicates an overall state change.  Clear bit so
+	 * that we resume normal 'try hard' allocations.
 	 */
-	if (node_alloc_noretry && page && !alloc_try_hard)
+	if (node_alloc_noretry && folio && !alloc_try_hard)
 		node_clear(nid, *node_alloc_noretry);
 
 	/*
-	 * If we tried hard to get a page but failed, set bit so that
+	 * If we tried hard to get a folio but failed, set bit so that
 	 * subsequent attempts will not try as hard until there is an
 	 * overall state change.
 	 */
-	if (node_alloc_noretry && !page && alloc_try_hard)
+	if (node_alloc_noretry && !folio && alloc_try_hard)
 		node_set(nid, *node_alloc_noretry);
 
-	if (!page) {
+	if (!folio) {
 		__count_vm_event(HTLB_BUDDY_PGALLOC_FAIL);
 		return NULL;
 	}
 
 	__count_vm_event(HTLB_BUDDY_PGALLOC);
-	return page_folio(page);
+	return folio;
 }
 
 static struct folio *__alloc_fresh_hugetlb_folio(struct hstate *h,