Date: Sun, 20 Nov 2022 00:29:50 +0530
From: Tarun Sahu <tsahu@linux.ibm.com>
To: Sidhartha Kumar
Cc: 
linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com
Subject: Re: [PATCH mm-unstable v3 06/10] mm/hugetlb: convert add_hugetlb_page() to folios and add hugetlb_cma_folio()
Message-ID: <20221119185950.nko3aki3gvh4zu64@tarunpc>
References: <20221117211501.17150-1-sidhartha.kumar@oracle.com> <20221117211501.17150-7-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117211501.17150-7-sidhartha.kumar@oracle.com>
Hi,

Though the patch is already merged, this is just a small comment about a code comment.

On Nov 17 2022, Sidhartha Kumar wrote:
> Convert add_hugetlb_page() to take in a folio, also convert
> hugetlb_cma_page() to take in a folio.
>
> Signed-off-by: Sidhartha Kumar
> ---
>  mm/hugetlb.c | 40 ++++++++++++++++++++--------------------
>  1 file changed, 20 insertions(+), 20 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 80301fab56d8..bf36aa8e6072 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -54,13 +54,13 @@ struct hstate hstates[HUGE_MAX_HSTATE];
>  #ifdef CONFIG_CMA
>  static struct cma *hugetlb_cma[MAX_NUMNODES];
>  static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata;
> -static bool hugetlb_cma_page(struct page *page, unsigned int order)
> +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order)
>  {
> -	return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page,
> +	return cma_pages_valid(hugetlb_cma[folio_nid(folio)], &folio->page,
>  				1 << order);
>  }
>  #else
> -static bool hugetlb_cma_page(struct page *page, unsigned int order)
> +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order)
>  {
>  	return false;
>  }
> @@ -1506,17 +1506,17 @@ static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *foli
>  	__remove_hugetlb_folio(h, folio, adjust_surplus, true);
>  }
>
> -static void add_hugetlb_page(struct hstate *h, struct page *page,
> +static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
>  				bool adjust_surplus)
>  {
>  	int zeroed;
> -	int nid = page_to_nid(page);
> +	int nid = folio_nid(folio);
>
> -	VM_BUG_ON_PAGE(!HPageVmemmapOptimized(page), page);
> +	VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio);
>
>  	lockdep_assert_held(&hugetlb_lock);
>
> -	INIT_LIST_HEAD(&page->lru);
> +	INIT_LIST_HEAD(&folio->lru);
>  	h->nr_huge_pages++;
>  	h->nr_huge_pages_node[nid]++;
>
> @@ -1525,21 +1525,21 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
>  		h->surplus_huge_pages_node[nid]++;
>  	}
>
> -	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> -	set_page_private(page, 0);
> +	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
> +	folio_change_private(folio, 0);
>  	/*
>  	 * We have to set HPageVmemmapOptimized again as above
   ^ This comment can be changed to the folio version of itself.
> -	 * set_page_private(page, 0) cleared it.
> +	 * folio_change_private(folio, 0) cleared it.
>  	 */
> -	SetHPageVmemmapOptimized(page);
> +	folio_set_hugetlb_vmemmap_optimized(folio);
>
>  	/*
> -	 * This page is about to be managed by the hugetlb allocator and
> +	 * This folio is about to be managed by the hugetlb allocator and
>  	 * should have no users.  Drop our reference, and check for others
>  	 * just in case.
>  	 */
> -	zeroed = put_page_testzero(page);
> -	if (!zeroed)
> +	zeroed = folio_put_testzero(folio);
> +	if (unlikely(!zeroed))
>  		/*
>  		 * It is VERY unlikely soneone else has taken a ref on
>  		 * the page.  In this case, we simply return as the
> @@ -1548,8 +1548,8 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
>  		 */
>  		return;
>
> -	arch_clear_hugepage_flags(page);
> -	enqueue_huge_page(h, page);
> +	arch_clear_hugepage_flags(&folio->page);
> +	enqueue_huge_page(h, &folio->page);
>  }
>
>  static void __update_and_free_page(struct hstate *h, struct page *page)
> @@ -1575,7 +1575,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
>  	 * page and put the page back on the hugetlb free list and treat
>  	 * as a surplus page.
>  	 */
> -	add_hugetlb_page(h, page, true);
> +	add_hugetlb_folio(h, page_folio(page), true);
>  	spin_unlock_irq(&hugetlb_lock);
>  	return;
>  }
> @@ -1600,7 +1600,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
>  	 * need to be given back to CMA in free_gigantic_page.
>  	 */
>  	if (hstate_is_gigantic(h) ||
> -	    hugetlb_cma_page(page, huge_page_order(h))) {
> +	    hugetlb_cma_folio(folio, huge_page_order(h))) {
>  		destroy_compound_gigantic_folio(folio, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
>  	} else {
> @@ -2184,7 +2184,7 @@ int dissolve_free_huge_page(struct page *page)
>  		update_and_free_hugetlb_folio(h, folio, false);
>  	} else {
>  		spin_lock_irq(&hugetlb_lock);
> -		add_hugetlb_page(h, &folio->page, false);
> +		add_hugetlb_folio(h, folio, false);
>  		h->max_huge_pages++;
>  		spin_unlock_irq(&hugetlb_lock);
>  	}
> @@ -3451,7 +3451,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
>  		/* Allocation of vmemmmap failed, we can not demote page */
>  		spin_lock_irq(&hugetlb_lock);
>  		set_page_refcounted(page);
> -		add_hugetlb_page(h, page, false);
> +		add_hugetlb_folio(h, page_folio(page), false);
>  		return rc;
>  	}
>
> -- 
> 2.38.1
>