Subject: Re: [PATCH 1/1] hugetlb_vmemmap: use folio argument for hugetlb_vmemmap_* functions
From: Muchun Song <muchun.song@linux.dev>
To: Usama Arif, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, mike.kravetz@oracle.com, songmuchun@bytedance.com, fam.zheng@bytedance.com, liangma@liangbit.com, punit.agrawal@bytedance.com
Date: Tue, 10 Oct 2023 14:58:56 +0800
In-Reply-To: <20231009151830.2248885-2-usama.arif@bytedance.com>
References: <20231009151830.2248885-1-usama.arif@bytedance.com> <20231009151830.2248885-2-usama.arif@bytedance.com>

On 2023/10/9 23:18, Usama Arif wrote:
> Most function calls in hugetlb.c are made with folio arguments.
> This brings hugetlb_vmemmap calls inline with them by using folio
> instead of head struct page. Head struct page is still needed
> within these functions.
>
> The set/clear/test functions for hugepages are also changed to
> folio versions.
>
> Signed-off-by: Usama Arif
> ---
>   mm/hugetlb.c         | 10 +++++-----
>   mm/hugetlb_vmemmap.c | 42 ++++++++++++++++++++++--------------------
>   mm/hugetlb_vmemmap.h |  8 ++++----
>   3 files changed, 31 insertions(+), 29 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b12f5fd295bb..73803d62066a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1606,7 +1606,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>   	 * is no longer identified as a hugetlb page. hugetlb_vmemmap_restore
>   	 * can only be passed hugetlb pages and will BUG otherwise.
>   	 */
> -	if (clear_dtor && hugetlb_vmemmap_restore(h, &folio->page)) {
> +	if (clear_dtor && hugetlb_vmemmap_restore(h, folio)) {
>   		spin_lock_irq(&hugetlb_lock);
>   		/*
>   		 * If we cannot allocate vmemmap pages, just refuse to free the
> @@ -1749,7 +1749,7 @@ static void bulk_vmemmap_restore_error(struct hstate *h,
>   	 * quit processing the list to retry the bulk operation.
>   	 */
>   	list_for_each_entry_safe(folio, t_folio, folio_list, lru)
> -		if (hugetlb_vmemmap_restore(h, &folio->page)) {
> +		if (hugetlb_vmemmap_restore(h, folio)) {
>   			list_del(&folio->lru);
>   			spin_lock_irq(&hugetlb_lock);
>   			add_hugetlb_folio(h, folio, true);
> @@ -1907,7 +1907,7 @@ static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
>   static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
>   {
>   	init_new_hugetlb_folio(h, folio);
> -	hugetlb_vmemmap_optimize(h, &folio->page);
> +	hugetlb_vmemmap_optimize(h, folio);
>   }
>
>   static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid)
> @@ -2312,7 +2312,7 @@ int dissolve_free_huge_page(struct page *page)
>   		 * Attempt to allocate vmemmmap here so that we can take
>   		 * appropriate action on failure.
>   		 */
> -		rc = hugetlb_vmemmap_restore(h, &folio->page);
> +		rc = hugetlb_vmemmap_restore(h, folio);
>   		if (!rc) {
>   			update_and_free_hugetlb_folio(h, folio, false);
>   		} else {
> @@ -3721,7 +3721,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>   	 * passed hugetlb folios and will BUG otherwise.
>   	 */
>   	if (folio_test_hugetlb(folio)) {
> -		rc = hugetlb_vmemmap_restore(h, &folio->page);
> +		rc = hugetlb_vmemmap_restore(h, folio);
>   		if (rc) {
>   			/* Allocation of vmemmmap failed, we can not demote folio */
>   			spin_lock_irq(&hugetlb_lock);
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index d2999c303031..84b5ac93b9e5 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -495,14 +495,15 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>   static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
>   core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>
> -static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, unsigned long flags)
> +static int __hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio, unsigned long flags)
>   {
>   	int ret;
> +	struct page *head = &folio->page;
>   	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
>   	unsigned long vmemmap_reuse;
>
>   	VM_WARN_ON_ONCE(!PageHuge(head));
> -	if (!HPageVmemmapOptimized(head))
> +	if (!folio_test_hugetlb_vmemmap_optimized(folio))
>   		return 0;
>
>   	vmemmap_end	= vmemmap_start + hugetlb_vmemmap_size(h);
> @@ -518,7 +519,7 @@ static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head,
>   	 */
>   	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, flags);
>   	if (!ret) {
> -		ClearHPageVmemmapOptimized(head);
> +		folio_clear_hugetlb_vmemmap_optimized(folio);
>   		static_branch_dec(&hugetlb_optimize_vmemmap_key);
>   	}
>
> @@ -530,14 +531,14 @@ static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head,
>    *			hugetlb_vmemmap_optimize()) vmemmap pages which
>    *			will be reallocated and remapped.
>    * @h:		struct hstate.
> - * @head:	the head page whose vmemmap pages will be restored.
> + * @folio:	the folio whose vmemmap pages will be restored.
>    *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * Return: %0 if @folio's vmemmap pages have been reallocated and remapped,
>    * negative error code otherwise.
>    */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio)

I'd like to rename this to hugetlb_vmemmap_restore_folio to be
consistent with hugetlb_vmemmap_restore_folios.
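Something like this; only a sketch of the suggested name, with the
one-line wrapper body taken from this patch:

int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio)
{
	return __hugetlb_vmemmap_restore(h, folio, 0);
}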
>   {
> -	return __hugetlb_vmemmap_restore(h, head, 0);
> +	return __hugetlb_vmemmap_restore(h, folio, 0);
>   }
>
>   /**
> @@ -563,7 +564,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>
>   	list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
>   		if (folio_test_hugetlb_vmemmap_optimized(folio)) {
> -			ret = __hugetlb_vmemmap_restore(h, &folio->page,
> +			ret = __hugetlb_vmemmap_restore(h, folio,
>   							VMEMMAP_REMAP_NO_TLB_FLUSH);
>   			if (ret)
>   				break;
> @@ -641,11 +642,12 @@ static bool vmemmap_should_optimize(const struct hstate *h, const struct page *h
>   }
>
>   static int __hugetlb_vmemmap_optimize(const struct hstate *h,
> -					struct page *head,
> +					struct folio *folio,
>   					struct list_head *vmemmap_pages,
>   					unsigned long flags)
>   {
>   	int ret = 0;
> +	struct page *head = &folio->page;
>   	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
>   	unsigned long vmemmap_reuse;
>
> @@ -665,7 +667,7 @@ static int __hugetlb_vmemmap_optimize(const struct hstate *h,
>   	 * If there is an error during optimization, we will immediately FLUSH
>   	 * the TLB and clear the flag below.
>   	 */
> -	SetHPageVmemmapOptimized(head);
> +	folio_set_hugetlb_vmemmap_optimized(folio);
>
>   	vmemmap_end	= vmemmap_start + hugetlb_vmemmap_size(h);
>   	vmemmap_reuse	= vmemmap_start;
> @@ -681,27 +683,27 @@ static int __hugetlb_vmemmap_optimize(const struct hstate *h,
>   					vmemmap_pages, flags);
>   	if (ret) {
>   		static_branch_dec(&hugetlb_optimize_vmemmap_key);
> -		ClearHPageVmemmapOptimized(head);
> +		folio_clear_hugetlb_vmemmap_optimized(folio);
>   	}
>
>   	return ret;
>   }
>
>   /**
> - * hugetlb_vmemmap_optimize - optimize @head page's vmemmap pages.
> + * hugetlb_vmemmap_optimize - optimize @folio's vmemmap pages.
>    * @h:		struct hstate.
> - * @head:	the head page whose vmemmap pages will be optimized.
> + * @folio:	the folio whose vmemmap pages will be optimized.
>    *
> - * This function only tries to optimize @head's vmemmap pages and does not
> + * This function only tries to optimize @folio's vmemmap pages and does not
>    * guarantee that the optimization will succeed after it returns. The caller
> - * can use HPageVmemmapOptimized(@head) to detect if @head's vmemmap pages
> - * have been optimized.
> + * can use folio_test_hugetlb_vmemmap_optimized(@folio) to detect if @folio's
> + * vmemmap pages have been optimized.
>    */
> -void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
> +void hugetlb_vmemmap_optimize(const struct hstate *h, struct folio *folio)

The same applies here: hugetlb_vmemmap_optimize_folio would be
consistent with hugetlb_vmemmap_optimize_folios (see the sketch of the
resulting header declarations after the quoted hunks below).

Otherwise, LGTM. Please feel free to add:

Reviewed-by: Muchun Song <muchun.song@linux.dev>

in your next version. Thanks.
>   {
>   	LIST_HEAD(vmemmap_pages);
>
> -	__hugetlb_vmemmap_optimize(h, head, &vmemmap_pages, 0);
> +	__hugetlb_vmemmap_optimize(h, folio, &vmemmap_pages, 0);
>   	free_vmemmap_page_list(&vmemmap_pages);
>   }
>
> @@ -745,7 +747,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>   		flush_tlb_all();
>
>   	list_for_each_entry(folio, folio_list, lru) {
> -		int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
> +		int ret = __hugetlb_vmemmap_optimize(h, folio,
>   						&vmemmap_pages,
>   						VMEMMAP_REMAP_NO_TLB_FLUSH);
>
> @@ -761,7 +763,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>   			flush_tlb_all();
>   			free_vmemmap_page_list(&vmemmap_pages);
>   			INIT_LIST_HEAD(&vmemmap_pages);
> -			__hugetlb_vmemmap_optimize(h, &folio->page,
> +			__hugetlb_vmemmap_optimize(h, folio,
>   						&vmemmap_pages,
>   						VMEMMAP_REMAP_NO_TLB_FLUSH);
>   		}
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index a0dcf49f46ba..6a06dccd7ffa 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -18,11 +18,11 @@
>   #define HUGETLB_VMEMMAP_RESERVE_PAGES	(HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page))
>
>   #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio);
>   long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>   			struct list_head *folio_list,
>   			struct list_head *non_hvo_folios);
> -void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
> +void hugetlb_vmemmap_optimize(const struct hstate *h, struct folio *folio);
>   void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
>
>   static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
> @@ -43,7 +43,7 @@ static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate
>   	return size > 0 ? size : 0;
>   }
>   #else
> -static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio)
>   {
>   	return 0;
>   }
> @@ -56,7 +56,7 @@ static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>   	return 0;
>   }
>
> -static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
> +static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct folio *folio)
>   {
>   }
>
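For reference, here is how the mm/hugetlb_vmemmap.h declarations would
look with both suggested renames applied, so the single-folio helpers
pair up with the existing _folios variants. Only a sketch; apart from
the two new names, the signatures are taken from this patch:

int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio);
long hugetlb_vmemmap_restore_folios(const struct hstate *h,
			struct list_head *folio_list,
			struct list_head *non_hvo_folios);
void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio);
void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);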