References: <20210325002835.216118-1-mike.kravetz@oracle.com> <20210325002835.216118-5-mike.kravetz@oracle.com>
In-Reply-To: <20210325002835.216118-5-mike.kravetz@oracle.com>
From: Muchun Song
Date: Sat, 27 Mar 2021 14:36:42 +0800
Subject: Re: [External] [PATCH 4/8] hugetlb: create remove_hugetlb_page() to separate functionality
To: Mike Kravetz
Cc: Linux Memory Management List, LKML, Roman Gushchin, Michal Hocko, Shakeel Butt, Oscar Salvador, David Hildenbrand, David Rientjes, Miaohe Lin, Peter Zijlstra, Matthew Wilcox, HORIGUCHI NAOYA, "Aneesh Kumar K . V", Waiman Long, Peter Xu, Mina Almasry, Hillf Danton, Andrew Morton

On Thu, Mar 25, 2021 at 8:29 AM Mike Kravetz wrote:
>
> The new remove_hugetlb_page() routine is designed to remove a hugetlb
> page from hugetlbfs processing. It will remove the page from the active
> or free list, update global counters and set the compound page
> destructor to NULL so that PageHuge() will return false for the 'page'.
> After this call, the 'page' can be treated as a normal compound page or
> a collection of base size pages.
>
> remove_hugetlb_page is to be called with the hugetlb_lock held.
>
> Creating this routine and separating functionality is in preparation for
> restructuring code to reduce lock hold times.
>
> Signed-off-by: Mike Kravetz

Reviewed-by: Muchun Song

Thanks for your effort on this.
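Just to double-check my reading of the accounting described above, the helper's counter updates boil down to the sketch below. This is a minimal userspace model, not kernel code: struct hstate, struct page, and the freed flag are simplified stand-ins for the real structures and HPageFreed(), and the early return for gigantic pages without runtime support is omitted.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct hstate: only the counters the helper touches. */
#define MAX_NODES 2
struct hstate {
    long nr_huge_pages, nr_huge_pages_node[MAX_NODES];
    long free_huge_pages, free_huge_pages_node[MAX_NODES];
    long surplus_huge_pages, surplus_huge_pages_node[MAX_NODES];
};

/* Toy stand-in for struct page: node id plus a flag mimicking HPageFreed(). */
struct page {
    int nid;
    bool freed;
};

/*
 * Models the accounting in remove_hugetlb_page(): the page leaves hugetlb
 * management, so the global and per-node totals always drop; the free
 * counters drop only if the page sat on a free list; and the surplus
 * counters drop only when the caller asks for it.
 */
static void remove_model(struct hstate *h, struct page *p, bool adjust_surplus)
{
    int nid = p->nid;

    if (p->freed) {
        h->free_huge_pages--;
        h->free_huge_pages_node[nid]--;
        p->freed = false;        /* ClearHPageFreed() */
    }
    if (adjust_surplus) {
        h->surplus_huge_pages--;
        h->surplus_huge_pages_node[nid]--;
    }
    h->nr_huge_pages--;
    h->nr_huge_pages_node[nid]--;
}
```

In other words, a page on the free list removed with adjust_surplus set leaves all three counter pairs one lower, which matches what the call sites previously open-coded.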
> --- > mm/hugetlb.c | 70 +++++++++++++++++++++++++++++++++------------------- > 1 file changed, 45 insertions(+), 25 deletions(-) > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > index 404b0b1c5258..3938ec086b5c 100644 > --- a/mm/hugetlb.c > +++ b/mm/hugetlb.c > @@ -1327,6 +1327,46 @@ static inline void destroy_compound_gigantic_page(struct page *page, > unsigned int order) { } > #endif > > +/* > + * Remove hugetlb page from lists, and update dtor so that page appears > + * as just a compound page. A reference is held on the page. > + * NOTE: hugetlb specific page flags stored in page->private are not > + * automatically cleared. These flags may be used in routines > + * which operate on the resulting compound page. > + * > + * Must be called with hugetlb lock held. > + */ > +static void remove_hugetlb_page(struct hstate *h, struct page *page, > + bool adjust_surplus) > +{ > + int nid = page_to_nid(page); > + > + if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) > + return; > + > + list_del(&page->lru); > + > + if (HPageFreed(page)) { > + h->free_huge_pages--; > + h->free_huge_pages_node[nid]--; > + ClearHPageFreed(page); > + } > + if (adjust_surplus) { > + h->surplus_huge_pages--; > + h->surplus_huge_pages_node[nid]--; > + } > + > + VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page); > + VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page); > + > + ClearHPageTemporary(page); > + set_page_refcounted(page); > + set_compound_page_dtor(page, NULL_COMPOUND_DTOR); > + > + h->nr_huge_pages--; > + h->nr_huge_pages_node[nid]--; > +} > + > static void update_and_free_page(struct hstate *h, struct page *page) > { > int i; > @@ -1335,8 +1375,6 @@ static void update_and_free_page(struct hstate *h, struct page *page) > if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) > return; > > - h->nr_huge_pages--; > - h->nr_huge_pages_node[page_to_nid(page)]--; > for (i = 0; i < pages_per_huge_page(h); > i++, subpage = mem_map_next(subpage, page, i)) { 
> subpage->flags &= ~(1 << PG_locked | 1 << PG_error | > @@ -1344,10 +1382,6 @@ static void update_and_free_page(struct hstate *h, struct page *page) > 1 << PG_active | 1 << PG_private | > 1 << PG_writeback); > } > - VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page); > - VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page); > - set_compound_page_dtor(page, NULL_COMPOUND_DTOR); > - set_page_refcounted(page); > if (hstate_is_gigantic(h)) { > destroy_compound_gigantic_page(page, huge_page_order(h)); > free_gigantic_page(page, huge_page_order(h)); > @@ -1415,15 +1449,12 @@ static void __free_huge_page(struct page *page) > h->resv_huge_pages++; > > if (HPageTemporary(page)) { > - list_del(&page->lru); > - ClearHPageTemporary(page); > + remove_hugetlb_page(h, page, false); > update_and_free_page(h, page); > } else if (h->surplus_huge_pages_node[nid]) { > /* remove the page from active list */ > - list_del(&page->lru); > + remove_hugetlb_page(h, page, true); > update_and_free_page(h, page); > - h->surplus_huge_pages--; > - h->surplus_huge_pages_node[nid]--; > } else { > arch_clear_hugepage_flags(page); > enqueue_huge_page(h, page); > @@ -1708,13 +1739,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, > struct page *page = > list_entry(h->hugepage_freelists[node].next, > struct page, lru); > - list_del(&page->lru); > - h->free_huge_pages--; > - h->free_huge_pages_node[node]--; > - if (acct_surplus) { > - h->surplus_huge_pages--; > - h->surplus_huge_pages_node[node]--; > - } > + remove_hugetlb_page(h, page, acct_surplus); > update_and_free_page(h, page); > ret = 1; > break; > @@ -1752,7 +1777,6 @@ int dissolve_free_huge_page(struct page *page) > if (!page_count(page)) { > struct page *head = compound_head(page); > struct hstate *h = page_hstate(head); > - int nid = page_to_nid(head); > if (h->free_huge_pages - h->resv_huge_pages == 0) > goto out; > > @@ -1783,9 +1807,7 @@ int dissolve_free_huge_page(struct page *page) > 
SetPageHWPoison(page); > ClearPageHWPoison(head); > } > - list_del(&head->lru); > - h->free_huge_pages--; > - h->free_huge_pages_node[nid]--; > + remove_hugetlb_page(h, page, false); > h->max_huge_pages--; > update_and_free_page(h, head); > rc = 0; > @@ -2553,10 +2575,8 @@ static void try_to_free_low(struct hstate *h, unsigned long count, > return; > if (PageHighMem(page)) > continue; > - list_del(&page->lru); > + remove_hugetlb_page(h, page, false); > update_and_free_page(h, page); > - h->free_huge_pages--; > - h->free_huge_pages_node[page_to_nid(page)]--; > } > } > } > -- > 2.30.2 >