From mboxrd@z Thu Jan 1 00:00:00 1970
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
	pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org,
	oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de,
	almasrymina@google.com, rientjes@google.com, willy@infradead.org,
	osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com,
	david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v20 5/9] mm: hugetlb: defer freeing of HugeTLB pages
Date: Thu, 15 Apr 2021 16:40:01 +0800
Message-Id: <20210415084005.25049-6-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>
References: <20210415084005.25049-1-songmuchun@bytedance.com>
MIME-Version: 1.0

In a subsequent patch, we will allocate vmemmap pages when freeing a
HugeTLB page. But update_and_free_page() can be called from any context,
so we cannot use GFP_KERNEL to allocate the vmemmap pages there. Instead,
defer the actual freeing to a kworker so that we do not have to use
GFP_ATOMIC to allocate the vmemmap pages.

__update_and_free_page() is where the call to allocate vmemmap pages
will be inserted.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 73 ++++++++++++++++++++++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.c | 12 --------
 mm/hugetlb_vmemmap.h | 17 ++++++++++++
 3 files changed, 85 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 923d05e2806b..eeb8f5480170 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1376,7 +1376,7 @@ static void remove_hugetlb_page(struct hstate *h, struct page *page,
 	h->nr_huge_pages_node[nid]--;
 }
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	int i;
 	struct page *subpage = page;
@@ -1399,12 +1399,73 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	}
 }
 
+/*
+ * Since update_and_free_page() can be called under any context, we cannot
+ * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
+ * actual freeing in a workqueue to avoid using GFP_ATOMIC to allocate
+ * the vmemmap pages.
+ *
+ * free_hpage_workfn() locklessly retrieves the linked list of pages to be
+ * freed and frees them one-by-one. As the page->mapping pointer is going
+ * to be cleared in free_hpage_workfn() anyway, it is reused as the llist_node
+ * structure of a lockless linked list of huge pages to be freed.
+ */
+static LLIST_HEAD(hpage_freelist);
+
+static void free_hpage_workfn(struct work_struct *work)
+{
+	struct llist_node *node;
+
+	node = llist_del_all(&hpage_freelist);
+
+	while (node) {
+		struct page *page;
+		struct hstate *h;
+
+		page = container_of((struct address_space **)node,
+				     struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		h = page_hstate(page);
+
+		__update_and_free_page(h, page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
+
+static inline void flush_free_hpage_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&free_hpage_work);
+}
+
+static void update_and_free_page(struct hstate *h, struct page *page,
+				 bool atomic)
+{
+	if (!free_vmemmap_pages_per_hpage(h) || !atomic) {
+		__update_and_free_page(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap pages.
+	 *
+	 * Only call schedule_work() if hpage_freelist was previously
+	 * empty. Otherwise, schedule_work() has already been called but the
+	 * workfn hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+		schedule_work(&free_hpage_work);
+}
+
 static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
 {
 	struct page *page, *t_page;
 
 	list_for_each_entry_safe(page, t_page, list, lru) {
-		update_and_free_page(h, page);
+		update_and_free_page(h, page, false);
 		cond_resched();
 	}
 }
@@ -1471,12 +1532,12 @@ void free_huge_page(struct page *page)
 	if (HPageTemporary(page)) {
 		remove_hugetlb_page(h, page, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page);
+		update_and_free_page(h, page, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		remove_hugetlb_page(h, page, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page);
+		update_and_free_page(h, page, true);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -1785,7 +1846,7 @@ int dissolve_free_huge_page(struct page *page)
 		remove_hugetlb_page(h, page, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);
-		update_and_free_page(h, head);
+		update_and_free_page(h, head, false);
 		return 0;
 	}
 out:
@@ -2627,6 +2688,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	 * pages in hstate via the proc/sysfs interfaces.
 	 */
 	mutex_lock(&h->resize_lock);
+	flush_free_hpage_work(h);
 	spin_lock_irq(&hugetlb_lock);
 
 	/*
@@ -2736,6 +2798,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	/* free the pages after dropping lock */
 	spin_unlock_irq(&hugetlb_lock);
 	update_and_free_pages_bulk(h, &page_list);
+	flush_free_hpage_work(h);
 	spin_lock_irq(&hugetlb_lock);
 
 	while (count < persistent_huge_pages(h)) {
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index e45a138a7f85..cb28c5b6c9ff 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -180,18 +180,6 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..01f8637adbe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,26 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0
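
For readers unfamiliar with the llist pattern the patch relies on: pages are
pushed onto a lock-free singly-linked list from any context (reusing
page->mapping as the llist_node), and a worker later detaches the whole list
with a single atomic exchange before freeing entries one by one with
GFP_KERNEL available. The sketch below is a minimal userspace analogue of
that pattern using C11 atomics; it is not kernel code and not part of the
patch, and every name in it (deferred_page, defer_free, drain_worker) is
hypothetical.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct deferred_page {
	struct deferred_page *next;	/* plays the role of the reused page->mapping */
	int id;				/* stand-in for the real page */
};

static _Atomic(struct deferred_page *) freelist;

/*
 * llist_add()-style push; returns true when the list was previously empty,
 * i.e. the caller is the one who should kick the worker (cf. schedule_work()).
 */
static bool defer_free(struct deferred_page *p)
{
	struct deferred_page *old = atomic_load(&freelist);

	do {
		p->next = old;
	} while (!atomic_compare_exchange_weak(&freelist, &old, p));

	return old == NULL;
}

/*
 * llist_del_all()-style drain: detach the whole list atomically, then free
 * the entries one by one at leisure.
 */
static void drain_worker(void)
{
	struct deferred_page *node = atomic_exchange(&freelist, NULL);

	while (node) {
		struct deferred_page *next = node->next;

		printf("freeing deferred page %d\n", node->id);
		free(node);
		node = next;
	}
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct deferred_page *p = malloc(sizeof(*p));

		p->id = i;
		if (defer_free(p))
			printf("list was empty: would schedule_work() here\n");
	}
	drain_worker();
	return 0;
}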