From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com,
    bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
    luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk,
    akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org,
    pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com,
    anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com,
    rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
    song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v10 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
Date: Thu, 17 Dec 2020 20:12:55 +0800
Message-Id: <20201217121303.13386-4-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>
References: <20201217121303.13386-1-songmuchun@bytedance.com>

Every HugeTLB has more than one struct page structure. We __know__ that
we only use the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page structures
to store metadata associated with each HugeTLB.

There are a lot of struct page structures associated with each HugeTLB
page. For tail pages, the value of compound_head is the same, so we can
reuse the first page of the tail page structures. We map the virtual
addresses of the remaining pages of the tail page structures to the first
tail page struct, and then free these page frames. Therefore, we need to
reserve two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy allocator, we can free
some vmemmap pages associated with it. It is more appropriate to do this
in prep_new_huge_page().

The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
pages associated with a HugeTLB page can be freed, returns zero for now,
which means the feature is disabled. We will enable it once all the
infrastructure is there.
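For illustration only (not part of this patch): a minimal user-space
sketch of the arithmetic behind reserving two vmemmap pages, assuming a
4KB base page and a 64-byte struct page as on x86-64. RESERVE_VMEMMAP_NR
mirrors the constant introduced in mm/hugetlb_vmemmap.c below; PAGE_SIZE
and STRUCT_PAGE_SIZE are hard-coded here just for the example, which
shows why only 2 of the 8 vmemmap pages of a 2MB HugeTLB page need to
stay mapped while 6 can be returned to the buddy allocator.

#include <stdio.h>

/* Assumed values for illustration; both hold on x86-64 with 4KB pages. */
#define PAGE_SIZE		4096UL
#define STRUCT_PAGE_SIZE	64UL	/* sizeof(struct page) */
#define RESERVE_VMEMMAP_NR	2UL	/* head page struct + first tail page struct */

/* struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE */
static unsigned long vmemmap_pages(unsigned long hugepage_size)
{
	return hugepage_size / PAGE_SIZE * STRUCT_PAGE_SIZE / PAGE_SIZE;
}

int main(void)
{
	unsigned long sizes[] = { 2UL << 20, 1UL << 30 };	/* 2MB and 1GB */

	for (int i = 0; i < 2; i++) {
		unsigned long total = vmemmap_pages(sizes[i]);
		unsigned long freed = total - RESERVE_VMEMMAP_NR;

		printf("%4luMB HugeTLB page: %lu vmemmap pages, %lu freeable (%lu KB)\n",
		       sizes[i] >> 20, total, freed, freed * PAGE_SIZE >> 10);
	}
	return 0;
}

For a 2MB HugeTLB page this prints 8 vmemmap pages with 6 freeable; for a
1GB page it prints 4096 vmemmap pages with all but the reserved 2 freeable,
matching the PAGE_SIZE - 2 figure in the comment added by this patch.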
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h |  27 +++++-
 include/linux/mm.h           |   2 +
 mm/Makefile                  |   1 +
 mm/hugetlb.c                 |   3 +
 mm/hugetlb_vmemmap.c         | 207 ++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h         |  20 +++++
 mm/sparse-vmemmap.c          | 177 ++++++++++++++++++++++++++++++++++++
 7 files changed, 436 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..4c80b7be1771 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_BOOTMEM_INFO_H
 #define __LINUX_BOOTMEM_INFO_H
 
-#include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be marked reserved already in the memmap. For those
+ * pages, we can call this function to free it to buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+	unsigned long magic = (unsigned long)page->freelist;
+
+	/*
+	 * The reserve_bootmem_region sets the reserved flag on bootmem
+	 * pages.
+	 */
+	VM_WARN_ON(page_ref_count(page) != 2);
+
+	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+		put_page_bootmem(page);
+	else
+		VM_WARN_ON(1);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
@@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
 			       unsigned long type)
 {
 }
+
+static inline void free_bootmem_page(struct page *page)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eabe7d9f80d8..0ecad1a41190 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3005,6 +3005,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
+void vmemmap_remap_free(unsigned long start, unsigned long size);
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
diff --git a/mm/Makefile b/mm/Makefile
index ed4b88fa0f5e..056801d8daae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f3bf1710b66..140135fc8113 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..5cf7b6122c86
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and are supported
+ * by many architectures. See hugetlbpage.rst in the Documentation directory
+ * for more details. On the x86-64 architecture, HugeTLB pages of size 2MB and
+ * 1GB are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 262144 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be
+ * returned to the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table is the HugeTLB page size supported by x86 and arm64
+ * architectures. Because arm64 supports 4k, 16k, and 64k base pages and
+ * supports contiguous entries, it supports many kinds of sizes of HugeTLB
+ * page.
+ *
+ * +--------------+-----------+------------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size               |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |    32MB   |    1GB    |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |   512MB   |    16GB   |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boots up, every HugeTLB page has more than one struct page
+ * struct. The size of the struct page structs of a HugeTLB page is
+ * (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
+ * relationship.
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mapping at the pud/pmd level for the HugeTLB page.
+ *
+ * For the HugeTLB page of the pmd level mapping, then
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * Where n is how many pte entries one page can contain. So the value of
+ * n is (PAGE_SIZE / sizeof(pte_t)).
+ *
+ * This optimization only supports 64-bit systems, so the value of sizeof(pte_t)
+ * is 8. And this optimization is applicable only when the size of struct page
+ * is a power of two. In most cases, the size of struct page is 64 bytes (e.g.
+ * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the
+ * size of the struct page structs of it is 8 pages, whose size depends on the
+ * size of the base page.
+ *
+ * For the HugeTLB page of the pud level mapping, then
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * Where the struct_size(pmd) is the size of the struct page structs of a
+ * HugeTLB page of the pmd level mapping.
+ *
+ * Next, we take the pmd level mapping of the HugeTLB page as an example to
+ * show the internal implementation of this optimization. There are 8 pages of
+ * struct page structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the
+ * 4 page structs necessary to describe the HugeTLB. The only use of the
+ * remaining pages of page structs (page 1 to page 7) is to point to
+ * page->compound_head. Therefore, we can remap pages 2 to 7 to page 1. Only 2
+ * pages of page structs will be used for each HugeTLB page. This will allow
+ * us to free the remaining 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * For the HugeTLB page of the pud level mapping, it is similar to the former.
+ * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
+ * (e.g. aarch64) provide a contiguous bit in the translation table entries
+ * that hints to the MMU to indicate that it is one of a contiguous set of
+ * entries that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when the
+ * size of its struct page structs is greater than 2 pages.
+ */
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB page.
+ * For tail pages, the value of compound_head is the same. So we can reuse the
+ * first page of tail page structures. We map the virtual addresses of the
+ * remaining pages of tail page structures to the first tail page struct, and
+ * then free these page frames. Therefore, we need to reserve two pages as
+ * vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_remap_free(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
+			   free_vmemmap_pages_size_per_hpage(h));
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 16183d85a7d5..6cf2fdfb81e9 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,8 +27,185 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/*
+ * vmemmap_remap_walk - walk vmemmap page table
+ *
+ * @remap_pte:		called for each non-empty PTE (lowest-level) entry.
+ * @reuse_page:		the page which is reused for the tail vmemmap pages.
+ * @reuse_addr:		the virtual address of the @reuse_page page.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ */
+struct vmemmap_remap_walk {
+	void (*remap_pte)(pte_t *pte, unsigned long addr,
+			  struct vmemmap_remap_walk *walk);
+	struct page *reuse_page;
+	unsigned long reuse_addr;
+	struct list_head *vmemmap_pages;
+};
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+
+	if (walk->reuse_addr == addr) {
+		BUG_ON(pte_none(*pte));
+		walk->reuse_page = pte_page(*pte++);
+		addr += PAGE_SIZE;
+	}
+
+	for (; addr != end; addr += PAGE_SIZE, pte++) {
+		BUG_ON(pte_none(*pte));
+
+		walk->remap_pte(pte, addr, walk);
+	}
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		BUG_ON(pmd_none(*pmd));
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		BUG_ON(pud_none(*pud));
+
+		next = pud_addr_end(addr, end);
+		vmemmap_pmd_range(pud, addr, next, walk);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		BUG_ON(p4d_none(*p4d));
+
+		next = p4d_addr_end(addr, end);
+		vmemmap_pud_range(p4d, addr, next, walk);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct vmemmap_remap_walk *walk)
+{
+	unsigned long addr = start - PAGE_SIZE;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+	walk->reuse_page = NULL;
+	walk->reuse_addr = addr;
+
+	pgd = pgd_offset_k(addr);
+	do {
+		BUG_ON(pgd_none(*pgd));
+
+		next = pgd_addr_end(addr, end);
+		vmemmap_p4d_range(pgd, addr, next, walk);
+	} while (pgd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or buddy allocator. If the PG_reserved flag is set, it means
+ * that it was allocated from the memblock allocator, so free it via
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+			      struct vmemmap_remap_walk *walk)
+{
+	/*
+	 * Make the tail pages mapped read-only to catch illegal write
+	 * operations to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse_page, pgprot);
+	struct page *page;
+
+	page = pte_page(*pte);
+	list_add(&page->lru, walk->vmemmap_pages);
+
+	set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range
+ *                      [start, start + size) to the page which
+ *                      [start - PAGE_SIZE, start) is mapped,
+ *                      then free vmemmap pages.
+ * @start:	start address of the vmemmap virtual address range
+ * @size:	size of the vmemmap virtual address range
+ */
+void vmemmap_remap_free(unsigned long start, unsigned long size)
+{
+	unsigned long end = start + size;
+	LIST_HEAD(vmemmap_pages);
+
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_remap_pte,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	vmemmap_remap_range(start, end, &walk);
+	free_vmemmap_page_list(&vmemmap_pages);
+}
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
-- 
2.11.0
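
Not part of the patch: for readers unfamiliar with the remapping trick
used above, here is a rough user-space analogy, assuming a Linux system
with memfd_create() available. It maps one page frame at several
consecutive virtual addresses, which is the same idea the series applies
to the tail vmemmap pages before handing the spare frames back to the
buddy allocator. Error handling is omitted for brevity.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	int n = 6;				/* like the 6 freeable tail vmemmap pages */
	int fd = memfd_create("reuse", 0);	/* the single "reuse" page frame */

	ftruncate(fd, psize);
	pwrite(fd, "compound_head", 14, 0);

	/* Reserve a contiguous virtual range ... */
	char *area = mmap(NULL, n * psize, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* ... and back every page of it with the same frame. */
	for (int i = 0; i < n; i++)
		mmap(area + i * psize, psize, PROT_READ,
		     MAP_SHARED | MAP_FIXED, fd, 0);

	/* All n virtual pages now read the same data from one frame. */
	printf("%s / %s\n", area, area + (n - 1) * psize);
	return 0;
}

The mapping is read-only, mirroring the PAGE_KERNEL_RO protection used in
vmemmap_remap_pte() to catch stray writes to the remapped tail pages.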