From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song, Miaohe Lin
Subject: [PATCH v17 8/9] mm: hugetlb: gather discrete indexes of tail page
Date: Thu, 25 Feb 2021 21:21:29 +0800
Message-Id: <20210225132130.26451-9-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210225132130.26451-1-songmuchun@bytedance.com>
References: <20210225132130.26451-1-songmuchun@bytedance.com>

A HugeTLB page carries more metadata than the head struct page can hold,
so we have to abuse other tail struct pages to store it. To avoid
conflicts caused by subsequent use of more tail struct pages, gather
these discrete tail-page indexes into one enum. This also makes it
easier to add a new tail page index later.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page
structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is
enabled, so add a BUILD_BUG_ON to catch invalid usage of the tail
struct pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Miaohe Lin
---
 include/linux/hugetlb.h        | 24 ++++++++++++++++++++++--
 include/linux/hugetlb_cgroup.h | 19 +++++++++++--------
 mm/hugetlb.c                   |  6 +++---
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 44 insertions(+), 13 deletions(-)
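[Not part of the patch: the "named tail-page index" pattern described
above is small enough to demonstrate outside the kernel. Below is a
minimal user-space sketch; the struct page, page_private() and
set_page_private() definitions are simplified stand-ins for the
kernel's, and the 0xdead cookie is made up. Only the enum layout
mirrors this patch.]

        #include <stdio.h>

        struct page {
                unsigned long private;          /* stand-in for page->private */
        };

        enum {
                SUBPAGE_INDEX_SUBPOOL = 1,      /* previously hpage + 1 */
                SUBPAGE_INDEX_CGROUP,           /* previously hpage[2] */
                SUBPAGE_INDEX_CGROUP_RSVD,      /* previously hpage[3] */
                SUBPAGE_INDEX_HWPOISON,         /* previously head + 4 */
                __NR_USED_SUBPAGE,
        };

        static void set_page_private(struct page *page, unsigned long private)
        {
                page->private = private;
        }

        static unsigned long page_private(struct page *page)
        {
                return page->private;
        }

        int main(void)
        {
                /* Head page followed by a few tail pages of a compound page. */
                struct page hpage[8] = { { 0 } };

                /* Stash a (made up) subpool cookie in its named slot. */
                set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, 0xdead);
                printf("subpool slot: %#lx\n",
                       page_private(hpage + SUBPAGE_INDEX_SUBPOOL));
                printf("tail pages in use: %d\n", __NR_USED_SUBPAGE - 1);

                return 0;
        }

Compared with bare offsets, adding a new consumer only means adding an
enum entry, and every reader of the slot picks up the change
automatically.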
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a4d80f7263fc..c70421e26189 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,26 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include
 #include
 
+/*
+ * For HugeTLB page, there is more metadata to save in the struct page. But
+ * the head struct page cannot meet our needs, so we have to abuse other tail
+ * struct pages to store the metadata. In order to avoid conflicts caused by
+ * subsequent use of more tail struct pages, we gather these discrete indexes
+ * of tail struct page here.
+ */
+enum {
+	SUBPAGE_INDEX_SUBPOOL = 1,	/* reuse page->private */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP,		/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+	__MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD,
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	__NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
@@ -607,13 +627,13 @@ extern unsigned int default_hstate_idx;
  */
 static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
 {
-	return (struct hugepage_subpool *)(hpage+1)->private;
+	return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL);
 }
 
 static inline void hugetlb_set_page_subpool(struct page *hpage,
 					struct hugepage_subpool *subpool)
 {
-	set_page_private(hpage+1, (unsigned long)subpool);
+	set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, (unsigned long)subpool);
 }
 
 static inline struct hstate *hstate_file(struct file *f)
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..54ec689e3c9c 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -21,15 +21,16 @@ struct hugetlb_cgroup;
 struct resv_map;
 struct file_region;
 
+#ifdef CONFIG_CGROUP_HUGETLB
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
-#define HUGETLB_CGROUP_MIN_ORDER 2
+#define HUGETLB_CGROUP_MIN_ORDER	order_base_2(__MAX_CGROUP_SUBPAGE_INDEX + 1)
 
-#ifdef CONFIG_CGROUP_HUGETLB
 enum hugetlb_memory_event {
 	HUGETLB_MAX,
 	HUGETLB_NR_MEMORY_EVENTS,
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4d192ba183f9..31518b39f18d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1312,7 +1312,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1331,7 +1331,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1347,7 +1347,7 @@ static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	set_page_private(head + 4, 0);
+	set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0);
 }
 #else
 static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b65f0d5189bd..33e42678abe3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -257,6 +257,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP,
+	 * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
+	 */
+	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
-- 
2.11.0
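[Also not part of the patch: a back-of-the-envelope, compile-time check
of the arithmetic behind the new BUILD_BUG_ON, written for user space.
The 4 KiB page size, 64-byte struct page and two reserved vmemmap pages
are illustrative assumptions, not values quoted from this series.]

        #include <assert.h>

        #define PAGE_SHIFT              12      /* assumed 4 KiB pages */
        #define STRUCT_PAGE_SIZE        64UL    /* assumed sizeof(struct page) */
        #define RESERVE_VMEMMAP_NR      2UL     /* assumed reserved vmemmap pages */
        #define RESERVE_VMEMMAP_SIZE    (RESERVE_VMEMMAP_NR << PAGE_SHIFT)

        #define __NR_USED_SUBPAGE       5       /* value of the enum in this patch */

        /*
         * Two reserved vmemmap pages cover 2 * 4096 / 64 = 128 struct pages,
         * so tail indexes up to 127 stay addressable after the remaining
         * vmemmap pages are freed; 5 used subpages is comfortably within that.
         */
        static_assert(__NR_USED_SUBPAGE < RESERVE_VMEMMAP_SIZE / STRUCT_PAGE_SIZE,
                      "tail struct page index outside the reserved vmemmap range");

        int main(void)
        {
                return 0;
        }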