From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
    song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 10/11] mm/hugetlb: Gather discrete indexes of tail page
Date: Tue, 22 Dec 2020 22:24:39 +0800
Message-Id: <20201222142440.28930-11-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>
References: <20201222142440.28930-1-songmuchun@bytedance.com>

For a HugeTLB page, there is more metadata to save in the struct page
than the head struct page alone can hold, so we have to abuse other
tail struct pages to store it. To avoid conflicts caused by subsequent
use of more tail struct pages, gather these discrete tail-page indexes
into one enum; that way it is easier to add a new tail page index
later.

Only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page structs
can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add a
BUILD_BUG_ON() to catch invalid usage of the tail struct pages.
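As a rough stand-alone illustration of the pattern (a minimal C sketch
with invented names: demo_page, demo_read and the INDEX_* values are
not from this patch, and the layout only assumes a head-plus-tails
page array like HugeTLB's), indexing tail pages by name instead of by
magic number looks like this:

	/* Hypothetical sketch; build with: cc -std=c11 demo.c */
	#include <assert.h>
	#include <stdio.h>

	enum subpage_index {		/* mirrors the shape of the patch's enum */
		INDEX_ACTIVE = 1,	/* lives in the 1st tail page */
		INDEX_TEMPORARY,	/* lives in the 2nd tail page */
		NR_USED_SUBPAGE,	/* one past the highest index in use */
	};

	struct demo_page {		/* stand-in for struct page */
		unsigned long private;
	};

	static unsigned long demo_read(const struct demo_page *head,
				       enum subpage_index idx)
	{
		/* Runtime stand-in for the patch's compile-time BUILD_BUG_ON(). */
		assert(idx < NR_USED_SUBPAGE);
		return head[idx].private;
	}

	int main(void)
	{
		struct demo_page pages[4] = { {0}, {42}, {7}, {0} };

		/* head[1].private, but self-documenting: */
		printf("%lu\n", demo_read(pages, INDEX_ACTIVE));
		return 0;
	}

Keeping NR_USED_SUBPAGE as the final enumerator means it always tracks
one past the highest index in use, which is exactly the quantity the
new BUILD_BUG_ON() compares against the number of remaining struct
pages.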
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 35 +++++++++++++++++++++++++++--------
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 57 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..7295f6b3d55e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER 2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f19e841236d7..2764af5fa0b3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1356,6 +1356,7 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
 		schedule_work(&hpage_update_work);
 }
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 {
 	struct page *page;
@@ -1363,7 +1364,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1382,7 +1383,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1392,6 +1393,24 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		ClearPageHWPoison(head);
 	}
 }
+#else
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (PageHWPoison(head) && page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages rather than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+#endif
 
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
@@ -1459,20 +1478,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1484,17 +1503,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7dcb4aa1e512..6b8f7bb2273e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -242,6 +242,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP,
+	 * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
+	 */
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
-- 
2.11.0
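
As a sanity check on the new bound (a stand-alone sketch under stated
assumptions, none of which appear in this patch itself: a
RESERVE_VMEMMAP_SIZE of 2 * PAGE_SIZE as used elsewhere in this
series, 4 KiB pages, a 64-byte struct page, and NR_USED_SUBPAGE equal
to 5 with all the config options above enabled):

	/* Hypothetical mirror of the patch's BUILD_BUG_ON(); cc -std=c11 check.c */
	#include <assert.h>

	#define DEMO_PAGE_SIZE		4096UL			/* assumed 4 KiB pages */
	#define DEMO_RESERVE_VMEMMAP	(2UL * DEMO_PAGE_SIZE)	/* assumed, per this series */
	#define DEMO_STRUCT_PAGE_SIZE	64UL			/* typical sizeof(struct page) */
	#define DEMO_NR_USED_SUBPAGE	5UL			/* NR_USED_SUBPAGE, all configs on */

	/* 8192 / 64 = 128 usable struct pages, so an index count of 5 is far
	 * below the limit; the check only fires if that ever stops being true. */
	static_assert(DEMO_NR_USED_SUBPAGE < DEMO_RESERVE_VMEMMAP / DEMO_STRUCT_PAGE_SIZE,
		      "tail struct pages exhausted");

	int main(void) { return 0; }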