From: Kiryl Shutsemau <kas@kernel.org>
To: Andrew Morton, Muchun Song, David Hildenbrand, Matthew Wilcox,
	Usama Arif, Frank van der Linden
Cc: Oscar Salvador, Mike Rapoport, Vlastimil Babka, Lorenzo Stoakes,
	Zi Yan, Baoquan He, Michal Hocko, Johannes Weiner, Jonathan Corbet,
	kernel-team@meta.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, Kiryl Shutsemau
Subject: [PATCHv2 04/14] mm: Rename the 'compound_head' field in the 'struct page' to 'compound_info'
Date: Thu, 18 Dec 2025 15:09:35 +0000
Message-ID: <20251218150949.721480-5-kas@kernel.org>
X-Mailer: git-send-email 2.51.2
In-Reply-To: <20251218150949.721480-1-kas@kernel.org>
References: <20251218150949.721480-1-kas@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The 'compound_head' field in the 'struct page' encodes whether the page
is a tail and where to locate the head page. Bit 0 is set if the page is
a tail, and the remaining bits in the field point to the head page.

As preparation for changing how the field encodes information about the
head page, rename the field to 'compound_info'.

Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
---
 .../admin-guide/kdump/vmcoreinfo.rst |  2 +-
 Documentation/mm/vmemmap_dedup.rst   |  6 +++---
 include/linux/mm_types.h             | 20 +++++++++----------
 include/linux/page-flags.h           | 18 ++++++++---------
 include/linux/types.h                |  2 +-
 kernel/vmcore_info.c                 |  2 +-
 mm/page_alloc.c                      |  2 +-
 mm/slab.h                            |  2 +-
 mm/util.c                            |  2 +-
 9 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index 404a15f6782c..7663c610fe90 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -141,7 +141,7 @@ nodemask_t
 The size of a nodemask_t type. Used to compute the number of online
 nodes.
 
-(page, flags|_refcount|mapping|lru|_mapcount|private|compound_order|compound_head)
+(page, flags|_refcount|mapping|lru|_mapcount|private|compound_order|compound_info)
 ----------------------------------------------------------------------------------
 
 User-space tools compute their values based on the offset of these
diff --git a/Documentation/mm/vmemmap_dedup.rst b/Documentation/mm/vmemmap_dedup.rst
index b4a55b6569fa..1863d88d2dcb 100644
--- a/Documentation/mm/vmemmap_dedup.rst
+++ b/Documentation/mm/vmemmap_dedup.rst
@@ -24,7 +24,7 @@ For each base page, there is a corresponding ``struct page``.
 Within the HugeTLB subsystem, only the first 4 ``struct page`` are used to
 contain unique information about a HugeTLB page. ``__NR_USED_SUBPAGE`` provides
 this upper limit. The only 'useful' information in the remaining ``struct page``
-is the compound_head field, and this field is the same for all tail pages.
+is the compound_info field, and this field is the same for all tail pages.
 
 By removing redundant ``struct page`` for HugeTLB pages, memory can be returned
 to the buddy allocator for other uses.
@@ -124,10 +124,10 @@ Here is how things look before optimization::
  |           |
  +-----------+
 
-The value of page->compound_head is the same for all tail pages. The first
+The value of page->compound_info is the same for all tail pages. The first
 page of ``struct page`` (page 0) associated with the HugeTLB page contains the 4
 ``struct page`` necessary to describe the HugeTLB. The only use of the remaining
-pages of ``struct page`` (page 1 to page 7) is to point to page->compound_head.
+pages of ``struct page`` (page 1 to page 7) is to point to page->compound_info.
 Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of ``struct page``
 will be used for each HugeTLB page. This will allow us to free the remaining
 7 pages to the buddy allocator.
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 90e5790c318f..a94683272869 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,14 +125,14 @@ struct page {
 			atomic_long_t pp_ref_count;
 		};
 		struct {	/* Tail pages of compound page */
-			unsigned long compound_head;	/* Bit zero is set */
+			unsigned long compound_info;	/* Bit zero is set */
 		};
 		struct {	/* ZONE_DEVICE pages */
 			/*
-			 * The first word is used for compound_head or folio
+			 * The first word is used for compound_info or folio
 			 * pgmap
 			 */
-			void *_unused_pgmap_compound_head;
+			void *_unused_pgmap_compound_info;
 			void *zone_device_data;
 			/*
 			 * ZONE_DEVICE private pages are counted as being
@@ -383,7 +383,7 @@ struct folio {
 			/* private: avoid cluttering the output */
 			/* For the Unevictable "LRU list" slot */
 			struct {
-				/* Avoid compound_head */
+				/* Avoid compound_info */
 				void *__filler;
 				/* public: */
 				unsigned int mlock_count;
@@ -484,7 +484,7 @@ struct folio {
 FOLIO_MATCH(flags, flags);
 FOLIO_MATCH(lru, lru);
 FOLIO_MATCH(mapping, mapping);
-FOLIO_MATCH(compound_head, lru);
+FOLIO_MATCH(compound_info, lru);
 FOLIO_MATCH(__folio_index, index);
 FOLIO_MATCH(private, private);
 FOLIO_MATCH(_mapcount, _mapcount);
@@ -503,7 +503,7 @@ FOLIO_MATCH(_last_cpupid, _last_cpupid);
 	static_assert(offsetof(struct folio, fl) ==			\
 			offsetof(struct page, pg) + sizeof(struct page))
 FOLIO_MATCH(flags, _flags_1);
-FOLIO_MATCH(compound_head, _head_1);
+FOLIO_MATCH(compound_info, _head_1);
 FOLIO_MATCH(_mapcount, _mapcount_1);
 FOLIO_MATCH(_refcount, _refcount_1);
 #undef FOLIO_MATCH
@@ -511,13 +511,13 @@ FOLIO_MATCH(_refcount, _refcount_1);
 	static_assert(offsetof(struct folio, fl) ==			\
 			offsetof(struct page, pg) + 2 * sizeof(struct page))
 FOLIO_MATCH(flags, _flags_2);
-FOLIO_MATCH(compound_head, _head_2);
+FOLIO_MATCH(compound_info, _head_2);
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl)						\
 	static_assert(offsetof(struct folio, fl) ==			\
 			offsetof(struct page, pg) + 3 * sizeof(struct page))
 FOLIO_MATCH(flags, _flags_3);
-FOLIO_MATCH(compound_head, _head_3);
+FOLIO_MATCH(compound_info, _head_3);
 #undef FOLIO_MATCH
 
 /**
@@ -583,8 +583,8 @@ struct ptdesc {
 #define TABLE_MATCH(pg, pt)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt))
 TABLE_MATCH(flags, pt_flags);
-TABLE_MATCH(compound_head, pt_list);
-TABLE_MATCH(compound_head, _pt_pad_1);
+TABLE_MATCH(compound_info, pt_list);
+TABLE_MATCH(compound_info, _pt_pad_1);
 TABLE_MATCH(mapping, __page_mapping);
 TABLE_MATCH(__folio_index, pt_index);
 TABLE_MATCH(rcu_head, pt_rcu_head);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index d4952573a4af..72c933a43b6a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -213,7 +213,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	/*
 	 * Only addresses aligned with PAGE_SIZE of struct page may be fake head
 	 * struct page. The alignment check aims to avoid access the fields (
-	 * e.g. compound_head) of the @page[1]. It can avoid touch a (possibly)
+	 * e.g. compound_info) of the @page[1]. It can avoid touch a (possibly)
 	 * cold cacheline in some cases.
 	 */
 	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
@@ -223,7 +223,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 		 * because the @page is a compound page composed with at least
 		 * two contiguous pages.
 		 */
-		unsigned long head = READ_ONCE(page[1].compound_head);
+		unsigned long head = READ_ONCE(page[1].compound_info);
 
 		if (likely(head & 1))
 			return (const struct page *)(head - 1);
@@ -281,7 +281,7 @@ static __always_inline int page_is_fake_head(const struct page *page)
 
 static __always_inline unsigned long _compound_head(const struct page *page)
 {
-	unsigned long head = READ_ONCE(page->compound_head);
+	unsigned long head = READ_ONCE(page->compound_info);
 
 	if (unlikely(head & 1))
 		return head - 1;
@@ -320,13 +320,13 @@ static __always_inline unsigned long _compound_head(const struct page *page)
 
 static __always_inline int PageTail(const struct page *page)
 {
-	return READ_ONCE(page->compound_head) & 1 || page_is_fake_head(page);
+	return READ_ONCE(page->compound_info) & 1 || page_is_fake_head(page);
 }
 
 static __always_inline int PageCompound(const struct page *page)
 {
 	return test_bit(PG_head, &page->flags.f) ||
-		READ_ONCE(page->compound_head) & 1;
+		READ_ONCE(page->compound_info) & 1;
 }
 
 #define PAGE_POISON_PATTERN	-1l
@@ -348,7 +348,7 @@ static const unsigned long *const_folio_flags(const struct folio *folio,
 {
 	const struct page *page = &folio->page;
 
-	VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+	VM_BUG_ON_PGFLAGS(page->compound_info & 1, page);
 	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
 	return &page[n].flags.f;
 }
@@ -357,7 +357,7 @@ static unsigned long *folio_flags(struct folio *folio, unsigned n)
 {
 	struct page *page = &folio->page;
 
-	VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+	VM_BUG_ON_PGFLAGS(page->compound_info & 1, page);
 	VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags.f), page);
 	return &page[n].flags.f;
 }
@@ -869,12 +869,12 @@ static __always_inline void set_compound_head(struct page *page,
 						const struct page *head,
 						unsigned int order)
 {
-	WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
+	WRITE_ONCE(page->compound_info, (unsigned long)head + 1);
 }
 
 static __always_inline void clear_compound_head(struct page *page)
 {
-	WRITE_ONCE(page->compound_head, 0);
+	WRITE_ONCE(page->compound_info, 0);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
diff --git a/include/linux/types.h b/include/linux/types.h
index 6dfdb8e8e4c3..3a65f0ef4a73 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -234,7 +234,7 @@ struct ustat {
  *
  * This guarantee is important for few reasons:
  *  - future call_rcu_lazy() will make use of lower bits in the pointer;
- *  - the structure shares storage space in struct page with @compound_head,
+ *  - the structure shares storage space in struct page with @compound_info,
  *    which encode PageTail() in bit 0. The guarantee is needed to avoid
  *    false-positive PageTail().
 */
diff --git a/kernel/vmcore_info.c b/kernel/vmcore_info.c
index e066d31d08f8..782bc2050a40 100644
--- a/kernel/vmcore_info.c
+++ b/kernel/vmcore_info.c
@@ -175,7 +175,7 @@ static int __init crash_save_vmcoreinfo_init(void)
 	VMCOREINFO_OFFSET(page, lru);
 	VMCOREINFO_OFFSET(page, _mapcount);
 	VMCOREINFO_OFFSET(page, private);
-	VMCOREINFO_OFFSET(page, compound_head);
+	VMCOREINFO_OFFSET(page, compound_info);
 	VMCOREINFO_OFFSET(pglist_data, node_zones);
 	VMCOREINFO_OFFSET(pglist_data, nr_zones);
 #ifdef CONFIG_FLATMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fe77c00c99df..cecd6d89ff60 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -704,7 +704,7 @@ static inline bool pcp_allowed_order(unsigned int order)
  * The first PAGE_SIZE page is called the "head page" and have PG_head set.
  *
  * The remaining PAGE_SIZE pages are called "tail pages". PageTail() is encoded
- * in bit 0 of page->compound_head. The rest of bits is pointer to head page.
+ * in bit 0 of page->compound_info. The rest of bits is pointer to head page.
  *
  * The first tail page's ->compound_order holds the order of allocation.
  * This usage means that zero-order pages may not be compound.
diff --git a/mm/slab.h b/mm/slab.h
index 078daecc7cf5..b471877af296 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -104,7 +104,7 @@ struct slab {
 #define SLAB_MATCH(pg, sl)						\
 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
 SLAB_MATCH(flags, flags);
-SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
+SLAB_MATCH(compound_info, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
 #ifdef CONFIG_MEMCG
 SLAB_MATCH(memcg_data, obj_exts);
diff --git a/mm/util.c b/mm/util.c
index 8989d5767528..cbf93cf3223a 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1244,7 +1244,7 @@ void snapshot_page(struct page_snapshot *ps, const struct page *page)
 again:
 	memset(&ps->folio_snapshot, 0, sizeof(struct folio));
 	memcpy(&ps->page_snapshot, page, sizeof(*page));
-	head = ps->page_snapshot.compound_head;
+	head = ps->page_snapshot.compound_info;
 	if ((head & 1) == 0) {
 		ps->idx = 0;
 		foliop = (struct folio *)&ps->page_snapshot;
-- 
2.51.2
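
[Not part of the patch: a minimal, self-contained user-space sketch of
the bit-0 encoding the commit message describes. The 'struct page' below
is a stand-in type, the helpers only mirror set_compound_head() and
_compound_head() from the diff (the 'order' argument is dropped), and the
sketch relies on the same assumption the kernel does: head pages are at
least 2-byte aligned, so bit 0 of their address is free to mark tails.]

#include <assert.h>
#include <stdio.h>

struct page { unsigned long compound_info; };

/* Tail page: store the head pointer with bit 0 set. */
static void set_compound_head(struct page *tail, const struct page *head)
{
	tail->compound_info = (unsigned long)head + 1;
}

/* Any page: bit 0 set means "tail"; strip the bit to reach the head. */
static const struct page *compound_head(const struct page *page)
{
	unsigned long info = page->compound_info;

	if (info & 1)
		return (const struct page *)(info - 1);
	return page;	/* bit 0 clear: not a tail page */
}

int main(void)
{
	struct page pages[4] = { { 0 } };

	/* pages[1..3] become tails of pages[0]. */
	for (int i = 1; i < 4; i++)
		set_compound_head(&pages[i], &pages[0]);

	assert(compound_head(&pages[3]) == &pages[0]);
	assert(compound_head(&pages[0]) == &pages[0]);
	printf("tail pages resolve to head %p\n", (void *)&pages[0]);
	return 0;
}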