From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>,
Muchun Song <muchun.song@linux.dev>, <linux-mm@kvack.org>
Cc: <sidhartha.kumar@oracle.com>, <jane.chu@oracle.com>,
Zi Yan <ziy@nvidia.com>, Vlastimil Babka <vbabka@suse.cz>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Matthew Wilcox <willy@infradead.org>,
Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v5 2/6] mm: page_alloc: add __split_page()
Date: Tue, 30 Dec 2025 15:24:18 +0800
Message-ID: <20251230072422.265265-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20251230072422.265265-1-wangkefeng.wang@huawei.com>

Factor out the splitting of a non-compound page from make_alloc_exact()
and split_page() into a new helper function, __split_page().

While at it, convert the VM_BUG_ON_PAGE() checks into VM_WARN_ON_PAGE().
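
For context, a minimal usage sketch of the path being refactored follows
(illustrative only, not part of this patch; the order-2 allocation and the
particular sub-page freed are arbitrary). split_page() turns a non-compound
high-order allocation into independently refcounted order-0 pages, and the
new __split_page() carries the page_owner, alloc-tag and memcg bookkeeping
that split_page() shares with make_alloc_exact():

	/*
	 * Hypothetical caller, for illustration only: allocate a
	 * non-compound order-2 block, split it into four order-0
	 * pages, then free one of them on its own.
	 */
	struct page *page = alloc_pages(GFP_KERNEL, 2);	/* no __GFP_COMP */

	if (page) {
		split_page(page, 2);	/* each sub-page now holds its own reference */
		__free_page(page + 3);	/* sub-pages can be freed independently */
	}

Note that, like the existing VM_WARN_ON_ONCE_PAGE(), the new
VM_WARN_ON_PAGE() evaluates to the tested condition, so a caller could also
branch on it, e.g. "if (VM_WARN_ON_PAGE(PageCompound(page), page)) return;"
(again hypothetical; this patch only uses it as a statement).
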
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
include/linux/mmdebug.h | 10 ++++++++++
mm/page_alloc.c | 21 +++++++++++++--------
2 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 14a45979cccc..ab60ffba08f5 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -47,6 +47,15 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
BUG(); \
} \
} while (0)
+#define VM_WARN_ON_PAGE(cond, page) ({ \
+ int __ret_warn = !!(cond); \
+ \
+ if (unlikely(__ret_warn)) { \
+ dump_page(page, "VM_WARN_ON_PAGE(" __stringify(cond)")");\
+ WARN_ON(1); \
+ } \
+ unlikely(__ret_warn); \
+})
#define VM_WARN_ON_ONCE_PAGE(cond, page) ({ \
static bool __section(".data..once") __warned; \
int __ret_warn_once = !!(cond); \
@@ -122,6 +131,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
#define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
#define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_ONCE_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_ONCE_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 206397ed33a7..b9bfbb69537e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3080,6 +3080,15 @@ void free_unref_folios(struct folio_batch *folios)
folio_batch_reinit(folios);
}
+static void __split_page(struct page *page, unsigned int order)
+{
+ VM_WARN_ON_PAGE(PageCompound(page), page);
+
+ split_page_owner(page, order, 0);
+ pgalloc_tag_split(page_folio(page), order, 0);
+ split_page_memcg(page, order);
+}
+
/*
* split_page takes a non-compound higher-order page, and splits it into
* n (1<<order) sub-pages: page[0..n]
@@ -3092,14 +3101,12 @@ void split_page(struct page *page, unsigned int order)
{
int i;
- VM_BUG_ON_PAGE(PageCompound(page), page);
- VM_BUG_ON_PAGE(!page_count(page), page);
+ VM_WARN_ON_PAGE(!page_count(page), page);
for (i = 1; i < (1 << order); i++)
set_page_refcounted(page + i);
- split_page_owner(page, order, 0);
- pgalloc_tag_split(page_folio(page), order, 0);
- split_page_memcg(page, order);
+
+ __split_page(page, order);
}
EXPORT_SYMBOL_GPL(split_page);
@@ -5383,9 +5390,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
struct page *page = virt_to_page((void *)addr);
struct page *last = page + nr;
- split_page_owner(page, order, 0);
- pgalloc_tag_split(page_folio(page), order, 0);
- split_page_memcg(page, order);
+ __split_page(page, order);
while (page < --last)
set_page_refcounted(last);
--
2.27.0