From: "David Hildenbrand (Arm)" <david@kernel.org>
To: "Tejun Heo" <tj@kernel.org>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Koutný" <mkoutny@suse.com>,
"Jonathan Corbet" <corbet@lwn.net>,
"Shuah Khan" <skhan@linuxfoundation.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Lorenzo Stoakes" <ljs@kernel.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
"Vlastimil Babka" <vbabka@kernel.org>,
"Mike Rapoport" <rppt@kernel.org>,
"Suren Baghdasaryan" <surenb@google.com>,
"Michal Hocko" <mhocko@suse.com>,
"Rik van Riel" <riel@surriel.com>, "Harry Yoo" <harry@kernel.org>,
"Jann Horn" <jannh@google.com>,
"Brendan Jackman" <jackmanb@google.com>,
"Zi Yan" <ziy@nvidia.com>, "Pedro Falcato" <pfalcato@suse.de>,
"Matthew Wilcox" <willy@infradead.org>
Cc: cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org,
"David Hildenbrand (Arm)" <david@kernel.org>
Subject: [PATCH RFC 07/13] mm/rmap: remove CONFIG_PAGE_MAPCOUNT
Date: Sun, 12 Apr 2026 20:59:38 +0200
Message-ID: <20260412-mapcount-v1-7-05e8dfab52e0@kernel.org>
In-Reply-To: <20260412-mapcount-v1-0-05e8dfab52e0@kernel.org>

page->_mapcount is still updated but essentially unused. So let's
remove CONFIG_PAGE_MAPCOUNT. Given that CONFIG_NO_PAGE_MAPCOUNT is the
only remaining variant, that Kconfig option can go as well.

We can replace the remaining instances of "orig_nr_pages" with
"nr_pages", as the latter is no longer modified.
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
---
Documentation/mm/transhuge.rst | 3 ---
include/linux/rmap.h | 11 +----------
mm/Kconfig | 17 -----------------
mm/rmap.c | 36 ++++++------------------------------
4 files changed, 7 insertions(+), 60 deletions(-)
diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index f200c1ac19cb..eb5ac076e4c6 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -129,9 +129,6 @@ pages:
corresponding mapcount), and the current status ("maybe mapped shared" vs.
"mapped exclusively").
- With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
- page->_mapcount.
-
split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e5569f5fdaec..4894e43e5f52 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -493,8 +493,6 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
enum pgtable_level level)
{
- const int orig_nr_pages = nr_pages;
-
__folio_rmap_sanity_checks(folio, page, nr_pages, level);
switch (level) {
@@ -504,12 +502,7 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
break;
}
- if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) {
- do {
- atomic_inc(&page->_mapcount);
- } while (page++, --nr_pages > 0);
- }
- folio_add_large_mapcount(folio, orig_nr_pages, dst_vma);
+ folio_add_large_mapcount(folio, nr_pages, dst_vma);
break;
case PGTABLE_LEVEL_PMD:
case PGTABLE_LEVEL_PUD:
@@ -608,8 +601,6 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
do {
if (PageAnonExclusive(page))
ClearPageAnonExclusive(page);
- if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT))
- atomic_inc(&page->_mapcount);
} while (page++, --nr_pages > 0);
folio_add_large_mapcount(folio, orig_nr_pages, dst_vma);
break;
diff --git a/mm/Kconfig b/mm/Kconfig
index bd283958d675..576db4fdf16e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -948,25 +948,8 @@ config READ_ONLY_THP_FOR_FS
support of file THPs will be developed in the next few release
cycles.
-config NO_PAGE_MAPCOUNT
- bool "No per-page mapcount (EXPERIMENTAL)"
- help
- Do not maintain per-page mapcounts for pages part of larger
- allocations, such as transparent huge pages.
-
- When this config option is enabled, some interfaces that relied on
- this information will rely on less-precise per-allocation information
- instead: for example, using the average per-page mapcount in such
- a large allocation instead of the per-page mapcount.
-
- EXPERIMENTAL because the impact of some changes is still unclear.
-
endif # TRANSPARENT_HUGEPAGE
-# simple helper to make the code a bit easier to read
-config PAGE_MAPCOUNT
- def_bool !NO_PAGE_MAPCOUNT
-
#
# The architecture supports pgtable leaves that is larger than PAGE_SIZE
#
diff --git a/mm/rmap.c b/mm/rmap.c
index df42c38fe387..27488183448b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1354,7 +1354,6 @@ static __always_inline void __folio_add_rmap(struct folio *folio,
enum pgtable_level level)
{
int nr = 0, nr_pmdmapped = 0, mapcount;
- const int orig_nr_pages = nr_pages;
__folio_rmap_sanity_checks(folio, page, nr_pages, level);
@@ -1365,14 +1364,8 @@ static __always_inline void __folio_add_rmap(struct folio *folio,
break;
}
- if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) {
- do {
- atomic_inc(&page->_mapcount);
- } while (page++, --nr_pages > 0);
- }
-
- mapcount = folio_add_return_large_mapcount(folio, orig_nr_pages, vma);
- if (mapcount == orig_nr_pages)
+ mapcount = folio_add_return_large_mapcount(folio, nr_pages, vma);
+ if (mapcount == nr_pages)
nr = folio_large_nr_pages(folio);
break;
case PGTABLE_LEVEL_PMD:
@@ -1518,15 +1511,6 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
VM_WARN_ON_FOLIO(folio_test_large(folio) &&
folio_entire_mapcount(folio) > 1 &&
PageAnonExclusive(cur_page), folio);
- if (IS_ENABLED(CONFIG_NO_PAGE_MAPCOUNT))
- continue;
-
- /*
- * While PTE-mapping a THP we have a PMD and a PTE
- * mapping.
- */
- VM_WARN_ON_FOLIO(atomic_read(&cur_page->_mapcount) > 0 &&
- PageAnonExclusive(cur_page), folio);
}
/*
@@ -1628,14 +1612,12 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
int i;
nr = folio_large_nr_pages(folio);
- for (i = 0; i < nr; i++) {
- struct page *page = folio_page(folio, i);
+ if (exclusive) {
+ for (i = 0; i < nr; i++) {
+ struct page *page = folio_page(folio, i);
- if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT))
- /* increment count (starts at -1) */
- atomic_set(&page->_mapcount, 0);
- if (exclusive)
SetPageAnonExclusive(page);
+ }
}
folio_set_large_mapcount(folio, nr, vma);
@@ -1769,12 +1751,6 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
if (!mapcount)
nr = folio_large_nr_pages(folio);
- if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) {
- do {
- atomic_dec(&page->_mapcount);
- } while (page++, --nr_pages > 0);
- }
-
partially_mapped = __folio_certainly_partially_mapped(folio, mapcount);
break;
case PGTABLE_LEVEL_PMD:
--
2.43.0
Thread overview: 14+ messages
2026-04-12 18:59 [PATCH RFC 00/13] mm/rmap: support arbitrary folio mappings David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 01/13] mm/rmap: remove folio->_nr_pages_mapped David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 02/13] fs/proc/task_mmu: remove CONFIG_PAGE_MAPCOUNT handling for "mapmax" David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 03/13] fs/proc/page: remove CONFIG_PAGE_MAPCOUNT handling for kpagecount David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 04/13] fs/proc/task_mmu: remove CONFIG_PAGE_MAPCOUNT handling for PM_MMAP_EXCLUSIVE David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 05/13] fs/proc/task_mmu: remove mapcount comment in smaps_account() David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 06/13] fs/proc/task_mmu: remove CONFIG_PAGE_MAPCOUNT handling " David Hildenbrand (Arm)
2026-04-12 18:59 ` David Hildenbrand (Arm) [this message]
2026-04-12 18:59 ` [PATCH RFC 08/13] mm: re-consolidate folio->_entire_mapcount David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 09/13] mm: move _large_mapcount to _mapcount in page[1] of a large folio David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 10/13] mm: re-consolidate folio->_pincount David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 11/13] mm/rmap: stop using the entire mapcount for hugetlb folios David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 12/13] mm/rmap: large mapcount interface cleanups David Hildenbrand (Arm)
2026-04-12 18:59 ` [PATCH RFC 13/13] mm/rmap: support arbitrary folio mappings David Hildenbrand (Arm)