* [PATCH v1 0/2] mm/highmem: don't track highmem pages manually
@ 2024-06-07  8:37 David Hildenbrand
  2024-06-07  8:37 ` [PATCH v1 1/2] mm/highmem: reimplement totalhigh_pages() by walking zones David Hildenbrand
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-06-07  8:37 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, David Hildenbrand, Andrew Morton, Wei Yang

Let's remove the highmem special-casing from adjust_managed_page_count();
it only causes confusion about why memblock manually adjusts
totalram_pages while __free_pages_core() only adjusts the zone's managed
pages -- and what about the highmem pages that
adjust_managed_page_count() updates?
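
For reference, the special-casing in question is roughly the
CONFIG_HIGHMEM branch below (a sketch of the current
adjust_managed_page_count(), not the exact diff):

void adjust_managed_page_count(struct page *page, long count)
{
	atomic_long_add(count, &page_zone(page)->managed_pages);
	totalram_pages_add(count);
#ifdef CONFIG_HIGHMEM
	/* the manual highmem bookkeeping that this series removes */
	if (PageHighMem(page))
		totalhigh_pages_add(count);
#endif
}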

Now, we only maintain totalram_pages and a zone's managed pages,
independent of highmem support. We can derive the number of highmem pages
simply by summing up the managed pages of the relevant (highmem) zones. I
don't think there is any particular fast path that needs a maximally
efficient totalhigh_pages() implementation.
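
Concretely, the count can be derived by walking the populated zones and
summing the managed pages of the highmem ones -- something along these
lines, using the existing for_each_populated_zone(), is_highmem() and
zone_managed_pages() helpers (a sketch with an illustrative function
name, not necessarily the exact code in patch #1):

static unsigned long highmem_managed_pages(void)
{
	struct zone *zone;
	unsigned long pages = 0;

	for_each_populated_zone(zone)
		if (is_highmem(zone))
			pages += zone_managed_pages(zone);

	return pages;
}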

Note that highmem memory is currently initialized using
free_highmem_page()->free_reserved_page(), not __free_pages_core(). In the
future we might want to also use __free_pages_core() to initialize
highmem memory, to make that less special, and consider moving
totalram_pages updates into __free_pages_core() [1], so we can just use
adjust_managed_page_count() in there as well.

Booting a simple kernel in QEMU reveals no highmem accounting change:

Before:
  Memory: 3095448K/3145208K available (14802K kernel code, 2073K rwdata,
  5000K rodata, 740K init, 556K bss, 49760K reserved, 0K cma-reserved,
  2244488K highmem)

After:
  Memory: 3095276K/3145208K available (14802K kernel code, 2073K rwdata,
  5000K rodata, 740K init, 556K bss, 49932K reserved, 0K cma-reserved,
  2244488K highmem)

[1] https://lkml.kernel.org/r/20240601133402.2675-1-richard.weiyang@gmail.com

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Wei Yang <richard.weiyang@gmail.com>

David Hildenbrand (2):
  mm/highmem: reimplement totalhigh_pages() by walking zones
  mm/highmem: make nr_free_highpages() return "unsigned long"

 include/linux/highmem-internal.h | 17 ++++++-----------
 include/linux/highmem.h          |  2 +-
 mm/highmem.c                     | 20 +++++++++++++++-----
 mm/page_alloc.c                  |  4 ----
 4 files changed, 22 insertions(+), 21 deletions(-)


base-commit: 19b8422c5bd56fb5e7085995801c6543a98bda1f
-- 
2.45.1




Thread overview: 13+ messages
2024-06-07  8:37 [PATCH v1 0/2] mm/highmem: don't track highmem pages manually David Hildenbrand
2024-06-07  8:37 ` [PATCH v1 1/2] mm/highmem: reimplement totalhigh_pages() by walking zones David Hildenbrand
2024-06-08  0:48   ` Wei Yang
2024-06-10  3:23   ` Oscar Salvador
2024-06-07  8:37 ` [PATCH v1 2/2] mm/highmem: make nr_free_highpages() return "unsigned long" David Hildenbrand
2024-06-08  0:51   ` Wei Yang
2024-06-10  3:40   ` Oscar Salvador
2024-06-10  8:22     ` David Hildenbrand
2024-06-11  0:56       ` Wei Yang
     [not found]         ` <04b3dda2-c6a8-4f26-90b8-75fe7580d63e@redhat.com>
2024-06-12  7:01           ` Wei Yang
2024-06-12  7:22             ` David Hildenbrand
2024-06-12  7:34               ` Wei Yang
2024-06-08  0:45 ` [PATCH v1 0/2] mm/highmem: don't track highmem pages manually Wei Yang
