From: Christoph Lameter <clameter@sgi.com>
To: linux-kernel@vger.kernel.org
Cc: akpm@osdl.org, Hugh Dickins <hugh@veritas.com>,
Con Kolivas <kernel@kolivas.org>,
Marcelo Tosatti <marcelo@kvack.org>,
Nick Piggin <nickpiggin@yahoo.com.au>,
linux-mm@kvack.org, Andi Kleen <ak@suse.de>,
Dave Chinner <dgc@sgi.com>, Christoph Lameter <clameter@sgi.com>
Subject: [PATCH 05/21] Conversion of nr_pagecache to per zone counter
Date: Mon, 12 Jun 2006 14:13:10 -0700 (PDT)
Message-ID: <20060612211310.20862.738.sendpatchset@schroedinger.engr.sgi.com>
In-Reply-To: <20060612211244.20862.41106.sendpatchset@schroedinger.engr.sgi.com>
Currently a single atomic variable is used to establish the size of the page
cache across the whole machine. The zoned VM counters use the same
implementation method as the nr_pagecache code, but additionally allow the
page cache size to be determined per zone.
Remove the special implementation for nr_pagecache and make it a zoned
counter.
Updates of the page cache counters are always performed with interrupts off,
so we can use the __ (non-interrupt-safe) variants here.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Index: linux-2.6.17-rc6-cl/arch/sparc64/kernel/sys_sunos32.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/arch/sparc64/kernel/sys_sunos32.c 2006-06-12 12:42:42.240680230 -0700
+++ linux-2.6.17-rc6-cl/arch/sparc64/kernel/sys_sunos32.c 2006-06-12 13:00:50.575648675 -0700
@@ -155,7 +155,7 @@ asmlinkage int sunos_brk(u32 baddr)
* simple, it hopefully works in most obvious cases.. Easy to
* fool it, but this should catch most mistakes.
*/
- freepages = get_page_cache_size();
+ freepages = global_page_state(NR_PAGECACHE);
freepages >>= 1;
freepages += nr_free_pages();
freepages += nr_swap_pages;
Index: linux-2.6.17-rc6-cl/arch/sparc/kernel/sys_sunos.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/arch/sparc/kernel/sys_sunos.c 2006-06-12 12:42:42.249468748 -0700
+++ linux-2.6.17-rc6-cl/arch/sparc/kernel/sys_sunos.c 2006-06-12 13:00:50.577601679 -0700
@@ -196,7 +196,7 @@ asmlinkage int sunos_brk(unsigned long b
* simple, it hopefully works in most obvious cases.. Easy to
* fool it, but this should catch most mistakes.
*/
- freepages = get_page_cache_size();
+ freepages = global_page_state(NR_PAGECACHE);
freepages >>= 1;
freepages += nr_free_pages();
freepages += nr_swap_pages;
Index: linux-2.6.17-rc6-cl/fs/proc/proc_misc.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/fs/proc/proc_misc.c 2006-06-12 12:57:23.383401642 -0700
+++ linux-2.6.17-rc6-cl/fs/proc/proc_misc.c 2006-06-12 13:00:50.578578181 -0700
@@ -142,7 +142,8 @@ static int meminfo_read_proc(char *page,
allowed = ((totalram_pages - hugetlb_total_pages())
* sysctl_overcommit_ratio / 100) + total_swap_pages;
- cached = get_page_cache_size() - total_swapcache_pages - i.bufferram;
+ cached = global_page_state(NR_PAGECACHE) -
+ total_swapcache_pages - i.bufferram;
if (cached < 0)
cached = 0;
Index: linux-2.6.17-rc6-cl/include/linux/pagemap.h
===================================================================
--- linux-2.6.17-rc6-cl.orig/include/linux/pagemap.h 2006-06-12 12:42:50.853428498 -0700
+++ linux-2.6.17-rc6-cl/include/linux/pagemap.h 2006-06-12 13:00:50.579554683 -0700
@@ -115,51 +115,6 @@ int add_to_page_cache_lru(struct page *p
extern void remove_from_page_cache(struct page *page);
extern void __remove_from_page_cache(struct page *page);
-extern atomic_t nr_pagecache;
-
-#ifdef CONFIG_SMP
-
-#define PAGECACHE_ACCT_THRESHOLD max(16, NR_CPUS * 2)
-DECLARE_PER_CPU(long, nr_pagecache_local);
-
-/*
- * pagecache_acct implements approximate accounting for pagecache.
- * vm_enough_memory() do not need high accuracy. Writers will keep
- * an offset in their per-cpu arena and will spill that into the
- * global count whenever the absolute value of the local count
- * exceeds the counter's threshold.
- *
- * MUST be protected from preemption.
- * current protection is mapping->page_lock.
- */
-static inline void pagecache_acct(int count)
-{
- long *local;
-
- local = &__get_cpu_var(nr_pagecache_local);
- *local += count;
- if (*local > PAGECACHE_ACCT_THRESHOLD || *local < -PAGECACHE_ACCT_THRESHOLD) {
- atomic_add(*local, &nr_pagecache);
- *local = 0;
- }
-}
-
-#else
-
-static inline void pagecache_acct(int count)
-{
- atomic_add(count, &nr_pagecache);
-}
-#endif
-
-static inline unsigned long get_page_cache_size(void)
-{
- int ret = atomic_read(&nr_pagecache);
- if (unlikely(ret < 0))
- ret = 0;
- return ret;
-}
-
/*
* Return byte-offset into filesystem object for page.
*/
Index: linux-2.6.17-rc6-cl/mm/filemap.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/mm/filemap.c 2006-06-12 12:42:52.024254482 -0700
+++ linux-2.6.17-rc6-cl/mm/filemap.c 2006-06-12 13:00:50.581507687 -0700
@@ -126,7 +126,7 @@ void __remove_from_page_cache(struct pag
radix_tree_delete(&mapping->page_tree, page->index);
page->mapping = NULL;
mapping->nrpages--;
- pagecache_acct(-1);
+ __dec_zone_page_state(page, NR_PAGECACHE);
}
EXPORT_SYMBOL(__remove_from_page_cache);
@@ -424,7 +424,7 @@ int add_to_page_cache(struct page *page,
page->mapping = mapping;
page->index = offset;
mapping->nrpages++;
- pagecache_acct(1);
+ __inc_zone_page_state(page, NR_PAGECACHE);
}
write_unlock_irq(&mapping->tree_lock);
radix_tree_preload_end();
Index: linux-2.6.17-rc6-cl/mm/mmap.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/mm/mmap.c 2006-06-12 12:42:52.037925511 -0700
+++ linux-2.6.17-rc6-cl/mm/mmap.c 2006-06-12 13:00:50.582484189 -0700
@@ -96,7 +96,7 @@ int __vm_enough_memory(long pages, int c
if (sysctl_overcommit_memory == OVERCOMMIT_GUESS) {
unsigned long n;
- free = get_page_cache_size();
+ free = global_page_state(NR_PAGECACHE);
free += nr_swap_pages;
/*
Index: linux-2.6.17-rc6-cl/mm/nommu.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/mm/nommu.c 2006-06-05 17:57:02.000000000 -0700
+++ linux-2.6.17-rc6-cl/mm/nommu.c 2006-06-12 13:00:50.583460691 -0700
@@ -1122,7 +1122,7 @@ int __vm_enough_memory(long pages, int c
if (sysctl_overcommit_memory == OVERCOMMIT_GUESS) {
unsigned long n;
- free = get_page_cache_size();
+ free = global_page_state(NR_PAGECACHE);
free += nr_swap_pages;
/*
Index: linux-2.6.17-rc6-cl/mm/page_alloc.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/mm/page_alloc.c 2006-06-12 12:57:23.385354646 -0700
+++ linux-2.6.17-rc6-cl/mm/page_alloc.c 2006-06-12 13:00:50.584437193 -0700
@@ -2231,16 +2231,11 @@ static int page_alloc_cpu_notify(struct
unsigned long action, void *hcpu)
{
int cpu = (unsigned long)hcpu;
- long *count;
unsigned long *src, *dest;
if (action == CPU_DEAD) {
int i;
- /* Drain local pagecache count. */
- count = &per_cpu(nr_pagecache_local, cpu);
- atomic_add(*count, &nr_pagecache);
- *count = 0;
local_irq_disable();
__drain_pages(cpu);
Index: linux-2.6.17-rc6-cl/mm/swap_state.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/mm/swap_state.c 2006-06-12 12:42:52.062338063 -0700
+++ linux-2.6.17-rc6-cl/mm/swap_state.c 2006-06-12 13:00:50.585413695 -0700
@@ -89,7 +89,7 @@ static int __add_to_swap_cache(struct pa
SetPageSwapCache(page);
set_page_private(page, entry.val);
total_swapcache_pages++;
- pagecache_acct(1);
+ __inc_zone_page_state(page, NR_PAGECACHE);
}
write_unlock_irq(&swapper_space.tree_lock);
radix_tree_preload_end();
@@ -135,7 +135,7 @@ void __delete_from_swap_cache(struct pag
set_page_private(page, 0);
ClearPageSwapCache(page);
total_swapcache_pages--;
- pagecache_acct(-1);
+ __dec_zone_page_state(page, NR_PAGECACHE);
INC_CACHE_INFO(del_total);
}
Index: linux-2.6.17-rc6-cl/include/linux/mmzone.h
===================================================================
--- linux-2.6.17-rc6-cl.orig/include/linux/mmzone.h 2006-06-12 12:57:23.383401642 -0700
+++ linux-2.6.17-rc6-cl/include/linux/mmzone.h 2006-06-12 13:00:50.585413695 -0700
@@ -49,7 +49,7 @@ struct zone_padding {
enum zone_stat_item {
NR_MAPPED, /* mapped into pagetables.
only modified from process context */
-
+ NR_PAGECACHE,
NR_STAT_ITEMS };
struct per_cpu_pages {
Index: linux-2.6.17-rc6-cl/arch/s390/appldata/appldata_mem.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/arch/s390/appldata/appldata_mem.c 2006-06-12 12:42:42.183066607 -0700
+++ linux-2.6.17-rc6-cl/arch/s390/appldata/appldata_mem.c 2006-06-12 13:00:50.586390197 -0700
@@ -130,7 +130,8 @@ static void appldata_get_mem_data(void *
mem_data->totalhigh = P2K(val.totalhigh);
mem_data->freehigh = P2K(val.freehigh);
mem_data->bufferram = P2K(val.bufferram);
- mem_data->cached = P2K(atomic_read(&nr_pagecache) - val.bufferram);
+ mem_data->cached = P2K(global_page_state(NR_PAGECACHE)
+ - val.bufferram);
si_swapinfo(&val);
mem_data->totalswap = P2K(val.totalswap);
Index: linux-2.6.17-rc6-cl/drivers/base/node.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/drivers/base/node.c 2006-06-12 12:57:23.382425140 -0700
+++ linux-2.6.17-rc6-cl/drivers/base/node.c 2006-06-12 13:00:50.587366699 -0700
@@ -69,6 +69,7 @@ static ssize_t node_read_meminfo(struct
"Node %d LowFree: %8lu kB\n"
"Node %d Dirty: %8lu kB\n"
"Node %d Writeback: %8lu kB\n"
+ "Node %d PageCache: %8lu kB\n"
"Node %d Mapped: %8lu kB\n"
"Node %d Slab: %8lu kB\n",
nid, K(i.totalram),
@@ -82,6 +83,7 @@ static ssize_t node_read_meminfo(struct
nid, K(i.freeram - i.freehigh),
nid, K(ps.nr_dirty),
nid, K(ps.nr_writeback),
+ nid, K(node_page_state(nid, NR_PAGECACHE)),
nid, K(node_page_state(nid, NR_MAPPED)),
nid, K(ps.nr_slab));
n += hugetlb_report_node_meminfo(nid, buf + n);
Index: linux-2.6.17-rc6-cl/mm/vmstat.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/mm/vmstat.c 2006-06-12 13:00:45.074036260 -0700
+++ linux-2.6.17-rc6-cl/mm/vmstat.c 2006-06-12 13:01:24.304028650 -0700
@@ -464,6 +464,7 @@ struct seq_operations fragmentation_op =
static char *vmstat_text[] = {
/* Zoned VM counters */
"nr_mapped",
+ "nr_pagecache",
/* Page state */
"nr_dirty",