* Re: new procfs memory analysis feature
@ 2006-12-11 8:13 Albert Cahalan
2006-12-12 1:15 ` Joe Green
0 siblings, 1 reply; 10+ messages in thread
From: Albert Cahalan @ 2006-12-11 8:13 UTC (permalink / raw)
To: linux-mm, linux-kernel, akpm, dsingleton, jgreen
David Singleton writes:
> Add variation of /proc/PID/smaps called /proc/PID/pagemaps.
> Shows reference counts for individual pages instead of aggregate totals.
> Allows more detailed memory usage information for memory analysis tools.
> An example of the output shows the shared text VMA for ld.so and
> the share depths of the pages in the VMA.
>
> a7f4b000-a7f65000 r-xp 00000000 00:0d 19185826 /lib/ld-2.5.90.so
> 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 8 8 8 13 13 13 13 13 13 13
Arrrgh! Not another ghastly maps file!
The original was mildly defective. Somebody thought " (deleted)" was
a reserved filename extension. Somebody thought "/SYSV*" was also
some kind of reserved namespace. Nobody ever thought to bother with
a properly specified grammar; it's more fun to blame application
developers for guessing as best they can. The use of %08lx is quite
a wart too, looking ridiculous on 64-bit systems.
Now we have /proc/*/smaps, which should make decent programmers cry.
Really now, WTF? It has compact non-obvious parts, which would be a
nice choice for performance if not for being MIXED with wordy bloated
parts of a completely different nature. Parsing is terribly painful.
Supposedly there is a NUMA version too.
Along the way, nobody bothered to add support for describing the
page size (IMHO your format ***severely*** needs this) or for the
various VMA flags to indicate if memory is locked, randomized, etc.
There can be a million pages in a mapping for a 32-bit process.
If my guess (since you too failed to document your format) is right,
you propose to have one decimal value per page. In other words,
the lines of this file can be megabytes long without even getting
to the issue of 64-bit hardware. This is no text file!
How about a proper system call? Enough is enough already. Take a
look at the mincore system call. Imagine it taking a PID. The 7
available bits probably won't do, so expand that a bit. Just take
the user-allowed parts of the VMA and/or PTE (both variants are
good to have) and put them in a struct. There may be some value
in having both low-privilege and high-privilege versions of this.
BTW, you might wish to ensure that Wine can implement VirtualQueryEx
perfectly based on this.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
* Re: new procfs memory analysis feature
  2006-12-11  8:13 new procfs memory analysis feature Albert Cahalan
@ 2006-12-12  1:15 ` Joe Green
  0 siblings, 0 replies; 10+ messages in thread
From: Joe Green @ 2006-12-12 1:15 UTC (permalink / raw)
To: Albert Cahalan; +Cc: linux-mm, linux-kernel, akpm, dsingleton
Albert Cahalan wrote:
> David Singleton writes:
>
>> Add variation of /proc/PID/smaps called /proc/PID/pagemaps.
>> Shows reference counts for individual pages instead of aggregate totals.
>> Allows more detailed memory usage information for memory analysis tools.
>> An example of the output shows the shared text VMA for ld.so and
>> the share depths of the pages in the VMA.
>>
>> a7f4b000-a7f65000 r-xp 00000000 00:0d 19185826   /lib/ld-2.5.90.so
>> 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 8 8 8 13 13 13 13 13 13 13
>
> Arrrgh! Not another ghastly maps file!
>
> Now we have /proc/*/smaps, which should make decent programmers cry.

Yes, that's what we based this implementation on. :)

> Along the way, nobody bothered to add support for describing the
> page size (IMHO your format ***severely*** needs this)

Since the map size and an entry for each page is given, it's possible
to figure out the page size, assuming each map uses only a single page
size.  But adding the page size would be reasonable.

> There can be a million pages in a mapping for a 32-bit process.
> If my guess (since you too failed to document your format) is right,
> you propose to have one decimal value per page.

Yes, that's right.  We considered using repeat counts for sequences of
pages with the same reference count (quite common), but it hasn't been
necessary in our application (see below).

> In other words, the lines of this file can be megabytes long without
> even getting to the issue of 64-bit hardware. This is no text file!
>
> How about a proper system call?
Our use for this is to optimize memory usage on very small embedded
systems, so the number of pages hasn't been a problem.  For the same
reason, not needing a special program on the target system to read the
data is an advantage, because each extra program needed adds to the
footprint problem.  The data is taken off the target and interpreted on
another system, which often is of a different architecture, so the
portable text format is useful also.

This isn't meant to say your arguments aren't important; I'm just
explaining why this implementation is useful for us.

--
Joe Green <jgreen@mvista.com>
MontaVista Software, Inc.
[parent not found: <45789124.1070207@mvista.com>]
* Re: new procfs memory analysis feature
  [not found] <45789124.1070207@mvista.com>
@ 2006-12-07 22:36 ` Andrew Morton
  2006-12-08  0:30   ` david singleton
  ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Andrew Morton @ 2006-12-07 22:36 UTC (permalink / raw)
To: David Singleton; +Cc: linux-kernel, linux-mm
On Thu, 07 Dec 2006 14:09:40 -0800
David Singleton <dsingleton@mvista.com> wrote:

> Andrew,
>
>     this implements a feature for memory analysis tools to go along
> with smaps.  It shows reference counts for individual pages instead
> of aggregate totals for a given VMA.  It helps memory analysis tools
> determine how well pages are being shared, or not, in shared
> libraries, etc.
>
> The per page information is presented in /proc/<pid>/pagemaps.

I think the concept is not a bad one, frankly - this requirement arises
frequently.  What bugs me is that it only displays the mapcount and
dirtiness.  Perhaps there are other things which people want to know.
I'm not sure what they would be though.

I wonder if it would be insane to display the info via a filesystem:

	cat /mnt/pagemaps/$(pidof crond)/pgd0/pmd1/pte45

Probably it would.

> Index: linux-2.6.18/Documentation/filesystems/proc.txt

Against 2.6.18?  I didn't know you could still buy copies of that ;)

This patch's changelog should include sample output.

Your email client wordwraps patches, and it replaces tabs with spaces.

> ...
> +static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pte_t *pte, ptent;
> +	spinlock_t *ptl;
> +	struct page *page;
> +	int mapcount = 0;
> +
> +	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> +	do {
> +		ptent = *pte;
> +		if (pte_present(ptent)) {
> +			page = vm_normal_page(vma, addr, ptent);
> +			if (page) {
> +				if (pte_dirty(ptent))
> +					mapcount = -page_mapcount(page);
> +				else
> +					mapcount = page_mapcount(page);
> +			} else {
> +				mapcount = 1;
> +			}
> +		}
> +		seq_printf(m, " %d", mapcount);
> +
> +	} while (pte++, addr += PAGE_SIZE, addr != end);

Well that's cute.  As long as both seq_file and pte-pages are of size
PAGE_SIZE, and as long as pte's are more than three bytes, this will not
overflow the seq_file output buffer.

hm.  Unless the pages are all dirty and the mapcounts are all 10000.  I
think it will overflow then?

> +
> +static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t *pud,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pmd_t *pmd;
> +	unsigned long next;
> +
> +	pmd = pmd_offset(pud, addr);
> +	do {
> +		next = pmd_addr_end(addr, end);
> +		if (pmd_none_or_clear_bad(pmd))
> +			continue;
> +		pagemaps_pte_range(vma, pmd, addr, next, m);
> +	} while (pmd++, addr = next, addr != end);
> +}
> +
> +static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pud_t *pud;
> +	unsigned long next;
> +
> +	pud = pud_offset(pgd, addr);
> +	do {
> +		next = pud_addr_end(addr, end);
> +		if (pud_none_or_clear_bad(pud))
> +			continue;
> +		pagemaps_pmd_range(vma, pud, addr, next, m);
> +	} while (pud++, addr = next, addr != end);
> +}
> +
> +static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
> +			unsigned long addr, unsigned long end,
> +			struct seq_file *m)
> +{
> +	pgd_t *pgd;
> +	unsigned long next;
> +
> +	pgd = pgd_offset(vma->vm_mm, addr);
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		if (pgd_none_or_clear_bad(pgd))
> +			continue;
> +		pagemaps_pud_range(vma, pgd, addr, next, m);
> +	} while (pgd++, addr = next, addr != end);
> +}

I think that's our eighth open-coded pagetable walker.  Apparently they
are all slightly different.  Perhaps we should do something about that
one day.
* Re: new procfs memory analysis feature
  2006-12-07 22:36 ` Andrew Morton
@ 2006-12-08  0:30   ` david singleton
  2006-12-08  1:07   ` david singleton
  2006-12-08  6:21   ` Paul Cameron Davies
  2 siblings, 0 replies; 10+ messages in thread
From: david singleton @ 2006-12-08 0:30 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel
On Dec 7, 2006, at 2:36 PM, Andrew Morton wrote:
> On Thu, 07 Dec 2006 14:09:40 -0800
> David Singleton <dsingleton@mvista.com> wrote:
>
>> Andrew,
>>
>>     this implements a feature for memory analysis tools to go along
>> with smaps.  It shows reference counts for individual pages instead
>> of aggregate totals for a given VMA.  It helps memory analysis tools
>> determine how well pages are being shared, or not, in shared
>> libraries, etc.
>>
>> The per page information is presented in /proc/<pid>/pagemaps.
>
> I think the concept is not a bad one, frankly - this requirement arises
> frequently.  What bugs me is that it only displays the mapcount and
> dirtiness.  Perhaps there are other things which people want to know.
> I'm not sure what they would be though.
>
> I wonder if it would be insane to display the info via a filesystem:
>
>	cat /mnt/pagemaps/$(pidof crond)/pgd0/pmd1/pte45
>
> Probably it would.
>
>> Index: linux-2.6.18/Documentation/filesystems/proc.txt
>
> Against 2.6.18?  I didn't know you could still buy copies of that ;)

whoops, I have an old copy.  let me make a patch against 2.6.19.

> This patch's changelog should include sample output.

okay.

> Your email client wordwraps patches, and it replaces tabs with spaces.

Is an attachment okay?  gziped tarfile?  a new mailer?

David

>
>> ...
>> +static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pte_t *pte, ptent;
>> +	spinlock_t *ptl;
>> +	struct page *page;
>> +	int mapcount = 0;
>> +
>> +	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>> +	do {
>> +		ptent = *pte;
>> +		if (pte_present(ptent)) {
>> +			page = vm_normal_page(vma, addr, ptent);
>> +			if (page) {
>> +				if (pte_dirty(ptent))
>> +					mapcount = -page_mapcount(page);
>> +				else
>> +					mapcount = page_mapcount(page);
>> +			} else {
>> +				mapcount = 1;
>> +			}
>> +		}
>> +		seq_printf(m, " %d", mapcount);
>> +
>> +	} while (pte++, addr += PAGE_SIZE, addr != end);
>
> Well that's cute.  As long as both seq_file and pte-pages are of size
> PAGE_SIZE, and as long as pte's are more than three bytes, this will
> not overflow the seq_file output buffer.
>
> hm.  Unless the pages are all dirty and the mapcounts are all 10000.  I
> think it will overflow then?
>
>> +
>> +static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t *pud,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pmd_t *pmd;
>> +	unsigned long next;
>> +
>> +	pmd = pmd_offset(pud, addr);
>> +	do {
>> +		next = pmd_addr_end(addr, end);
>> +		if (pmd_none_or_clear_bad(pmd))
>> +			continue;
>> +		pagemaps_pte_range(vma, pmd, addr, next, m);
>> +	} while (pmd++, addr = next, addr != end);
>> +}
>> +
>> +static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pud_t *pud;
>> +	unsigned long next;
>> +
>> +	pud = pud_offset(pgd, addr);
>> +	do {
>> +		next = pud_addr_end(addr, end);
>> +		if (pud_none_or_clear_bad(pud))
>> +			continue;
>> +		pagemaps_pmd_range(vma, pud, addr, next, m);
>> +	} while (pud++, addr = next, addr != end);
>> +}
>> +
>> +static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pgd_t *pgd;
>> +	unsigned long next;
>> +
>> +	pgd = pgd_offset(vma->vm_mm, addr);
>> +	do {
>> +		next = pgd_addr_end(addr, end);
>> +		if (pgd_none_or_clear_bad(pgd))
>> +			continue;
>> +		pagemaps_pud_range(vma, pgd, addr, next, m);
>> +	} while (pgd++, addr = next, addr != end);
>> +}
>
> I think that's our eighth open-coded pagetable walker.  Apparently
> they are all slightly different.  Perhaps we should do something about
> that one day.
* Re: new procfs memory analysis feature
  2006-12-07 22:36 ` Andrew Morton
  2006-12-08  0:30   ` david singleton
@ 2006-12-08  1:07   ` david singleton
  2006-12-08  1:46     ` Andrew Morton
  2006-12-08  6:21   ` Paul Cameron Davies
  2 siblings, 1 reply; 10+ messages in thread
From: david singleton @ 2006-12-08 1:07 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 32 bytes --]

Attached is the 2.6.19 patch.

[-- Attachment #2: pagemaps.patch --]
[-- Type: application/octet-stream, Size: 6425 bytes --]

Signed-off-by: David Singleton <dsingleton@mvista.com>
Signed-off-by: Joe Green <jgreen@mvista.com>

Add variation of /proc/PID/smaps called /proc/PID/pagemaps.
Shows reference counts for individual pages instead of aggregate totals.
Allows more detailed memory usage information for memory analysis tools.
An example of the output shows the shared text VMA for ld.so and
the share depths of the pages in the VMA.

a7f4b000-a7f65000 r-xp 00000000 00:0d 19185826   /lib/ld-2.5.90.so
11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 8 8 8 13 13 13 13 13 13 13

 Documentation/filesystems/proc.txt |    3 -
 fs/proc/base.c                     |    2
 fs/proc/internal.h                 |    5 -
 fs/proc/task_mmu.c                 |  110 +++++++++++++++++++++++++++++++++++++
 4 files changed, 115 insertions(+), 5 deletions(-)

Index: linux-2.6.19/Documentation/filesystems/proc.txt
===================================================================
--- linux-2.6.19.orig/Documentation/filesystems/proc.txt
+++ linux-2.6.19/Documentation/filesystems/proc.txt
@@ -130,12 +130,13 @@ Table 1-1: Process specific entries in /
  fd        Directory, which contains all file descriptors
  maps      Memory maps to executables and library files (2.4)
  mem       Memory held by this process
+ pagemaps  Based on maps, presents page ref counts for each mapped file
  root      Link to the root directory of this process
+ smaps     Extension based on maps, presenting the rss size for each mapped file
  stat      Process status
  statm     Process memory status information
  status    Process status in human readable form
  wchan     If CONFIG_KALLSYMS is set, a pre-decoded wchan
- smaps     Extension based on maps, presenting the rss size for each mapped file
 ..............................................................................

 For example, to get the status information of a process, all you have to do is

Index: linux-2.6.19/fs/proc/base.c
===================================================================
--- linux-2.6.19.orig/fs/proc/base.c
+++ linux-2.6.19/fs/proc/base.c
@@ -1773,6 +1773,7 @@ static struct pid_entry tgid_base_stuff[
 	REG("mountstats", S_IRUSR, mountstats),
 #ifdef CONFIG_MMU
 	REG("smaps",    S_IRUGO, smaps),
+	REG("pagemaps", S_IRUGO, pagemaps),
 #endif
 #ifdef CONFIG_SECURITY
 	DIR("attr",     S_IRUGO|S_IXUGO, attr_dir),
@@ -2047,6 +2048,7 @@ static struct pid_entry tid_base_stuff[]
 	REG("mounts",   S_IRUGO, mounts),
 #ifdef CONFIG_MMU
 	REG("smaps",    S_IRUGO, smaps),
+	REG("pagemaps", S_IRUGO, pagemaps),
 #endif
 #ifdef CONFIG_SECURITY
 	DIR("attr",     S_IRUGO|S_IXUGO, attr_dir),

Index: linux-2.6.19/fs/proc/internal.h
===================================================================
--- linux-2.6.19.orig/fs/proc/internal.h
+++ linux-2.6.19/fs/proc/internal.h
@@ -41,10 +41,7 @@ extern int proc_pid_statm(struct task_st
 extern struct file_operations proc_maps_operations;
 extern struct file_operations proc_numa_maps_operations;
 extern struct file_operations proc_smaps_operations;
-
-extern struct file_operations proc_maps_operations;
-extern struct file_operations proc_numa_maps_operations;
-extern struct file_operations proc_smaps_operations;
+extern struct file_operations proc_pagemaps_operations;

 void free_proc_entry(struct proc_dir_entry *de);

Index: linux-2.6.19/fs/proc/task_mmu.c
===================================================================
--- linux-2.6.19.orig/fs/proc/task_mmu.c
+++ linux-2.6.19/fs/proc/task_mmu.c
@@ -429,6 +429,116 @@ static int do_maps_open(struct inode *in
 	return ret;
 }

+static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+			unsigned long addr, unsigned long end,
+			struct seq_file *m)
+{
+	pte_t *pte, ptent;
+	spinlock_t *ptl;
+	struct page *page;
+	int mapcount = 0;
+
+	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+	do {
+		ptent = *pte;
+		if (pte_present(ptent)) {
+			page = vm_normal_page(vma, addr, ptent);
+			if (page) {
+				if (pte_dirty(ptent))
+					mapcount = -page_mapcount(page);
+				else
+					mapcount = page_mapcount(page);
+			} else {
+				mapcount = 1;
+			}
+		}
+		seq_printf(m, " %d", mapcount);
+
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+	seq_putc(m, '\n');
+
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
+
+}
+
+static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t *pud,
+			unsigned long addr, unsigned long end,
+			struct seq_file *m)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = pmd_addr_end(addr, end);
+		if (pmd_none_or_clear_bad(pmd))
+			continue;
+		pagemaps_pte_range(vma, pmd, addr, next, m);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
+			unsigned long addr, unsigned long end,
+			struct seq_file *m)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		if (pud_none_or_clear_bad(pud))
+			continue;
+		pagemaps_pmd_range(vma, pud, addr, next, m);
+	} while (pud++, addr = next, addr != end);
+}
+
+static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
+			unsigned long addr, unsigned long end,
+			struct seq_file *m)
+{
+	pgd_t *pgd;
+	unsigned long next;
+
+	pgd = pgd_offset(vma->vm_mm, addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		if (pgd_none_or_clear_bad(pgd))
+			continue;
+		pagemaps_pud_range(vma, pgd, addr, next, m);
+	} while (pgd++, addr = next, addr != end);
+}
+
+static int show_pagemap(struct seq_file *m, void *v)
+{
+	struct vm_area_struct *vma = v;
+
+	show_map_internal(m, v, NULL);
+	if (vma->vm_mm && !is_vm_hugetlb_page(vma))
+		pagemaps_pgd_range(vma, vma->vm_start, vma->vm_end, m);
+	return 0;
+}
+
+static struct seq_operations proc_pid_pagemaps_op = {
+	.start	= m_start,
+	.next	= m_next,
+	.stop	= m_stop,
+	.show	= show_pagemap
+};
+
+static int pagemaps_open(struct inode *inode, struct file *file)
+{
+	return do_maps_open(inode, file, &proc_pid_pagemaps_op);
+}
+
+struct file_operations proc_pagemaps_operations = {
+	.open		= pagemaps_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= seq_release_private,
+};
+
 static int maps_open(struct inode *inode, struct file *file)
 {
 	return do_maps_open(inode, file, &proc_pid_maps_op);

[-- Attachment #3: Type: text/plain, Size: 4621 bytes --]

On Dec 7, 2006, at 2:36 PM, Andrew Morton wrote:
> On Thu, 07 Dec 2006 14:09:40 -0800
> David Singleton <dsingleton@mvista.com> wrote:
>
>> Andrew,
>>
>>     this implements a feature for memory analysis tools to go along
>> with smaps.  It shows reference counts for individual pages instead
>> of aggregate totals for a given VMA.  It helps memory analysis tools
>> determine how well pages are being shared, or not, in shared
>> libraries, etc.
>>
>> The per page information is presented in /proc/<pid>/pagemaps.
>
> I think the concept is not a bad one, frankly - this requirement arises
> frequently.  What bugs me is that it only displays the mapcount and
> dirtiness.  Perhaps there are other things which people want to know.
> I'm not sure what they would be though.
>
> I wonder if it would be insane to display the info via a filesystem:
>
>	cat /mnt/pagemaps/$(pidof crond)/pgd0/pmd1/pte45
>
> Probably it would.
>
>> Index: linux-2.6.18/Documentation/filesystems/proc.txt
>
> Against 2.6.18?  I didn't know you could still buy copies of that ;)
>
> This patch's changelog should include sample output.
>
> Your email client wordwraps patches, and it replaces tabs with spaces.
>
>> ...
>>
>> +static void pagemaps_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pte_t *pte, ptent;
>> +	spinlock_t *ptl;
>> +	struct page *page;
>> +	int mapcount = 0;
>> +
>> +	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>> +	do {
>> +		ptent = *pte;
>> +		if (pte_present(ptent)) {
>> +			page = vm_normal_page(vma, addr, ptent);
>> +			if (page) {
>> +				if (pte_dirty(ptent))
>> +					mapcount = -page_mapcount(page);
>> +				else
>> +					mapcount = page_mapcount(page);
>> +			} else {
>> +				mapcount = 1;
>> +			}
>> +		}
>> +		seq_printf(m, " %d", mapcount);
>> +
>> +	} while (pte++, addr += PAGE_SIZE, addr != end);
>
> Well that's cute.  As long as both seq_file and pte-pages are of size
> PAGE_SIZE, and as long as pte's are more than three bytes, this will
> not overflow the seq_file output buffer.
>
> hm.  Unless the pages are all dirty and the mapcounts are all 10000.  I
> think it will overflow then?
>
>> +
>> +static inline void pagemaps_pmd_range(struct vm_area_struct *vma, pud_t *pud,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pmd_t *pmd;
>> +	unsigned long next;
>> +
>> +	pmd = pmd_offset(pud, addr);
>> +	do {
>> +		next = pmd_addr_end(addr, end);
>> +		if (pmd_none_or_clear_bad(pmd))
>> +			continue;
>> +		pagemaps_pte_range(vma, pmd, addr, next, m);
>> +	} while (pmd++, addr = next, addr != end);
>> +}
>> +
>> +static inline void pagemaps_pud_range(struct vm_area_struct *vma, pgd_t *pgd,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pud_t *pud;
>> +	unsigned long next;
>> +
>> +	pud = pud_offset(pgd, addr);
>> +	do {
>> +		next = pud_addr_end(addr, end);
>> +		if (pud_none_or_clear_bad(pud))
>> +			continue;
>> +		pagemaps_pmd_range(vma, pud, addr, next, m);
>> +	} while (pud++, addr = next, addr != end);
>> +}
>> +
>> +static inline void pagemaps_pgd_range(struct vm_area_struct *vma,
>> +			unsigned long addr, unsigned long end,
>> +			struct seq_file *m)
>> +{
>> +	pgd_t *pgd;
>> +	unsigned long next;
>> +
>> +	pgd = pgd_offset(vma->vm_mm, addr);
>> +	do {
>> +		next = pgd_addr_end(addr, end);
>> +		if (pgd_none_or_clear_bad(pgd))
>> +			continue;
>> +		pagemaps_pud_range(vma, pgd, addr, next, m);
>> +	} while (pgd++, addr = next, addr != end);
>> +}
>
> I think that's our eighth open-coded pagetable walker.  Apparently
> they are all slightly different.  Perhaps we should do something about
> that one day.
* Re: new procfs memory analysis feature
  2006-12-08  1:07 ` david singleton
@ 2006-12-08  1:46   ` Andrew Morton
  2006-12-08  1:53     ` david singleton
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2006-12-08 1:46 UTC (permalink / raw)
To: david singleton; +Cc: linux-mm, linux-kernel
On Thu, 7 Dec 2006 17:07:22 -0800
david singleton <dsingleton@mvista.com> wrote:

> Attached is the 2.6.19 patch.

It still has the overflow bug.
* Re: new procfs memory analysis feature
  2006-12-08  1:46 ` Andrew Morton
@ 2006-12-08  1:53   ` david singleton
  0 siblings, 0 replies; 10+ messages in thread
From: david singleton @ 2006-12-08 1:53 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel
On Dec 7, 2006, at 5:46 PM, Andrew Morton wrote:
> On Thu, 7 Dec 2006 17:07:22 -0800
> david singleton <dsingleton@mvista.com> wrote:
>
>> Attached is the 2.6.19 patch.
>
> It still has the overflow bug.
>
>> +	do {
>> +		ptent = *pte;
>> +		if (pte_present(ptent)) {
>> +			page = vm_normal_page(vma, addr, ptent);
>> +			if (page) {
>> +				if (pte_dirty(ptent))
>> +					mapcount = -page_mapcount(page);
>> +				else
>> +					mapcount = page_mapcount(page);
>> +			} else {
>> +				mapcount = 1;
>> +			}
>> +		}
>> +		seq_printf(m, " %d", mapcount);
>> +
>> +	} while (pte++, addr += PAGE_SIZE, addr != end);
>
> Well that's cute.  As long as both seq_file and pte-pages are of size
> PAGE_SIZE, and as long as pte's are more than three bytes, this will
> not overflow the seq_file output buffer.
>
> hm.  Unless the pages are all dirty and the mapcounts are all 10000.  I
> think it will overflow then?

I guess that could happen?  Any suggestions?
* Re: new procfs memory analysis feature
  2006-12-07 22:36 ` Andrew Morton
  2006-12-08  0:30   ` david singleton
  2006-12-08  1:07   ` david singleton
@ 2006-12-08  6:21   ` Paul Cameron Davies
  2006-12-08 21:46     ` Jeremy Fitzhardinge
  2 siblings, 1 reply; 10+ messages in thread
From: Paul Cameron Davies @ 2006-12-08 6:21 UTC (permalink / raw)
To: Andrew Morton; +Cc: David Singleton, linux-kernel, linux-mm, Lee.Schermerhorn
On Thu, 7 Dec 2006, Andrew Morton wrote:

> I think that's our eighth open-coded pagetable walker.  Apparently
> they are all slightly different.  Perhaps we should do something about
> that one day.

At UNSW we have abstracted the page table into its own layer, and are
running an alternate page table (a GPT) under a clean page table
interface (PTI).  The PTI gathers all the open-coded iterators together
into one place, which would be a good precursor to providing generic
iterators for non-performance-critical iterations.

We are completing the updating/enhancements to this PTI for the latest
kernel, to be released just prior to LCA.  This PTI is benchmarking
well.  We also plan to release the experimental guarded page table
(GPT) running under this PTI.

Paul Davies
Gelato@UNSW
* Re: new procfs memory analysis feature
  2006-12-08  6:21 ` Paul Cameron Davies
@ 2006-12-08 21:46   ` Jeremy Fitzhardinge
  2006-12-11  2:19     ` Paul Cameron Davies
  0 siblings, 1 reply; 10+ messages in thread
From: Jeremy Fitzhardinge @ 2006-12-08 21:46 UTC (permalink / raw)
To: Paul Cameron Davies
Cc: Andrew Morton, David Singleton, linux-kernel, linux-mm, Lee.Schermerhorn
Paul Cameron Davies wrote:
> The PTI gathers all the open-coded iterators together into one place,
> which would be a good precursor to providing generic iterators for
> non-performance-critical iterations.
>
> We are completing the updating/enhancements to this PTI for the latest
> kernel, to be released just prior to LCA.  This PTI is benchmarking
> well.  We also plan to release the experimental guarded page table
> (GPT) running under this PTI.

I looked at implementing linear pagetable mappings for x86 as a way of
getting rid of CONFIG_HIGHPTE, and to make pagetable manipulations
generally more efficient.  I gave up on it after a while because all the
existing pagetable accessors are not suitable for a linear pagetable,
and I didn't want to have to introduce a pile of new pagetable
interfaces.  Would the PTI interface be helpful for this?

Thanks,
	J
* Re: new procfs memory analysis feature
  2006-12-08 21:46 ` Jeremy Fitzhardinge
@ 2006-12-11  2:19   ` Paul Cameron Davies
  0 siblings, 0 replies; 10+ messages in thread
From: Paul Cameron Davies @ 2006-12-11 2:19 UTC (permalink / raw)
To: Jeremy Fitzhardinge
Cc: Andrew Morton, David Singleton, linux-kernel, linux-mm, Lee.Schermerhorn
On Fri, 8 Dec 2006, Jeremy Fitzhardinge wrote:

> I looked at implementing linear pagetable mappings for x86 as a way of
> getting rid of CONFIG_HIGHPTE, and to make pagetable manipulations
> generally more efficient.  I gave up on it after a while because all
> the existing pagetable accessors are not suitable for a linear
> pagetable, and I didn't want to have to introduce a pile of new
> pagetable interfaces.  Would the PTI interface be helpful for this?

Yes.  The PTI is a useful vehicle for experimentation with page tables.
The PTI has two components.  The first component provides for
architectural and implementation independent page table access.  The
second component provides for architecture dependent access, but I have
only done this for IA64.

However, abstracting out the page table implementation for the arch
dependent stuff on x86 would enable experimentation with implementing
linear page table mappings for x86, while leaving the current
implementation in place as an alternative page table.

Cheers

Paul Davies
end of thread, other threads:[~2006-12-12 1:15 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-12-11 8:13 new procfs memory analysis feature Albert Cahalan
2006-12-12 1:15 ` Joe Green
[not found] <45789124.1070207@mvista.com>
2006-12-07 22:36 ` Andrew Morton
2006-12-08 0:30 ` david singleton
2006-12-08 1:07 ` david singleton
2006-12-08 1:46 ` Andrew Morton
2006-12-08 1:53 ` david singleton
2006-12-08 6:21 ` Paul Cameron Davies
2006-12-08 21:46 ` Jeremy Fitzhardinge
2006-12-11 2:19 ` Paul Cameron Davies