From: Michal Hocko <mhocko@suse.cz>
To: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: hide per-cpu lists in output of show_mem()
Date: Mon, 23 Feb 2015 15:37:46 +0100 [thread overview]
Message-ID: <20150223143746.GG24272@dhcp22.suse.cz> (raw)
In-Reply-To: <20150220143942.19568.4548.stgit@buzz>
On Fri 20-02-15 17:39:42, Konstantin Khlebnikov wrote:
> This makes show_mem() much less verbose on huge machines. Instead of a
> huge and almost useless dump of counters for each per-zone per-cpu
> list, this patch prints the sum of these counters for each zone (free_pcp)
> and the size of the per-cpu list for the current cpu (local_pcp).
I like this! I do not remember ever finding this information useful
while debugging either an allocation failure warning or an OOM killer
report.
> The SHOW_MEM_PERCPU_LISTS flag restores the old verbose mode.
Nobody seems to be using this flag, so why bother?
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Michal Hocko <mhocko@suse.cz>
> ---
> include/linux/mm.h | 1 +
> mm/page_alloc.c | 32 +++++++++++++++++++++++++-------
> 2 files changed, 26 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 028565a..0538de0 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1126,6 +1126,7 @@ extern void pagefault_out_of_memory(void);
> * various contexts.
> */
> #define SHOW_MEM_FILTER_NODES (0x0001u) /* disallowed nodes */
> +#define SHOW_MEM_PERCPU_LISTS (0x0002u) /* per-zone per-cpu */
>
> extern void show_free_areas(unsigned int flags);
> extern bool skip_free_areas_node(unsigned int flags, int nid);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a47f0b2..e591f3b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3198,20 +3198,29 @@ static void show_migration_types(unsigned char type)
> */
> void show_free_areas(unsigned int filter)
> {
> + unsigned long free_pcp = 0;
> int cpu;
> struct zone *zone;
>
> for_each_populated_zone(zone) {
> if (skip_free_areas_node(filter, zone_to_nid(zone)))
> continue;
> - show_node(zone);
> - printk("%s per-cpu:\n", zone->name);
> +
> + if (filter & SHOW_MEM_PERCPU_LISTS) {
> + show_node(zone);
> + printk("%s per-cpu:\n", zone->name);
> + }
>
> for_each_online_cpu(cpu) {
> struct per_cpu_pageset *pageset;
>
> pageset = per_cpu_ptr(zone->pageset, cpu);
>
> + free_pcp += pageset->pcp.count;
> +
> + if (!(filter & SHOW_MEM_PERCPU_LISTS))
> + continue;
> +
> printk("CPU %4d: hi:%5d, btch:%4d usd:%4d\n",
> cpu, pageset->pcp.high,
> pageset->pcp.batch, pageset->pcp.count);
> @@ -3220,11 +3229,10 @@ void show_free_areas(unsigned int filter)
>
> printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
> " active_file:%lu inactive_file:%lu isolated_file:%lu\n"
> - " unevictable:%lu"
> - " dirty:%lu writeback:%lu unstable:%lu\n"
> - " free:%lu slab_reclaimable:%lu slab_unreclaimable:%lu\n"
> + " unevictable:%lu dirty:%lu writeback:%lu unstable:%lu\n"
> + " slab_reclaimable:%lu slab_unreclaimable:%lu\n"
> " mapped:%lu shmem:%lu pagetables:%lu bounce:%lu\n"
> - " free_cma:%lu\n",
> + " free:%lu free_pcp:%lu free_cma:%lu\n",
> global_page_state(NR_ACTIVE_ANON),
> global_page_state(NR_INACTIVE_ANON),
> global_page_state(NR_ISOLATED_ANON),
> @@ -3235,13 +3243,14 @@ void show_free_areas(unsigned int filter)
> global_page_state(NR_FILE_DIRTY),
> global_page_state(NR_WRITEBACK),
> global_page_state(NR_UNSTABLE_NFS),
> - global_page_state(NR_FREE_PAGES),
> global_page_state(NR_SLAB_RECLAIMABLE),
> global_page_state(NR_SLAB_UNRECLAIMABLE),
> global_page_state(NR_FILE_MAPPED),
> global_page_state(NR_SHMEM),
> global_page_state(NR_PAGETABLE),
> global_page_state(NR_BOUNCE),
> + global_page_state(NR_FREE_PAGES),
> + free_pcp,
> global_page_state(NR_FREE_CMA_PAGES));
>
> for_each_populated_zone(zone) {
> @@ -3249,6 +3258,11 @@ void show_free_areas(unsigned int filter)
>
> if (skip_free_areas_node(filter, zone_to_nid(zone)))
> continue;
> +
> + free_pcp = 0;
> + for_each_online_cpu(cpu)
> + free_pcp += per_cpu_ptr(zone->pageset, cpu)->pcp.count;
> +
> show_node(zone);
> printk("%s"
> " free:%lukB"
> @@ -3275,6 +3289,8 @@ void show_free_areas(unsigned int filter)
> " pagetables:%lukB"
> " unstable:%lukB"
> " bounce:%lukB"
> + " free_pcp:%lukB"
> + " local_pcp:%ukB"
> " free_cma:%lukB"
> " writeback_tmp:%lukB"
> " pages_scanned:%lu"
> @@ -3306,6 +3322,8 @@ void show_free_areas(unsigned int filter)
> K(zone_page_state(zone, NR_PAGETABLE)),
> K(zone_page_state(zone, NR_UNSTABLE_NFS)),
> K(zone_page_state(zone, NR_BOUNCE)),
> + K(free_pcp),
> + K(this_cpu_read(zone->pageset->pcp.count)),
> K(zone_page_state(zone, NR_FREE_CMA_PAGES)),
> K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
> K(zone_page_state(zone, NR_PAGES_SCANNED)),
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org
--
Michal Hocko
SUSE Labs