* Re: [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo
2019-10-25 7:26 ` [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo Michal Hocko
@ 2019-10-25 7:35 ` Vlastimil Babka
2019-10-25 8:21 ` David Hildenbrand
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Vlastimil Babka @ 2019-10-25 7:35 UTC (permalink / raw)
To: Michal Hocko, Andrew Morton, Mel Gorman, Waiman Long
Cc: Johannes Weiner, Roman Gushchin, Konstantin Khlebnikov,
Jann Horn, Song Liu, Greg Kroah-Hartman, Rafael Aquini, linux-mm,
LKML, Michal Hocko
On 10/25/19 9:26 AM, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
> This is not really nice because it blocks both interrupts on that
> cpu and the page allocator. On large machines this might even trigger
> the hard lockup detector.
>
> Considering that pagetypeinfo is a debugging tool, we do not really need
> exact numbers here. The primary reason to look at the output is to see
> how pageblocks are spread among different migratetypes, and a low number
> of pages is much more interesting, therefore putting a bound on the
> number of pages on the free_list sounds like a reasonable tradeoff.
>
> The new output will simply tell
> [...]
> Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
>
> instead of
> Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
>
> The limit has been chosen arbitrarily and is subject to future
> change should there be a need for that.
>
> While we are at it, also drop the zone lock after each free_list
> iteration, which will help IRQ and page allocator responsiveness
> even further, as the lock hold time is always bounded to those 100k
> pages.
>
> Suggested-by: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Waiman Long <longman@redhat.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/vmstat.c | 23 ++++++++++++++++++++---
> 1 file changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4e885ecd44d1..ddb89f4e0486 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1383,12 +1383,29 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> unsigned long freecount = 0;
> struct free_area *area;
> struct list_head *curr;
> + bool overflow = false;
>
> area = &(zone->free_area[order]);
>
> - list_for_each(curr, &area->free_list[mtype])
> - freecount++;
> - seq_printf(m, "%6lu ", freecount);
> + list_for_each(curr, &area->free_list[mtype]) {
> + /*
> + * Cap the free_list iteration because it might
> + * be really large and we are under a spinlock
> + * so a long time spent here could trigger a
> + * hard lockup detector. Anyway this is a
> + * debugging tool so knowing there is a handful
> + * of pages in this order should be more than
> + * sufficient
> + */
> + if (++freecount >= 100000) {
> + overflow = true;
> + break;
> + }
> + }
> + seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
> + spin_unlock_irq(&zone->lock);
> + cond_resched();
> + spin_lock_irq(&zone->lock);
> }
> seq_putc(m, '\n');
> }
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo
2019-10-25 7:26 ` [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo Michal Hocko
2019-10-25 7:35 ` Vlastimil Babka
@ 2019-10-25 8:21 ` David Hildenbrand
2019-10-25 12:52 ` Rafael Aquini
2019-10-25 21:08 ` David Rientjes
3 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand @ 2019-10-25 8:21 UTC (permalink / raw)
To: Michal Hocko, Andrew Morton, Mel Gorman, Waiman Long
Cc: Johannes Weiner, Roman Gushchin, Vlastimil Babka,
Konstantin Khlebnikov, Jann Horn, Song Liu, Greg Kroah-Hartman,
Rafael Aquini, linux-mm, LKML, Michal Hocko
On 25.10.19 09:26, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
> This is not really nice because it blocks both interrupts on that
> cpu and the page allocator. On large machines this might even trigger
> the hard lockup detector.
>
> Considering that pagetypeinfo is a debugging tool, we do not really need
> exact numbers here. The primary reason to look at the output is to see
> how pageblocks are spread among different migratetypes, and a low number
> of pages is much more interesting, therefore putting a bound on the
> number of pages on the free_list sounds like a reasonable tradeoff.
>
> The new output will simply tell
> [...]
> Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
>
> instead of
> Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
>
> The limit has been chosen arbitrarily and is subject to future
> change should there be a need for that.
>
> While we are at it, also drop the zone lock after each free_list
> iteration, which will help IRQ and page allocator responsiveness
> even further, as the lock hold time is always bounded to those 100k
> pages.
>
> Suggested-by: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Waiman Long <longman@redhat.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/vmstat.c | 23 ++++++++++++++++++++---
> 1 file changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4e885ecd44d1..ddb89f4e0486 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1383,12 +1383,29 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> unsigned long freecount = 0;
> struct free_area *area;
> struct list_head *curr;
> + bool overflow = false;
>
> area = &(zone->free_area[order]);
>
> - list_for_each(curr, &area->free_list[mtype])
> - freecount++;
> - seq_printf(m, "%6lu ", freecount);
> + list_for_each(curr, &area->free_list[mtype]) {
> + /*
> + * Cap the free_list iteration because it might
> + * be really large and we are under a spinlock
> + * so a long time spent here could trigger a
> + * hard lockup detector. Anyway this is a
> + * debugging tool so knowing there is a handful
> + * of pages in this order should be more than
"of this order" ?
> + * sufficient
s/sufficient/sufficient./ ?
> + */
> + if (++freecount >= 100000) {
> + overflow = true;
> + break;
> + }
> + }
> + seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
> + spin_unlock_irq(&zone->lock);
> + cond_resched();
> + spin_lock_irq(&zone->lock);
> }
> seq_putc(m, '\n');
> }
>
Acked-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
* Re: [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo
2019-10-25 7:26 ` [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo Michal Hocko
2019-10-25 7:35 ` Vlastimil Babka
2019-10-25 8:21 ` David Hildenbrand
@ 2019-10-25 12:52 ` Rafael Aquini
2019-10-25 21:08 ` David Rientjes
3 siblings, 0 replies; 9+ messages in thread
From: Rafael Aquini @ 2019-10-25 12:52 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, Mel Gorman, Waiman Long, Johannes Weiner,
Roman Gushchin, Vlastimil Babka, Konstantin Khlebnikov,
Jann Horn, Song Liu, Greg Kroah-Hartman, linux-mm, LKML,
Michal Hocko
On Fri, Oct 25, 2019 at 09:26:10AM +0200, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
> This is not really nice because it blocks both interrupts on that
> cpu and the page allocator. On large machines this might even trigger
> the hard lockup detector.
>
> Considering that pagetypeinfo is a debugging tool, we do not really need
> exact numbers here. The primary reason to look at the output is to see
> how pageblocks are spread among different migratetypes, and a low number
> of pages is much more interesting, therefore putting a bound on the
> number of pages on the free_list sounds like a reasonable tradeoff.
>
> The new output will simply tell
> [...]
> Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
>
> instead of
> Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
>
> The limit has been chosen arbitrarily and is subject to future
> change should there be a need for that.
>
> While we are at it, also drop the zone lock after each free_list
> iteration, which will help IRQ and page allocator responsiveness
> even further, as the lock hold time is always bounded to those 100k
> pages.
>
> Suggested-by: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Waiman Long <longman@redhat.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/vmstat.c | 23 ++++++++++++++++++++---
> 1 file changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4e885ecd44d1..ddb89f4e0486 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1383,12 +1383,29 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> unsigned long freecount = 0;
> struct free_area *area;
> struct list_head *curr;
> + bool overflow = false;
>
> area = &(zone->free_area[order]);
>
> - list_for_each(curr, &area->free_list[mtype])
> - freecount++;
> - seq_printf(m, "%6lu ", freecount);
> + list_for_each(curr, &area->free_list[mtype]) {
> + /*
> + * Cap the free_list iteration because it might
> + * be really large and we are under a spinlock
> + * so a long time spent here could trigger a
> + * hard lockup detector. Anyway this is a
> + * debugging tool so knowing there is a handful
> + * of pages in this order should be more than
> + * sufficient
> + */
> + if (++freecount >= 100000) {
> + overflow = true;
> + break;
> + }
> + }
> + seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
> + spin_unlock_irq(&zone->lock);
> + cond_resched();
> + spin_lock_irq(&zone->lock);
> }
> seq_putc(m, '\n');
> }
> --
> 2.20.1
>
Acked-by: Rafael Aquini <aquini@redhat.com>
* Re: [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo
2019-10-25 7:26 ` [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo Michal Hocko
` (2 preceding siblings ...)
2019-10-25 12:52 ` Rafael Aquini
@ 2019-10-25 21:08 ` David Rientjes
3 siblings, 0 replies; 9+ messages in thread
From: David Rientjes @ 2019-10-25 21:08 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, Mel Gorman, Waiman Long, Johannes Weiner,
Roman Gushchin, Vlastimil Babka, Konstantin Khlebnikov,
Jann Horn, Song Liu, Greg Kroah-Hartman, Rafael Aquini, linux-mm,
LKML, Michal Hocko
On Fri, 25 Oct 2019, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
> This is not really nice because it blocks both interrupts on that
> cpu and the page allocator. On large machines this might even trigger
> the hard lockup detector.
>
> Considering that pagetypeinfo is a debugging tool, we do not really need
> exact numbers here. The primary reason to look at the output is to see
> how pageblocks are spread among different migratetypes, and a low number
> of pages is much more interesting, therefore putting a bound on the
> number of pages on the free_list sounds like a reasonable tradeoff.
>
> The new output will simply tell
> [...]
> Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
>
> instead of
> Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
>
> The limit has been chosen arbitrarily and is subject to future
> change should there be a need for that.
>
> While we are at it, also drop the zone lock after each free_list
> iteration, which will help IRQ and page allocator responsiveness
> even further, as the lock hold time is always bounded to those 100k
> pages.
>
> Suggested-by: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Waiman Long <longman@redhat.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
I think 100k is a very reasonable threshold.
Acked-by: David Rientjes <rientjes@google.com>
> ---
> mm/vmstat.c | 23 ++++++++++++++++++++---
> 1 file changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4e885ecd44d1..ddb89f4e0486 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1383,12 +1383,29 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> unsigned long freecount = 0;
> struct free_area *area;
> struct list_head *curr;
> + bool overflow = false;
>
> area = &(zone->free_area[order]);
>
> - list_for_each(curr, &area->free_list[mtype])
> - freecount++;
> - seq_printf(m, "%6lu ", freecount);
> + list_for_each(curr, &area->free_list[mtype]) {
> + /*
> + * Cap the free_list iteration because it might
> + * be really large and we are under a spinlock
> + * so a long time spent here could trigger a
> + * hard lockup detector. Anyway this is a
> + * debugging tool so knowing there is a handful
> + * of pages in this order should be more than
> + * sufficient
> + */
> + if (++freecount >= 100000) {
I suppose it's most precise to check freecount > 100000 to print >100000,
but I doubt anybody cares :)
> + overflow = true;
> + break;
> + }
> + }
> + seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
> + spin_unlock_irq(&zone->lock);
> + cond_resched();
> + spin_lock_irq(&zone->lock);
> }
> seq_putc(m, '\n');
> }