linux-mm.kvack.org archive mirror
* [PATCH] mm: hugetlb: fix softlockup when a large number of hugepages are freed.
@ 2014-03-31 10:43 Mizuma, Masayoshi
  2014-03-31 15:02 ` Naoya Horiguchi
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Mizuma, Masayoshi @ 2014-03-31 10:43 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Joonsoo Kim, Michal Hocko, Wanpeng Li,
	Aneesh Kumar, KOSAKI Motohiro

Hi,

When I decrease the value of nr_hugepages in procfs by a large amount, a
softlockup occurs, because there is no chance for a context switch during
the freeing process.

On the other hand, when I allocate a large number of hugepages, there are
chances for a context switch, so a softlockup does not happen during
allocation. It is therefore necessary to add a context switch point to the
freeing path, as the allocation path already has one, to avoid the
softlockup.
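
For reference, the context switch point used in the patch below is
cond_resched_lock(). A minimal sketch of its effect (simplified here; the
exact upstream implementation differs in detail):

	/*
	 * If a reschedule is pending or another CPU is spinning on the
	 * lock, drop the spinlock, let other tasks run, then reacquire.
	 * Returns 1 if the lock was dropped, 0 otherwise.
	 */
	int cond_resched_lock(spinlock_t *lock)
	{
		if (spin_needbreak(lock) || need_resched()) {
			spin_unlock(lock);
			cond_resched();		/* the actual switch point */
			spin_lock(lock);
			return 1;
		}
		return 0;
	}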

When I freed 12 TB of hugepages with kernel-2.6.32-358.el6, the freeing
process occupied a CPU for over 150 seconds and the following softlockup
message appeared two or more times.

--
$ echo 6000000 > /proc/sys/vm/nr_hugepages
$ cat /proc/sys/vm/nr_hugepages
6000000
$ grep ^Huge /proc/meminfo
HugePages_Total:   6000000
HugePages_Free:    6000000
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
$ echo 0 > /proc/sys/vm/nr_hugepages

BUG: soft lockup - CPU#16 stuck for 67s! [sh:12883] ...
Pid: 12883, comm: sh Not tainted 2.6.32-358.el6.x86_64 #1
Call Trace:
 [<ffffffff8115a438>] ? free_pool_huge_page+0xb8/0xd0
 [<ffffffff8115a578>] ? set_max_huge_pages+0x128/0x190
 [<ffffffff8115c663>] ? hugetlb_sysctl_handler_common+0x113/0x140
 [<ffffffff8115c6de>] ? hugetlb_sysctl_handler+0x1e/0x20
 [<ffffffff811f3097>] ? proc_sys_call_handler+0x97/0xd0
 [<ffffffff811f30e4>] ? proc_sys_write+0x14/0x20
 [<ffffffff81180f98>] ? vfs_write+0xb8/0x1a0
 [<ffffffff81181891>] ? sys_write+0x51/0x90
 [<ffffffff810dc565>] ? __audit_syscall_exit+0x265/0x290
 [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
--
I have not confirmed this problem with upstream kernels because I cannot
currently prepare a machine equipped with 12 TB of memory. However, I
confirmed that the required time is directly proportional to the number
of hugepages freed.

I measured the required time on a smaller machine. It freed 130-145
hugepages per millisecond.

Hugepages freed          Required time      Freeing rate
                              (msec)         (pages/msec)
------------------------------------------------------------
10,000 pages == 20GB         70 -  74          135-142
30,000 pages == 60GB        208 - 229          131-144

At this rate, freeing 6 TB of hugepages is enough to trigger a softlockup
with the default threshold of 20 seconds.
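
A back-of-the-envelope check with the measured rate: 6 TB of 2 MB hugepages
is 3,145,728 pages, and 3,145,728 pages / ~135 pages/msec ~= 23.3 seconds
of uninterrupted freeing, which exceeds the 20-second watchdog threshold.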

Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 mm/hugetlb.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7d57af2..fe67f2c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1535,6 +1535,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
 	while (min_count < persistent_huge_pages(h)) {
 		if (!free_pool_huge_page(h, nodes_allowed, 0))
 			break;
+		cond_resched_lock(&hugetlb_lock);
 	}
 	while (count < persistent_huge_pages(h)) {
 		if (!adjust_pool_surplus(h, nodes_allowed, 1))
-- 
1.7.1

Thanks,
Masayoshi Mizuma


* Re: [PATCH] mm: hugetlb: fix softlockup when a large number of hugepages are freed.
  2014-03-31 10:43 [PATCH] mm: hugetlb: fix softlockup when a large number of hugepages are freed Mizuma, Masayoshi
@ 2014-03-31 15:02 ` Naoya Horiguchi
  2014-03-31 19:30 ` Andrew Morton
       [not found] ` <1396278140-k1hmxq77@n-horiguchi@ah.jp.nec.com>
  2 siblings, 0 replies; 5+ messages in thread
From: Naoya Horiguchi @ 2014-03-31 15:02 UTC (permalink / raw)
  To: m.mizuma
  Cc: linux-mm, akpm, iamjoonsoo.kim, mhocko, liwanp, aneesh.kumar,
	kosaki.motohiro

On Mon, Mar 31, 2014 at 07:43:32PM +0900, Mizuma, Masayoshi wrote:
> Hi,
> 
> When I decrease the value of nr_hugepages in procfs by a large amount, a
> softlockup occurs, because there is no chance for a context switch during
> the freeing process.
> 
> ...
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 7d57af2..fe67f2c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1535,6 +1535,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
>  	while (min_count < persistent_huge_pages(h)) {
>  		if (!free_pool_huge_page(h, nodes_allowed, 0))
>  			break;
> +		cond_resched_lock(&hugetlb_lock);
>  	}
>  	while (count < persistent_huge_pages(h)) {
>  		if (!adjust_pool_surplus(h, nodes_allowed, 1))

It seems that the same thing could happen when freeing a large number of
surplus pages, so how about adding cond_resched_lock() in
return_unused_surplus_pages() as well?

Thanks,
Naoya Horiguchi


* Re: [PATCH] mm: hugetlb: fix softlockup when a large number of hugepages are freed.
  2014-03-31 10:43 [PATCH] mm: hugetlb: fix softlockup when a large number of hugepages are freed Mizuma, Masayoshi
  2014-03-31 15:02 ` Naoya Horiguchi
@ 2014-03-31 19:30 ` Andrew Morton
  2014-04-01  7:02   ` Masayoshi Mizuma
       [not found] ` <1396278140-k1hmxq77@n-horiguchi@ah.jp.nec.com>
  2 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2014-03-31 19:30 UTC (permalink / raw)
  To: Mizuma, Masayoshi
  Cc: linux-mm, Joonsoo Kim, Michal Hocko, Wanpeng Li, Aneesh Kumar,
	KOSAKI Motohiro

On Mon, 31 Mar 2014 19:43:32 +0900 "Mizuma, Masayoshi" <m.mizuma@jp.fujitsu.com> wrote:

> Hi,
> 
> When I decrease the value of nr_hugepages in procfs by a large amount, a
> softlockup occurs, because there is no chance for a context switch during
> the freeing process.
> 
> On the other hand, when I allocate a large number of hugepages, there are
> chances for a context switch, so a softlockup does not happen during
> allocation. It is therefore necessary to add a context switch point to the
> freeing path, as the allocation path already has one, to avoid the
> softlockup.
> 
> ...
>
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1535,6 +1535,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
>  	while (min_count < persistent_huge_pages(h)) {
>  		if (!free_pool_huge_page(h, nodes_allowed, 0))
>  			break;
> +		cond_resched_lock(&hugetlb_lock);
>  	}
>  	while (count < persistent_huge_pages(h)) {
>  		if (!adjust_pool_surplus(h, nodes_allowed, 1))

Are you sure we don't need a cond_resched_lock() in this second loop as
well?

Let's bear in mind the objective here: it is to avoid long scheduling
stalls, not to prevent softlockup-detector warnings.  A piece of code
which doesn't trip the lockup detector can still be a problem.



* Re: [PATCH] mm: hugetlb: fix softlockup when a large number of hugepages are freed.
       [not found] ` <1396278140-k1hmxq77@n-horiguchi@ah.jp.nec.com>
@ 2014-04-01  6:53   ` Masayoshi Mizuma
  0 siblings, 0 replies; 5+ messages in thread
From: Masayoshi Mizuma @ 2014-04-01  6:53 UTC (permalink / raw)
  To: Naoya Horiguchi
  Cc: linux-mm, akpm, iamjoonsoo.kim, mhocko, liwanp, aneesh.kumar,
	kosaki.motohiro

Hi, 

On Mon, 31 Mar 2014 11:02:20 -0400 Naoya Horiguchi wrote:
> On Mon, Mar 31, 2014 at 07:43:32PM +0900, Mizuma, Masayoshi wrote:
>> Hi,
>>
>> When I decrease the value of nr_hugepages in procfs by a large amount, a
>> softlockup occurs, because there is no chance for a context switch during
>> the freeing process.
>>
>> ...
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 7d57af2..fe67f2c 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1535,6 +1535,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
>>   	while (min_count < persistent_huge_pages(h)) {
>>   		if (!free_pool_huge_page(h, nodes_allowed, 0))
>>   			break;
>> +		cond_resched_lock(&hugetlb_lock);
>>   	}
>>   	while (count < persistent_huge_pages(h)) {
>>   		if (!adjust_pool_surplus(h, nodes_allowed, 1))
> 
> It seems that the same thing could happen when freeing a large number of
> surplus pages, so how about adding cond_resched_lock() in
> return_unused_surplus_pages() as well?

Thank you for pointing that out!
I will also add cond_resched_lock() to the following loop in
return_unused_surplus_pages().

static void return_unused_surplus_pages(struct hstate *h,
                                        unsigned long unused_resv_pages)
{
<cut>
        while (nr_pages--) {
                if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1))
                        break;
        }
}
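
That is, presumably the same one-line change as in set_max_huge_pages()
(a sketch against the loop above):

        while (nr_pages--) {
                if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1))
                        break;
                /* drop hugetlb_lock and reschedule if needed */
                cond_resched_lock(&hugetlb_lock);
        }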

Thanks,
Masayoshi Mizuma



* Re: [PATCH] mm: hugetlb: fix softlockup when a large number of hugepages are freed.
  2014-03-31 19:30 ` Andrew Morton
@ 2014-04-01  7:02   ` Masayoshi Mizuma
  0 siblings, 0 replies; 5+ messages in thread
From: Masayoshi Mizuma @ 2014-04-01  7:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, Joonsoo Kim, Michal Hocko, Wanpeng Li, Aneesh Kumar,
	KOSAKI Motohiro

On Mon, 31 Mar 2014 12:30:28 -0700 Andrew Morton wrote:
> On Mon, 31 Mar 2014 19:43:32 +0900 "Mizuma, Masayoshi" <m.mizuma@jp.fujitsu.com> wrote:
>
>> Hi,
>>
>> When I decrease the value of nr_hugepages in procfs by a large amount, a
>> softlockup occurs, because there is no chance for a context switch during
>> the freeing process.
>>
>> On the other hand, when I allocate a large number of hugepages, there are
>> chances for a context switch, so a softlockup does not happen during
>> allocation. It is therefore necessary to add a context switch point to the
>> freeing path, as the allocation path already has one, to avoid the
>> softlockup.
>>
>> ...
>>
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1535,6 +1535,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
>>   	while (min_count < persistent_huge_pages(h)) {
>>   		if (!free_pool_huge_page(h, nodes_allowed, 0))
>>   			break;
>> +		cond_resched_lock(&hugetlb_lock);
>>   	}
>>   	while (count < persistent_huge_pages(h)) {
>>   		if (!adjust_pool_surplus(h, nodes_allowed, 1))
>
> Are you sure we don't need a cond_resched_lock() in this second loop as
> well?

We don't need a cond_resched_lock() in the second loop, because the long
scheduling stalls are caused by freeing hugepages in free_pool_huge_page(),
and that freeing is heavy. adjust_pool_surplus(), which is called in the
second loop, is not heavy, I believe.
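
For reference, a rough sketch of why that loop is cheap:
adjust_pool_surplus() only moves per-node surplus counters and never frees
a page (simplified sketch; the real function also iterates over
nodes_allowed to pick a suitable node):

	static int adjust_pool_surplus(struct hstate *h,
				       nodemask_t *nodes_allowed, int delta)
	{
		int node = first_node(*nodes_allowed);	/* placeholder choice */

		h->surplus_huge_pages += delta;
		h->surplus_huge_pages_node[node] += delta;
		return 1;	/* pure bookkeeping, no page is freed */
	}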

>
> Let's bear in mind the objective here: it is to avoid long scheduling
> stalls, not to prevent softlockup-detector warnings.  A piece of code
> which doesn't trip the lockup detector can still be a problem.

I see, thank you!

Thanks,
Masayoshi Mizuma

