* [PATCH v2] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
From: Wanpeng Li @ 2013-03-14 10:49 UTC
To: Andrew Morton
Cc: Michal Hocko, Aneesh Kumar K.V, Hillf Danton, KAMEZAWA Hiroyuki,
linux-mm, linux-kernel, Wanpeng Li
Changelog:
v1 -> v2:
* update patch description, spotted by Michal
hugetlb_total_pages() does not account for all the supported hugepage
sizes. This can lead to incorrect calculation of the total number of
page frames used by hugetlb. This patch corrects the issue.
Testcase:
boot: hugepagesz=1G hugepages=1
before patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit: 55434168 kB
after patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit: 54909880 kB
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
mm/hugetlb.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cdb64e4..9e25040 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
/* Return the number pages of memory we physically have, in PAGE_SIZE units. */
unsigned long hugetlb_total_pages(void)
{
- struct hstate *h = &default_hstate;
- return h->nr_huge_pages * pages_per_huge_page(h);
+ struct hstate *h;
+ unsigned long nr_total_pages = 0;
+ for_each_hstate(h)
+ nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+ return nr_total_pages;
}
static int hugetlb_acct_memory(struct hstate *h, long delta)
--
1.7.11.7
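For readers new to where this total feeds in, here is a minimal sketch of
the consumer side, condensed from the 3.x-era __vm_enough_memory() and
meminfo_proc_show() (names and error handling simplified, so treat it as
illustrative rather than the exact kernel code):

    /*
     * CommitLimit, in PAGE_SIZE units.  Hugetlb pages are reserved up
     * front and cannot back ordinary mappings, so they are subtracted
     * from the overcommit base -- this is the hugetlb_total_pages()
     * call that the patch above fixes.
     */
    unsigned long allowed;

    allowed = (totalram_pages - hugetlb_total_pages())
              * sysctl_overcommit_ratio / 100;
    allowed += total_swap_pages;  /* shown as CommitLimit in /proc/meminfo */

With only the default hstate counted, "allowed" came out too large whenever
additional hugepage sizes were configured.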
* Re: [PATCH v2] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
From: Michal Hocko @ 2013-03-14 11:09 UTC
To: Wanpeng Li
Cc: Andrew Morton, Aneesh Kumar K.V, Hillf Danton, KAMEZAWA Hiroyuki,
linux-mm, linux-kernel
On Thu 14-03-13 18:49:49, Wanpeng Li wrote:
> Changelog:
> v1 -> v2:
> * update patch description, spotted by Michal
>
> hugetlb_total_pages() does not account for all the supported hugepage
> sizes.
> This can lead to incorrect calculation of the total number of
> page frames used by hugetlb. This patch corrects the issue.
Sorry to be so picky but this doesn't tell us much. Why do we need to
have the total number of hugetlb pages?
What about the following:
"hugetlb_total_pages is used for overcommit calculations but the
current implementation considers only default hugetlb page size (which
is either the first defined hugepage size or the one specified by
default_hugepagesz kernel boot parameter).
If the system is configured for more than one hugepage size (which is
possible since a137e1cc hugetlbfs: per mount huge page sizes) then
the overcommit estimation done by __vm_enough_memory (resp. shown by
meminfo_proc_show) is not precise - there is an impression of more
available/allowed memory. This can lead to an unexpected ENOMEM/EFAULT
resp. SIGSEGV when memory is accounted."
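As a concrete illustration (hypothetical boot line, not taken from the
report above): booting x86_64 with

    hugepagesz=1G hugepages=1 hugepagesz=2M hugepages=512 default_hugepagesz=2M

registers two hstates, but the unpatched hugetlb_total_pages() looks only
at the default 2M hstate (512 pages * 512 base pages each = 1G), silently
leaving the 1G page inside the overcommit base.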
I think this is also worth pushing to the stable tree (it goes back to
2.6.27).
> Testcase:
> boot: hugepagesz=1G hugepages=1
> before patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit: 55434168 kB
> after patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit: 54909880 kB
This adds some confusion for the reader, because there is only
something like a 500M difference here without any explanation.
>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
> mm/hugetlb.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index cdb64e4..9e25040 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
> /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
> unsigned long hugetlb_total_pages(void)
> {
> - struct hstate *h = &default_hstate;
> - return h->nr_huge_pages * pages_per_huge_page(h);
> + struct hstate *h;
> + unsigned long nr_total_pages = 0;
> + for_each_hstate(h)
> + nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
> + return nr_total_pages;
> }
>
> static int hugetlb_acct_memory(struct hstate *h, long delta)
> --
> 1.7.11.7
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
--
Michal Hocko
SUSE Labs
* Re: [PATCH v2] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
From: Wanpeng Li @ 2013-03-14 11:24 UTC
To: Michal Hocko
Cc: Andrew Morton, Aneesh Kumar K.V, Hillf Danton, KAMEZAWA Hiroyuki,
linux-mm, linux-kernel
On Thu, Mar 14, 2013 at 12:09:27PM +0100, Michal Hocko wrote:
>On Thu 14-03-13 18:49:49, Wanpeng Li wrote:
>> Changelog:
>> v1 -> v2:
>> * update patch description, spotted by Michal
>>
>> hugetlb_total_pages() does not account for all the supported hugepage
>> sizes.
>
>> This can lead to incorrect calculation of the total number of
>> page frames used by hugetlb. This patch corrects the issue.
>
Hi Michal,
>Sorry to be so picky but this doesn't tell us much. Why do we need to
>have the total number of hugetlb pages?
>
>What about the following:
>"hugetlb_total_pages is used for overcommit calculations but the
>current implementation considers only default hugetlb page size (which
>is either the first defined hugepage size or the one specified by
>default_hugepagesz kernel boot parameter).
>
>If the system is configured for more than one hugepage size (which is
>possible since a137e1cc hugetlbfs: per mount huge page sizes) then
>the overcommit estimation done by __vm_enough_memory (resp. shown by
>meminfo_proc_show) is not precise - there is an impression of more
>available/allowed memory. This can lead to an unexpected ENOMEM/EFAULT
>resp. SIGSEGV when memory is accounted."
>
Fair enough, thanks. :-)
>I think this is also worth pushing to the stable tree (it goes back to
>2.6.27)
>
Yup, I will Cc Greg in the next version.
>> Testcase:
>> boot: hugepagesz=1G hugepages=1
>> before patch:
>> egrep 'CommitLimit' /proc/meminfo
>> CommitLimit: 55434168 kB
>> after patch:
>> egrep 'CommitLimit' /proc/meminfo
>> CommitLimit: 54909880 kB
>
>This adds some confusion for the reader, because there is only
>something like a 500M difference here without any explanation.
>
The default overcommit ratio is 50.
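For completeness, the numbers above line up exactly once that ratio is
applied (assuming MemTotal and swap were unchanged between the two boots):

    1G hugepage removed from the base:  1048576 kB * 50 / 100 = 524288 kB
    CommitLimit delta:             55434168 kB - 54909880 kB  = 524288 kB (512 MB)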
Regards,
Wanpeng Li
>>
>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>> ---
>> mm/hugetlb.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index cdb64e4..9e25040 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
>> /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
>> unsigned long hugetlb_total_pages(void)
>> {
>> - struct hstate *h = &default_hstate;
>> - return h->nr_huge_pages * pages_per_huge_page(h);
>> + struct hstate *h;
>> + unsigned long nr_total_pages = 0;
>> + for_each_hstate(h)
>> + nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
>> + return nr_total_pages;
>> }
>>
>> static int hugetlb_acct_memory(struct hstate *h, long delta)
>> --
>> 1.7.11.7
>>
>> --
>> To unsubscribe, send a message with 'unsubscribe linux-mm' in
>> the body to majordomo@kvack.org. For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
>
>--
>Michal Hocko
>SUSE Labs
* Re: [PATCH v2] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
From: Michal Hocko @ 2013-03-14 12:58 UTC
To: Wanpeng Li
Cc: Andrew Morton, Aneesh Kumar K.V, Hillf Danton, KAMEZAWA Hiroyuki,
linux-mm, linux-kernel
On Thu 14-03-13 19:24:11, Wanpeng Li wrote:
> On Thu, Mar 14, 2013 at 12:09:27PM +0100, Michal Hocko wrote:
> >On Thu 14-03-13 18:49:49, Wanpeng Li wrote:
> >> Changelog:
> >> v1 -> v2:
> >> * update patch description, spotted by Michal
> >>
> >> hugetlb_total_pages() does not account for all the supported hugepage
> >> sizes.
> >
> >> This can lead to incorrect calculation of the total number of
> >> page frames used by hugetlb. This patch corrects the issue.
> >
>
> Hi Michal,
>
> >Sorry to be so picky but this doesn't tell us much. Why do we need to
> >have the total number of hugetlb pages?
> >
> >What about the following:
> >"hugetlb_total_pages is used for overcommit calculations but the
> >current implementation considers only default hugetlb page size (which
> >is either the first defined hugepage size or the one specified by
> >default_hugepagesz kernel boot parameter).
> >
> >If the system is configured for more than one hugepage size (which is
> >possible since a137e1cc hugetlbfs: per mount huge page sizes) then
> >the overcommit estimation done by __vm_enough_memory (resp. shown by
> >meminfo_proc_show) is not precise - there is an impression of more
> >available/allowed memory. This can lead to an unexpected ENOMEM/EFAULT
> >resp. SIGSEGV when memory is accounted."
> >
>
> Fair enough, thanks. :-)
>
> >I think this is also worth pushing to the stable tree (it goes back to
> >2.6.27)
> >
>
> Yup, I will Cc Greg in the next version.
Ccing Greg doesn't help. All that is required is:
Cc: stable@vger.kernel.org # 2.6.27+
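That is, next to the existing tags in the changelog, e.g.:

    Cc: stable@vger.kernel.org # 2.6.27+
    Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>

The stable maintainers pick such patches up automatically once they land
in Linus' tree.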
> >> Testcase:
> >> boot: hugepagesz=1G hugepages=1
> >> before patch:
> >> egrep 'CommitLimit' /proc/meminfo
> >> CommitLimit: 55434168 kB
> >> after patch:
> >> egrep 'CommitLimit' /proc/meminfo
> >> CommitLimit: 54909880 kB
> >
> >This adds some confusion for the reader, because there is only
> >something like a 500M difference here without any explanation.
> >
>
> the default overcommit ratio is 50.
And that part was missing in the description...
[...]
--
Michal Hocko
SUSE Labs