* [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
@ 2017-11-14 12:50 Roman Gushchin
2017-11-14 13:17 ` Michal Hocko
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Roman Gushchin @ 2017-11-14 12:50 UTC (permalink / raw)
To: linux-mm
Cc: Roman Gushchin, Andrew Morton, Michal Hocko, Johannes Weiner,
Mike Kravetz, Aneesh Kumar K.V, Andrea Arcangeli, Dave Hansen,
kernel-team, linux-kernel
Currently we display some hugepage statistics (total, free, etc.)
in /proc/meminfo, but only for the default hugepage size (e.g. 2MB).

If hugepages of different sizes are used (like 2MB and 1GB on x86-64),
the /proc/meminfo output can be confusing, as non-default sized hugepages
are not reflected at all, and there is no indication that they exist
and consume system memory.

To solve this problem, let's display the total amount of memory
consumed by hugetlb pages of all sizes (both free and used).
Let's call it "Hugetlb", and display the size in kB to match the
generic /proc/meminfo style.

For example (1024 2MB pages and 2 1GB pages are pre-allocated):
$ cat /proc/meminfo
MemTotal: 8168984 kB
MemFree: 3789276 kB
<...>
CmaFree: 0 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 4194304 kB
DirectMap4k: 32632 kB
DirectMap2M: 4161536 kB
DirectMap1G: 6291456 kB
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: kernel-team@fb.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
mm/hugetlb.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4b3bbd2980bb..1a65f8482282 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2974,6 +2974,8 @@ int hugetlb_overcommit_handler(struct ctl_table *table, int write,
void hugetlb_report_meminfo(struct seq_file *m)
{
struct hstate *h = &default_hstate;
+ unsigned long total = 0;
+
if (!hugepages_supported())
return;
seq_printf(m,
@@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
h->resv_huge_pages,
h->surplus_huge_pages,
1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
+
+ for_each_hstate(h)
+ total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
+
+ seq_printf(m, "Hugetlb: %8lu kB\n", total / 1024);
}
int hugetlb_report_node_meminfo(int nid, char *buf)
--
2.13.6
* Re: [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
2017-11-14 12:50 [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo Roman Gushchin
@ 2017-11-14 13:17 ` Michal Hocko
2017-11-14 22:28 ` David Rientjes
2017-11-14 21:07 ` Johannes Weiner
2017-11-14 21:10 ` Dave Hansen
2 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2017-11-14 13:17 UTC (permalink / raw)
To: Roman Gushchin
Cc: linux-mm, Andrew Morton, Johannes Weiner, Mike Kravetz,
Aneesh Kumar K.V, Andrea Arcangeli, Dave Hansen, kernel-team,
linux-kernel
On Tue 14-11-17 12:50:26, Roman Gushchin wrote:
> Currently we display some hugepage statistics (total, free, etc.)
> in /proc/meminfo, but only for the default hugepage size (e.g. 2MB).
>
> If hugepages of different sizes are used (like 2MB and 1GB on x86-64),
> the /proc/meminfo output can be confusing, as non-default sized hugepages
> are not reflected at all, and there is no indication that they exist
> and consume system memory.
>
> To solve this problem, let's display the total amount of memory
> consumed by hugetlb pages of all sizes (both free and used).
> Let's call it "Hugetlb", and display the size in kB to match the
> generic /proc/meminfo style.
>
> For example (1024 2MB pages and 2 1GB pages are pre-allocated):
> $ cat /proc/meminfo
> MemTotal: 8168984 kB
> MemFree: 3789276 kB
> <...>
> CmaFree: 0 kB
> HugePages_Total: 1024
> HugePages_Free: 1024
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> Hugetlb: 4194304 kB
> DirectMap4k: 32632 kB
> DirectMap2M: 4161536 kB
> DirectMap1G: 6291456 kB
>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: kernel-team@fb.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
/proc/meminfo is paved with mistakes throughout its history. It pretends
to give a good picture of the memory usage, yet we have many pointless
entries while large consumers are not reflected at all in many cases.

Hugetlb data in such great detail shouldn't have been exported in the
first place when it reflects only one specific hugepage size. I would
argue that if somebody went to the trouble of configuring non-default
hugetlb page sizes then the sysfs stats would be the obvious place to
look. Anyway, I can see that the cumulative information might be helpful
for those who do not own the machine but merely debug an issue, which is
the primary usecase for the file.

That being said, I am not really happy to add more to the file, but it
is too late to fix that now. The patch is non-intrusive.

Acked-by: Michal Hocko <mhocko@suse.com>

One nit below.
> ---
> mm/hugetlb.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4b3bbd2980bb..1a65f8482282 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2974,6 +2974,8 @@ int hugetlb_overcommit_handler(struct ctl_table *table, int write,
> void hugetlb_report_meminfo(struct seq_file *m)
> {
> struct hstate *h = &default_hstate;
> + unsigned long total = 0;
> +
> if (!hugepages_supported())
> return;
> seq_printf(m,
> @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> h->resv_huge_pages,
> h->surplus_huge_pages,
> 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> +
> + for_each_hstate(h)
> + total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
Please keep the total calculation consistent with what we have there
already.
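For instance (illustrative only, not a requirement on the exact form),
reusing the kB shift already applied to Hugepagesize above would look like:

	for_each_hstate(h)
		total += h->nr_huge_pages <<
			 (huge_page_order(h) + PAGE_SHIFT - 10);

so the extra division by 1024 at print time wouldn't be needed.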
> +
> + seq_printf(m, "Hugetlb: %8lu kB\n", total / 1024);
> }
>
> int hugetlb_report_node_meminfo(int nid, char *buf)
> --
> 2.13.6
>
--
Michal Hocko
SUSE Labs
* Re: [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
2017-11-14 12:50 [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo Roman Gushchin
2017-11-14 13:17 ` Michal Hocko
@ 2017-11-14 21:07 ` Johannes Weiner
2017-11-14 21:10 ` Dave Hansen
2 siblings, 0 replies; 8+ messages in thread
From: Johannes Weiner @ 2017-11-14 21:07 UTC (permalink / raw)
To: Roman Gushchin
Cc: linux-mm, Andrew Morton, Michal Hocko, Mike Kravetz,
Aneesh Kumar K.V, Andrea Arcangeli, Dave Hansen, kernel-team,
linux-kernel
On Tue, Nov 14, 2017 at 12:50:26PM +0000, Roman Gushchin wrote:
> Currently we display some hugepage statistics (total, free, etc.)
> in /proc/meminfo, but only for the default hugepage size (e.g. 2MB).
>
> If hugepages of different sizes are used (like 2MB and 1GB on x86-64),
> the /proc/meminfo output can be confusing, as non-default sized hugepages
> are not reflected at all, and there is no indication that they exist
> and consume system memory.
>
> To solve this problem, let's display the total amount of memory
> consumed by hugetlb pages of all sizes (both free and used).
> Let's call it "Hugetlb", and display the size in kB to match the
> generic /proc/meminfo style.
>
> For example (1024 2MB pages and 2 1GB pages are pre-allocated):
> $ cat /proc/meminfo
> MemTotal: 8168984 kB
> MemFree: 3789276 kB
> <...>
> CmaFree: 0 kB
> HugePages_Total: 1024
> HugePages_Free: 1024
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> Hugetlb: 4194304 kB
> DirectMap4k: 32632 kB
> DirectMap2M: 4161536 kB
> DirectMap1G: 6291456 kB
>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: kernel-team@fb.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
2017-11-14 12:50 [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo Roman Gushchin
2017-11-14 13:17 ` Michal Hocko
2017-11-14 21:07 ` Johannes Weiner
@ 2017-11-14 21:10 ` Dave Hansen
2 siblings, 0 replies; 8+ messages in thread
From: Dave Hansen @ 2017-11-14 21:10 UTC (permalink / raw)
To: Roman Gushchin, linux-mm
Cc: Andrew Morton, Michal Hocko, Johannes Weiner, Mike Kravetz,
Aneesh Kumar K.V, Andrea Arcangeli, kernel-team, linux-kernel
Do we get an update for Documentation/vm/hugetlbpage.txt to spell out
what our shiny, new and intentionally-ambiguous entry is supposed to
mean and be used for?
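Something along these lines, say (the wording here is purely illustrative):

    Hugetlb:    total amount of memory (in kB) consumed by huge pages of
                all sizes, both free and in use

would at least make the relationship to the existing HugePages_* fields
explicit.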
* Re: [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
2017-11-14 13:17 ` Michal Hocko
@ 2017-11-14 22:28 ` David Rientjes
2017-11-15 8:18 ` Michal Hocko
0 siblings, 1 reply; 8+ messages in thread
From: David Rientjes @ 2017-11-14 22:28 UTC (permalink / raw)
To: Michal Hocko
Cc: Roman Gushchin, linux-mm, Andrew Morton, Johannes Weiner,
Mike Kravetz, Aneesh Kumar K.V, Andrea Arcangeli, Dave Hansen,
kernel-team, linux-kernel
On Tue, 14 Nov 2017, Michal Hocko wrote:
> > Currently we display some hugepage statistics (total, free, etc.)
> > in /proc/meminfo, but only for the default hugepage size (e.g. 2MB).
> >
> > If hugepages of different sizes are used (like 2MB and 1GB on x86-64),
> > the /proc/meminfo output can be confusing, as non-default sized hugepages
> > are not reflected at all, and there is no indication that they exist
> > and consume system memory.
> >
> > To solve this problem, let's display the total amount of memory
> > consumed by hugetlb pages of all sizes (both free and used).
> > Let's call it "Hugetlb", and display the size in kB to match the
> > generic /proc/meminfo style.
> >
> > For example (1024 2MB pages and 2 1GB pages are pre-allocated):
> > $ cat /proc/meminfo
> > MemTotal: 8168984 kB
> > MemFree: 3789276 kB
> > <...>
> > CmaFree: 0 kB
> > HugePages_Total: 1024
> > HugePages_Free: 1024
> > HugePages_Rsvd: 0
> > HugePages_Surp: 0
> > Hugepagesize: 2048 kB
> > Hugetlb: 4194304 kB
> > DirectMap4k: 32632 kB
> > DirectMap2M: 4161536 kB
> > DirectMap1G: 6291456 kB
> >
> > Signed-off-by: Roman Gushchin <guro@fb.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Michal Hocko <mhocko@suse.com>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Mike Kravetz <mike.kravetz@oracle.com>
> > Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> > Cc: Andrea Arcangeli <aarcange@redhat.com>
> > Cc: Dave Hansen <dave.hansen@intel.com>
> > Cc: kernel-team@fb.com
> > Cc: linux-mm@kvack.org
> > Cc: linux-kernel@vger.kernel.org
Acked-by: David Rientjes <rientjes@google.com>
> /proc/meminfo is paved with mistakes throughout its history. It pretends
> to give a good picture of the memory usage, yet we have many pointless
> entries while large consumers are not reflected at all in many cases.
>
> Hugetlb data in such great detail shouldn't have been exported in the
> first place when it reflects only one specific hugepage size. I would
> argue that if somebody went to the trouble of configuring non-default
> hugetlb page sizes then the sysfs stats would be the obvious place to
> look. Anyway, I can see that the cumulative information might be helpful
> for those who do not own the machine but merely debug an issue, which is
> the primary usecase for the file.
>
I agree in principle, but I think it's inevitable on projects that span
decades and accumulate features that evolve over time.
> > ---
> > mm/hugetlb.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 4b3bbd2980bb..1a65f8482282 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2974,6 +2974,8 @@ int hugetlb_overcommit_handler(struct ctl_table *table, int write,
> > void hugetlb_report_meminfo(struct seq_file *m)
> > {
> > struct hstate *h = &default_hstate;
> > + unsigned long total = 0;
> > +
> > if (!hugepages_supported())
> > return;
> > seq_printf(m,
> > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > h->resv_huge_pages,
> > h->surplus_huge_pages,
> > 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> > +
> > + for_each_hstate(h)
> > + total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
>
> Please keep the total calculation consistent with what we have there
> already.
>
Yeah, and I'm not sure if your comment alludes to this being racy, but it
would be better to store the default size for default_hstate during the
iteration that totals the size for all hstates.
> > +
> > + seq_printf(m, "Hugetlb: %8lu kB\n", total / 1024);
> > }
> >
> > int hugetlb_report_node_meminfo(int nid, char *buf)
* Re: [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
2017-11-14 22:28 ` David Rientjes
@ 2017-11-15 8:18 ` Michal Hocko
2017-11-15 22:46 ` David Rientjes
0 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2017-11-15 8:18 UTC (permalink / raw)
To: David Rientjes
Cc: Roman Gushchin, linux-mm, Andrew Morton, Johannes Weiner,
Mike Kravetz, Aneesh Kumar K.V, Andrea Arcangeli, Dave Hansen,
kernel-team, linux-kernel
On Tue 14-11-17 14:28:11, David Rientjes wrote:
[...]
> > /proc/meminfo is paved with mistakes throughout its history. It pretends
> > to give a good picture of the memory usage, yet we have many pointless
> > entries while large consumers are not reflected at all in many cases.
> >
> > Hugetlb data in such great detail shouldn't have been exported in the
> > first place when it reflects only one specific hugepage size. I would
> > argue that if somebody went to the trouble of configuring non-default
> > hugetlb page sizes then the sysfs stats would be the obvious place to
> > look. Anyway, I can see that the cumulative information might be helpful
> > for those who do not own the machine but merely debug an issue, which is
> > the primary usecase for the file.
> >
>
> I agree in principle, but I think it's inevitable on projects that span
> decades and accumulate features that evolve over time.
Yes, this is acceptable in earlier stages but I believe we have reached
a mature state where we shouldn't repeat those mistakes.
[...]
> > > if (!hugepages_supported())
> > > return;
> > > seq_printf(m,
> > > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > > h->resv_huge_pages,
> > > h->surplus_huge_pages,
> > > 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> > > +
> > > + for_each_hstate(h)
> > > + total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
> >
> > Please keep the total calculation consistent with what we have there
> > already.
> >
>
> Yeah, and I'm not sure if your comment alludes to this being racy, but it
> would be better to store the default size for default_hstate during the
> iteration that totals the size for all hstates.
I just meant to have the code consistent. I do not prefer one or the
other option.
--
Michal Hocko
SUSE Labs
* Re: [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
2017-11-15 8:18 ` Michal Hocko
@ 2017-11-15 22:46 ` David Rientjes
2017-11-15 22:49 ` Roman Gushchin
0 siblings, 1 reply; 8+ messages in thread
From: David Rientjes @ 2017-11-15 22:46 UTC (permalink / raw)
To: Michal Hocko
Cc: Roman Gushchin, linux-mm, Andrew Morton, Johannes Weiner,
Mike Kravetz, Aneesh Kumar K.V, Andrea Arcangeli, Dave Hansen,
kernel-team, linux-kernel
On Wed, 15 Nov 2017, Michal Hocko wrote:
> > > > if (!hugepages_supported())
> > > > return;
> > > > seq_printf(m,
> > > > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > > > h->resv_huge_pages,
> > > > h->surplus_huge_pages,
> > > > 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> > > > +
> > > > + for_each_hstate(h)
> > > > + total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
> > >
> > > Please keep the total calculation consistent with what we have there
> > > already.
> > >
> >
> > Yeah, and I'm not sure if your comment alludes to this being racy, but it
> > would be better to store the default size for default_hstate during the
> > iteration that totals the size for all hstates.
>
> I just meant to have the code consistent. I do not prefer one or the
> other option.
It's always nice when HugePages_Total * Hugepagesize cannot become greater
than Hugetlb. Roman, could you factor something like this into your
change, accompanied by a documentation update as suggested by Dave?
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2975,20 +2975,33 @@ int hugetlb_overcommit_handler(struct ctl_table *table, int write,
void hugetlb_report_meminfo(struct seq_file *m)
{
- struct hstate *h = &default_hstate;
+ struct hstate *h;
+ unsigned long total = 0;
+
if (!hugepages_supported())
return;
- seq_printf(m,
- "HugePages_Total: %5lu\n"
- "HugePages_Free: %5lu\n"
- "HugePages_Rsvd: %5lu\n"
- "HugePages_Surp: %5lu\n"
- "Hugepagesize: %8lu kB\n",
- h->nr_huge_pages,
- h->free_huge_pages,
- h->resv_huge_pages,
- h->surplus_huge_pages,
- 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
+
+ for_each_hstate(h) {
+ unsigned long nr_huge_pages = h->nr_huge_pages;
+
+ total += nr_huge_pages <<
+ (huge_page_order(h) + PAGE_SHIFT - 10);
+
+ if (h == &default_hstate) {
+ seq_printf(m,
+ "HugePages_Total: %5lu\n"
+ "HugePages_Free: %5lu\n"
+ "HugePages_Rsvd: %5lu\n"
+ "HugePages_Surp: %5lu\n"
+ "Hugepagesize: %8lu kB\n",
+ nr_huge_pages,
+ h->free_huge_pages,
+ h->resv_huge_pages,
+ h->surplus_huge_pages,
+ 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
+ }
+ }
+ seq_printf(m, "Hugetlb: %5lu kB\n", total);
}
int hugetlb_report_node_meminfo(int nid, char *buf)
* Re: [PATCH] mm: show total hugetlb memory consumption in /proc/meminfo
2017-11-15 22:46 ` David Rientjes
@ 2017-11-15 22:49 ` Roman Gushchin
0 siblings, 0 replies; 8+ messages in thread
From: Roman Gushchin @ 2017-11-15 22:49 UTC (permalink / raw)
To: David Rientjes
Cc: Michal Hocko, linux-mm, Andrew Morton, Johannes Weiner,
Mike Kravetz, Aneesh Kumar K.V, Andrea Arcangeli, Dave Hansen,
kernel-team, linux-kernel
On Wed, Nov 15, 2017 at 02:46:00PM -0800, David Rientjes wrote:
> On Wed, 15 Nov 2017, Michal Hocko wrote:
>
> > > > > if (!hugepages_supported())
> > > > > return;
> > > > > seq_printf(m,
> > > > > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > > > > h->resv_huge_pages,
> > > > > h->surplus_huge_pages,
> > > > > 1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> > > > > +
> > > > > + for_each_hstate(h)
> > > > > + total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
> > > >
> > > > Please keep the total calculation consistent with what we have there
> > > > already.
> > > >
> > >
> > > Yeah, and I'm not sure if your comment eludes to this being racy, but it
> > > would be better to store the default size for default_hstate during the
> > > iteration to total the size for all hstates.
> >
> > I just meant to have the code consistent. I do not prefer one or the
> > other option.
>
> It's always nice when HugePages_Total * Hugepagesize cannot become greater
> than Hugetlb. Roman, could you factor something like this into your
> change, accompanied by a documentation update as suggested by Dave?
Hi David!
Working on it... I'll post an update soon.
Thanks!