From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Carpenter <dan.carpenter@oracle.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH] mm/hugetlb: initialize order with UINT_MAX in dissolve_free_huge_pages()
Date: Wed, 13 May 2015 01:44:22 +0000
Message-ID: <20150513014418.GB14599@hori1.linux.bs1.fc.nec.co.jp>
In-Reply-To: <20150512161511.7967c400cae6c1d693b61d57@linux-foundation.org>
On Tue, May 12, 2015 at 04:15:11PM -0700, Andrew Morton wrote:
> On Tue, 12 May 2015 09:20:35 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
>
> > Currently the initial value of order in dissolve_free_huge_pages() is 64 or 32,
> > which leads to the following warning from a static checker:
> >
> > mm/hugetlb.c:1203 dissolve_free_huge_pages()
> > warn: potential right shift more than type allows '9,18,64'
> >
> > This is a potential risk of an infinite loop, because 1 << order (which
> > evaluates to 0 here) is used in a for-loop like this:
> >
> > for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> > ...
> >
> > So this patch simply avoids the risk by initializing with UINT_MAX.
> >
> > ..
> >
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1188,7 +1188,7 @@ static void dissolve_free_huge_page(struct page *page)
> >   */
> >  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >  {
> > -	unsigned int order = 8 * sizeof(void *);
> > +	unsigned int order = UINT_MAX;
> >  	unsigned long pfn;
> >  	struct hstate *h;
> >  
> > @@ -1200,6 +1200,7 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >  		if (order > huge_page_order(h))
> >  			order = huge_page_order(h);
> >  	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
> > +	VM_BUG_ON(order == UINT_MAX);
> >  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> >  		dissolve_free_huge_page(pfn_to_page(pfn));
>
> Do we need to calculate this each time? Can it be done in
> hugetlb_init_hstates(), save the result in a global?
Yes, it should work. How about the following?
This adds 4 bytes to .data for the new global variable, but reduces .text
size by 47 bytes, so it's a win in total:
   text    data     bss     dec     hex filename
  28313     469   84236  113018   1b97a mm/hugetlb.o  (above patch)
  28266     473   84236  112975   1b94f mm/hugetlb.o  (below patch)
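For reference, the risk described in the quoted changelog can be reproduced
in a minimal userspace sketch (illustration only, not kernel code; it assumes
a 64-bit build, where 8 * sizeof(void *) is 64 and int is 32 bits wide):

/*
 * If no hstate ever lowers 'order', the old code shifts 1 by at least the
 * width of int, which is undefined behaviour and in practice often yields
 * 0, so "pfn += 1 << order" stops advancing and the loop never terminates.
 */
#include <stdio.h>
#include <limits.h>

int main(void)
{
	unsigned int order = 8 * sizeof(void *);   /* 64 on 64-bit, 32 on 32-bit */

	printf("initial order = %u\n", order);
	if (order >= sizeof(int) * CHAR_BIT)
		printf("1 << %u exceeds the shift range of int; a pfn loop "
		       "stepping by it may never advance\n", order);
	return 0;
}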
---
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 271e4432734c..fecb8bbfe11e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -40,6 +40,7 @@ int hugepages_treat_as_movable;
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
+unsigned int minimum_order __read_mostly;
 
 __initdata LIST_HEAD(huge_boot_pages);
 
@@ -1188,19 +1189,13 @@ static void dissolve_free_huge_page(struct page *page)
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	unsigned int order = 8 * sizeof(void *);
 	unsigned long pfn;
-	struct hstate *h;
 
 	if (!hugepages_supported())
 		return;
 
-	/* Set scan step to minimum hugepage size */
-	for_each_hstate(h)
-		if (order > huge_page_order(h))
-			order = huge_page_order(h);
-	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
+	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
 		dissolve_free_huge_page(pfn_to_page(pfn));
 }
 
@@ -1626,11 +1621,16 @@ static void __init hugetlb_init_hstates(void)
 {
 	struct hstate *h;
 
+	minimum_order = UINT_MAX;
 	for_each_hstate(h) {
+		if (minimum_order > huge_page_order(h))
+			minimum_order = huge_page_order(h);
+
 		/* oversize hugepages were init'ed in early boot */
 		if (!hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
 	}
+	VM_BUG_ON(minimum_order == UINT_MAX);
 }
 
 static char * __init memfmt(char *buf, unsigned long n)
--
2.1.0
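
For readers following the thread, the shape of the change (compute the scan
step once at init time, then reuse it) can be mocked up in plain userspace C.
This is an illustration only, not the kernel implementation; the orders 9 and
18 are hypothetical stand-ins for 2MB and 1GB hugepages with a 4KB base page:

#include <assert.h>
#include <limits.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel's registered hstate orders. */
static const unsigned int hstate_orders[] = { 9, 18 };
static unsigned int minimum_order = UINT_MAX;	/* computed once at init */

static void init_hstates(void)
{
	for (size_t i = 0; i < sizeof(hstate_orders) / sizeof(hstate_orders[0]); i++)
		if (minimum_order > hstate_orders[i])
			minimum_order = hstate_orders[i];
	assert(minimum_order != UINT_MAX);	/* mirrors the VM_BUG_ON() */
}

static void dissolve_range(unsigned long start_pfn, unsigned long end_pfn)
{
	/* Step by the smallest hugepage size; nothing is recomputed per call. */
	for (unsigned long pfn = start_pfn; pfn < end_pfn; pfn += 1UL << minimum_order)
		printf("dissolve hugepage at pfn %lu\n", pfn);
}

int main(void)
{
	init_hstates();
	dissolve_range(0, 4096);	/* with order 9, that is 8 steps of 512 pfns */
	return 0;
}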