From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: zhong jiang <zhongjiang@huawei.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm: fix the incorrect hugepages count
Date: Wed, 10 Aug 2016 00:07:06 +0000 [thread overview]
Message-ID: <20160810000706.GA28043@hori1.linux.bs1.fc.nec.co.jp> (raw)
In-Reply-To: <57A9B147.1090003@huawei.com>
On Tue, Aug 09, 2016 at 06:32:39PM +0800, zhong jiang wrote:
> On 2016/8/9 1:14, Mike Kravetz wrote:
> > On 08/07/2016 07:49 PM, zhongjiang wrote:
> >> From: zhong jiang <zhongjiang@huawei.com>
> >>
> >> When memory hotplug is enabled, free hugepages are freed when a movable
> >> node goes offline. Therefore, /proc/sys/vm/nr_hugepages becomes incorrect.

This sounds a bit odd to me because /proc/sys/vm/nr_hugepages returns
h->nr_huge_pages or h->nr_huge_pages_node[nid], which is already adjusted
in dissolve_free_huge_page (via update_and_free_page).
I think that h->max_huge_pages effectively means the pool size, and
h->nr_huge_pages means the total hugepage number (which can be greater than
the pool size when there is overcommitting/surplus).
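
(For reference, this is the accounting I mean, trimmed from mm/hugetlb.c
of that era -- a sketch, not the exact source:)

	static void update_and_free_page(struct hstate *h, struct page *page)
	{
		/* nr_huge_pages is decremented here, so the value shown
		 * by /proc/sys/vm/nr_hugepages already drops when a
		 * hugepage is dissolved */
		h->nr_huge_pages--;
		h->nr_huge_pages_node[page_to_nid(page)]--;
		/* ... clear hugepage state on the subpages ... */
		__free_pages(page, huge_page_order(h));
	}
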
dissolve_free_huge_page intends to break a hugepage into buddy pages, and
the destination hugepage is supposed to be allocated from the pool of the
destination node, so the system-wide pool size is reduced.
So adding h->max_huge_pages-- makes sense to me.
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
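
For context, the memory offline path gets here via
dissolve_free_huge_pages(), which just scans the offlined pfn range
(again trimmed from that era's tree, as far as I remember):

	void dissolve_free_huge_pages(unsigned long start_pfn,
				      unsigned long end_pfn)
	{
		unsigned long pfn;

		if (!hugepages_supported())
			return;

		/* step by the smallest supported hugepage size */
		for (pfn = start_pfn; pfn < end_pfn;
		     pfn += 1 << minimum_order)
			dissolve_free_huge_page(pfn_to_page(pfn));
	}
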
> >>
> >> The patch fixes this by reducing max_huge_pages when the node goes offline.
> >>
> >> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
> >> ---
> >> mm/hugetlb.c | 1 +
> >> 1 file changed, 1 insertion(+)
> >>
> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> index f904246..3356e3a 100644
> >> --- a/mm/hugetlb.c
> >> +++ b/mm/hugetlb.c
> >> @@ -1448,6 +1448,7 @@ static void dissolve_free_huge_page(struct page *page)
> >>  		list_del(&page->lru);
> >>  		h->free_huge_pages--;
> >>  		h->free_huge_pages_node[nid]--;
> >> +		h->max_huge_pages--;
> >>  		update_and_free_page(h, page);
> >>  	}
> >>  	spin_unlock(&hugetlb_lock);
> >>
> > Adding Naoya as he was the original author of this code.
> >
> > From a quick look, it appears that the huge page will be migrated (allocated
> > on another node). If my understanding is correct, then max_huge_pages
> > should not be adjusted here.
> >
> We need to take free hugetlb pages into account. Of course, allocated huge
> pages need no adjustment; the patch only reduces the free hugetlb page count.
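
Right -- and note that dissolve_free_huge_page() only does this accounting
under the free-page check, so in-use hugepages are never touched.
Simplified from the same function (a sketch, not the exact source):

	spin_lock(&hugetlb_lock);
	if (PageHuge(page) && !page_count(page)) {
		/* page_count == 0 means a free hugepage; allocated
		 * (in-use) hugepages never reach the counter updates
		 * shown in the hunk above */
		...
	}
	spin_unlock(&hugetlb_lock);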
Thread overview: 4+ messages
2016-08-08 2:49 zhongjiang
2016-08-08 17:14 ` Mike Kravetz
2016-08-09 10:32 ` zhong jiang
2016-08-10 0:07 ` Naoya Horiguchi [this message]