linux-mm.kvack.org archive mirror
From: Mike Kravetz <mike.kravetz@oracle.com>
To: Gerald Schaefer <gerald.schaefer@de.ibm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Michal Hocko <mhocko@suse.cz>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	"Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>,
	Martin Schwidefsky <schwidefsky@de.ibm.com>,
	Heiko Carstens <heiko.carstens@de.ibm.com>,
	Rui Teng <rui.teng@linux.vnet.ibm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [PATCH 0/1] memory offline issues with hugepage size > memory block size
Date: Tue, 20 Sep 2016 10:37:04 -0700	[thread overview]
Message-ID: <bc000c05-3186-da92-e868-f2dbf0c28a98@oracle.com> (raw)
In-Reply-To: <20160920155354.54403-1-gerald.schaefer@de.ibm.com>

On 09/20/2016 08:53 AM, Gerald Schaefer wrote:
> dissolve_free_huge_pages() will either run into the VM_BUG_ON() or a
> list corruption and addressing exception when trying to set a memory
> block offline that is part (but not the first part) of a gigantic
> hugetlb page with a size > memory block size.
> 
> When no other smaller hugepage sizes are present, the VM_BUG_ON() will
> trigger directly. In the other case we will run into an addressing
> exception later, because dissolve_free_huge_page() will not use the head
> page of the compound hugetlb page which will result in a NULL hstate
> from page_hstate(). list_del() would also not work well on a tail page.
> 
> To fix this, first remove the VM_BUG_ON() because it is wrong, and then
> use the compound head page in dissolve_free_huge_page().
> 
> However, this all assumes that it is the desired behaviour to remove
> a (gigantic) unused hugetlb page from the pool, just because a small
> (in relation to the hugepage size) memory block is going offline. Not
> sure if this is the right thing, and it doesn't look very consistent
> given that in this scenario it is _not_ possible to migrate
> such a (gigantic) hugepage if it is in use. OTOH, has_unmovable_pages()
> will return false in both cases, i.e. the memory block will be reported
> as removable, no matter if the hugepage that it is part of is unused or
> in use.
> 
> This patch is assuming that it would be OK to remove the hugepage,
> i.e. memory offline beats pre-allocated unused (gigantic) hugepages.
> 
> Any thoughts?

Cc'ed Rui Teng and Dave Hansen as they were discussing the issue in
this thread:
https://lkml.org/lkml/2016/9/13/146

Their approach (I believe) would be to fail the offline operation in
this case.  However, one could argue for either outcome: failing the
operation, or dissolving the unused huge page containing the area to
be offlined.

I never thought too much about the VM_BUG_ON(), but you are correct in
that it should be removed in either case.

The other thing that needs to be changed is the locking in
dissolve_free_huge_page().  I believe the lock only needs to be held if
we are removing the huge page from the pool.  It is not a correctness
issue, but a performance one.

-- 
Mike Kravetz

> 
> 
> Gerald Schaefer (1):
>   mm/hugetlb: fix memory offline with hugepage size > memory block size
> 
>  mm/hugetlb.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 24+ messages
2016-09-20 15:53 Gerald Schaefer
2016-09-20 15:53 ` [PATCH 1/1] mm/hugetlb: fix memory offline " Gerald Schaefer
2016-09-21  6:29   ` Hillf Danton
2016-09-21 12:35     ` [PATCH v2 " Gerald Schaefer
2016-09-21 13:17       ` Rui Teng
2016-09-21 15:13         ` Gerald Schaefer
2016-09-22  7:58       ` Hillf Danton
2016-09-22  9:51       ` Michal Hocko
2016-09-22 13:45         ` Gerald Schaefer
2016-09-22 16:29           ` [PATCH v3] " Gerald Schaefer
2016-09-22 18:12             ` Dave Hansen
2016-09-22 19:13               ` Mike Kravetz
2016-09-23 10:36               ` Gerald Schaefer
2016-09-23  6:40         ` [PATCH v2 1/1] " Rui Teng
2016-09-23 11:03           ` Gerald Schaefer
2016-09-26  2:49             ` Rui Teng
2016-09-20 17:37 ` Mike Kravetz [this message]
2016-09-20 17:45   ` [PATCH 0/1] memory offline issues " Dave Hansen
2016-09-21  9:49     ` Vlastimil Babka
2016-09-21 10:34     ` Gerald Schaefer
2016-09-21 10:30   ` Gerald Schaefer
2016-09-21 18:20   ` Michal Hocko
2016-09-21 18:27     ` Dave Hansen
2016-09-21 19:22       ` Michal Hocko
