From: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Michal Hocko <mhocko@suse.com>,
Heiko Carstens <hca@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>
Subject: Re: [RFC] linux-next panic in hugepage_subpool_put_pages()
Date: Tue, 23 Feb 2021 17:45:40 +0100
Message-ID: <20210223174540.5526d843@thinkpad>
In-Reply-To: <20210223155740.553df3ee@thinkpad>

On Tue, 23 Feb 2021 15:57:40 +0100
Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote:

[...]

> What I do not understand is how __free_huge_page() would be called at all
> in the call trace below (set_max_huge_pages -> alloc_pool_huge_page ->
> __free_huge_page -> hugepage_subpool_put_pages). From the code it seems
> that this should not be possible, so I must be missing something.

Ok, looking more closely at the dump and the code, I see that
__free_huge_page() was called via alloc_pool_huge_page() -> put_page()
-> destroy_compound_page() -> compound_page_dtors[2], i.e. the
HUGETLB_PAGE_DTOR entry.
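
For reference, the dispatch goes through the compound destructor table,
roughly like this in include/linux/mm.h (sketched from the code of that
time, not a verbatim quote):

static inline void destroy_compound_page(struct page *page)
{
        /*
         * page[1].compound_dtor indexes compound_page_dtors[]; entry 2
         * (HUGETLB_PAGE_DTOR) is the free_huge_page() path, which is
         * what ends up in __free_huge_page() here.
         */
        VM_BUG_ON_PAGE(page[1].compound_dtor >= NR_COMPOUND_DTORS, page);
        compound_page_dtors[page[1].compound_dtor](page);
}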

It doesn't feel right that alloc_pool_huge_page() ends up freeing the
newly allocated page again. Maybe there is a refcounting race, so that
put_page() wrongly assumes it dropped the last reference?
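
To spell out where such a race would bite: put_page() only enters the
destructor path when it drops what it believes is the last reference,
roughly (sketch of the mainline code, devmap handling elided):

static inline void put_page(struct page *page)
{
        page = compound_head(page);

        /* ... devmap-managed special case elided ... */

        if (put_page_testzero(page))    /* drop _refcount, true if it hit 0 */
                __put_page(page);       /* compound pages: destroy_compound_page() */
}

So either something dropped an extra reference on the fresh page, or its
refcount was never set up the way alloc_pool_huge_page() expects.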

Also from the dump, I could reconstruct the (head) struct page pointer
that __free_huge_page() was called with. Here is the content of the
head page and the first tail page, maybe it helps. page->private in the
tail page had already been zeroed by __free_huge_page(), but its original
value was the broken spool pointer 0000004e00000000, as seen in the
register output of the backtrace.
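
For context, in this linux-next tree the subpool pointer has moved from
the head page to the first tail page, so that page->private of the head
page can hold hugetlb flags. If I read the patch right, the accessors
look roughly like this (sketch, not a verbatim quote):

static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
{
        /* subpool now lives in page[1].private, not the head page */
        return (struct hugepage_subpool *)page_private(hpage + 1);
}

static inline void hugetlb_set_page_subpool(struct page *hpage,
                                            struct hugepage_subpool *subpool)
{
        set_page_private(hpage + 1, (unsigned long)subpool);
}

So the bogus value must have ended up in page[1].private at some point
before __free_huge_page() read and cleared it.
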
crash> struct -x page 0000037203dec000
struct page {
  flags = 0x3ffff00000010000,
  {
    {
      lru = {
        next = 0x37203dec008,
        prev = 0x37203dec008
      },
      mapping = 0x0,
      index = 0x0,
      private = 0x0
    },
    {
      dma_addr = 0x37203dec008
    },
    {
      {
        slab_list = {
          next = 0x37203dec008,
          prev = 0x37203dec008
        },
        {
          next = 0x37203dec008,
          pages = 0x372,
          pobjects = 0x3dec008
        }
      },
      slab_cache = 0x0,
      freelist = 0x0,
      {
        s_mem = 0x0,
        counters = 0x0,
        {
          inuse = 0x0,
          objects = 0x0,
          frozen = 0x0
        }
      }
    },
    {
      compound_head = 0x37203dec008,
      compound_dtor = 0x0,
      compound_order = 0x0,
      compound_mapcount = {
        counter = 0x3dec008
      },
      compound_nr = 0x0
    },
    {
      _compound_pad_1 = 0x37203dec008,
      hpage_pinned_refcount = {
        counter = 0x372
      },
      deferred_list = {
        next = 0x0,
        prev = 0x0
      }
    },
    {
      _pt_pad_1 = 0x37203dec008,
      pmd_huge_pte = 0x37203dec008,
      _pt_pad_2 = 0x0,
      {
        pt_mm = 0x0,
        pt_frag_refcount = {
          counter = 0x0
        }
      },
      ptl = {
        {
          rlock = {
            raw_lock = {
              lock = 0x0
            }
          }
        }
      }
    },
    {
      pgmap = 0x37203dec008,
      zone_device_data = 0x37203dec008
    },
    callback_head = {
      next = 0x37203dec008,
      func = 0x37203dec008
    }
  },
  {
    _mapcount = {
      counter = 0xffffffff
    },
    page_type = 0xffffffff,
    active = 0xffffffff,
    units = 0xffffffff
  },
  _refcount = {
    counter = 0x0
  },
  memcg_data = 0x0
}
crash> struct -x page 0000037203dec040
struct page {
  flags = 0x3ffff00000000000,
  {
    {
      lru = {
        next = 0x37203dec001,
        prev = 0x2080372ffffffff
      },
      mapping = 0x10000000400,
      index = 0x2,
      private = 0x0
    },
    {
      dma_addr = 0x37203dec001
    },
    {
      {
        slab_list = {
          next = 0x37203dec001,
          prev = 0x2080372ffffffff
        },
        {
          next = 0x37203dec001,
          pages = 0x2080372,
          pobjects = 0xffffffff
        }
      },
      slab_cache = 0x10000000400,
      freelist = 0x2,
      {
        s_mem = 0x0,
        counters = 0x0,
        {
          inuse = 0x0,
          objects = 0x0,
          frozen = 0x0
        }
      }
    },
    {
      compound_head = 0x37203dec001,
      compound_dtor = 0x2,
      compound_order = 0x8,
      compound_mapcount = {
        counter = 0xffffffff
      },
      compound_nr = 0x100
    },
    {
      _compound_pad_1 = 0x37203dec001,
      hpage_pinned_refcount = {
        counter = 0x2080372
      },
      deferred_list = {
        next = 0x10000000400,
        prev = 0x2
      }
    },
    {
      _pt_pad_1 = 0x37203dec001,
      pmd_huge_pte = 0x2080372ffffffff,
      _pt_pad_2 = 0x10000000400,
      {
        pt_mm = 0x2,
        pt_frag_refcount = {
          counter = 0x0
        }
      },
      ptl = {
        {
          rlock = {
            raw_lock = {
              lock = 0x0
            }
          }
        }
      }
    },
    {
      pgmap = 0x37203dec001,
      zone_device_data = 0x2080372ffffffff
    },
    callback_head = {
      next = 0x37203dec001,
      func = 0x2080372ffffffff
    }
  },
  {
    _mapcount = {
      counter = 0xffffffff
    },
    page_type = 0xffffffff,
    active = 0xffffffff,
    units = 0xffffffff
  },
  _refcount = {
    counter = 0x0
  },
  memcg_data = 0x0
}
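
For what it's worth, the tail page dump itself looks consistent with a
hugetlb compound page: compound_head = 0x37203dec001 is the head page
address 0x37203dec000 with the tail bit set, compound_dtor = 0x2 matches
HUGETLB_PAGE_DTOR, and compound_order = 0x8 means compound_nr = 2^8 =
0x100 pages, i.e. 256 * 4 KB = 1 MB, the s390 hugepage size.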