From: Xishi Qiu <qiuxishi@huawei.com>
To: Borislav Petkov <bp@alien8.de>, WuJianguo <wujianguo@huawei.com>,
Liujiang <jiang.liu@huawei.com>,
andi@firstfloor.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, akpm@linux-foundation.org
Subject: Re: [PATCH] MCE: fix an error of mce_bad_pages statistics
Date: Fri, 7 Dec 2012 15:35:15 +0800
Message-ID: <50C19C33.9030502@huawei.com>
In-Reply-To: <20121207072541.GA27708@liondog.tnic>
On 2012/12/7 15:25, Borislav Petkov wrote:
> On Fri, Dec 07, 2012 at 10:53:41AM +0800, Xishi Qiu wrote:
>> On x86, if we use "/sys/devices/system/memory/soft_offline_page" to offline a free
>> page twice, mce_bad_pages is incremented twice. This is wrong: the page was already
>> marked HWPoison by the first attempt, so the second attempt should skip the page and
>> leave mce_bad_pages untouched.
>>
>> $ cat /proc/meminfo | grep HardwareCorrupted
>>
>> soft_offline_page()
>>         get_any_page()
>>         atomic_long_add(1, &mce_bad_pages)
>>
>> A free page that is marked HWPoison is still managed by the buddy allocator, so when
>> we offline it again, get_any_page() always returns 0 and prints
>> pr_info("%s: %#lx free buddy page\n", __func__, pfn);
>>
>> Once the page has been allocated, the PageBuddy flag is removed in bad_page(), and
>> get_any_page() then returns -EIO and prints
>> pr_info("%s: %#lx: unknown zero refcount page type %lx\n", ...), so mce_bad_pages is
>> not incremented in that case.
>>
>> Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
>> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
>> ---
>> mm/memory-failure.c | 5 +++++
>> 1 files changed, 5 insertions(+), 0 deletions(-)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index 8b20278..02a522e 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -1375,6 +1375,11 @@ static int get_any_page(struct page *p, unsigned long pfn, int flags)
>>      if (flags & MF_COUNT_INCREASED)
>>          return 1;
>>
>> +    if (PageHWPoison(p)) {
>> +        pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
>> +        return -EBUSY;
>> +    }
>
> Shouldn't this be done in soft_offline_page() instead, like it is done
> in soft_offline_huge_page() for hugepages?
>
> Thanks.
>
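For reference, the double count described in the quoted report can be observed from
userspace with a small test program along these lines. This is only a sketch under
stated assumptions: PHYS_ADDR is a hypothetical placeholder for the physical address
of a free page, the kernel is built with CONFIG_MEMORY_FAILURE, and the program must
run as root; only the sysfs path and the HardwareCorrupted field already mentioned
above come from the report itself.

/* Sketch: soft-offline the same (free) page twice and watch HardwareCorrupted. */
#include <stdio.h>

/* Hypothetical placeholder: physical address of a free page on this machine. */
#define PHYS_ADDR "0x100000000"

/* Return the HardwareCorrupted value from /proc/meminfo, in kB (-1 on error). */
static long hardware_corrupted_kb(void)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = -1;

        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "HardwareCorrupted: %ld kB", &kb) == 1)
                        break;
        fclose(f);
        return kb;
}

/* Ask the kernel to soft-offline the page containing the given physical address. */
static int soft_offline(const char *paddr)
{
        FILE *f = fopen("/sys/devices/system/memory/soft_offline_page", "w");

        if (!f)
                return -1;
        fprintf(f, "%s\n", paddr);
        return fclose(f) == 0 ? 0 : -1; /* a rejected write shows up at fclose() */
}

int main(void)
{
        printf("before:         %ld kB\n", hardware_corrupted_kb());
        soft_offline(PHYS_ADDR);
        printf("after 1st try:  %ld kB\n", hardware_corrupted_kb());
        soft_offline(PHYS_ADDR);        /* offline the very same page again */
        printf("after 2nd try:  %ld kB\n", hardware_corrupted_kb());
        return 0;
}

On an unpatched kernel the counter grows after both writes even though only one page
is involved; with the hunk above, the second attempt should fail with -EBUSY and leave
HardwareCorrupted unchanged.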
Hi Borislav, do you mean we should move this check to the beginning of soft_offline_page()?
soft_offline_page()
{
        ...
        get_any_page()
        ...
        /*
         * Synchronized using the page lock with memory_failure()
         */
        if (PageHWPoison(page)) {
                unlock_page(page);
                put_page(page);
                pr_info("soft offline: %#lx page already poisoned\n", pfn);
                return -EBUSY;
        }
        ...
}
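If the check were instead hoisted above get_any_page(), the shape would be roughly as
follows. This is only a sketch of the idea, not a tested patch; presumably no
unlock_page()/put_page() is needed that early, since the page reference is only taken
inside get_any_page():

soft_offline_page()
{
        ...
        if (PageHWPoison(page)) {
                pr_info("soft offline: %#lx page already poisoned\n", pfn);
                return -EBUSY;
        }
        ...
        get_any_page()
        ...
}

Note that the existing placement relies on the page lock for synchronization with
memory_failure(), per the comment in the snippet above, which an early check before
get_any_page() would not have.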
Thanks
Xishi Qiu
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 4+ messages
2012-12-07 2:53 Xishi Qiu
2012-12-07 7:25 ` Borislav Petkov
2012-12-07 7:35 ` Xishi Qiu [this message]
2012-12-07 7:46 ` Borislav Petkov