From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Andrew Morton <akpm@linux-foundation.org>,
Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Naoya Horiguchi <nao.horiguchi@gmail.com>
Subject: [PATCH v5 2/8] mm/hugetlb: pmd_huge() returns true for non-present hugepage
Date: Tue, 2 Dec 2014 08:26:39 +0000
Message-ID: <1417508759-10848-3-git-send-email-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <1417508759-10848-1-git-send-email-n-horiguchi@ah.jp.nec.com>

Migrating hugepages and hwpoisoned hugepages are considered non-present
hugepages: they are referenced via migration entries and hwpoison entries
in their page table slots.
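
For background, such non-present entries are encoded in swap-entry format.
A minimal sketch of how one can be classified, using the existing helpers
from <linux/swapops.h> (the wrapper function itself is made up for
illustration and is not part of this patch):

	#include <linux/swapops.h>

	/* Illustrative only: classify a non-present hugetlb entry. */
	static bool is_nonpresent_hugetlb_entry(pte_t pte)
	{
		swp_entry_t entry;

		/* is_swap_pte(): the slot is populated but not present */
		if (!is_swap_pte(pte))
			return false;

		entry = pte_to_swp_entry(pte);
		return is_migration_entry(entry) || is_hwpoison_entry(entry);
	}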
This behavior causes a race condition, because pmd_huge() cannot tell
non-huge pages apart from migrating/hwpoisoned hugepages.
follow_page_mask() is one example: the kernel would call follow_page_pte()
for such a hugepage, although that function is supposed to handle only
normal pages.
To avoid this, this patch makes pmd_huge() return true when pmd_none() is
false *and* pmd_present() is false, i.e. when the slot is populated but
the entry is non-present. We don't have to worry about mixing up a
non-present pmd entry with a normal pmd (one pointing to a leaf-level pte
page), because pmd_present() is true for a normal pmd.
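
Concretely, the new x86 pmd_huge() (see the hunk below) classifies pmd
entries as follows (an editorial summary, not part of the patch):

	pmd entry                        pmd_none()  PRESENT  PSE  pmd_huge()
	empty slot                       true        -        -    0
	normal pmd (points to pte page)  false       1        0    0
	present hugepage                 false       1        1    1
	migration/hwpoison hugepage      false       0        -    1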
The same race condition could happen in (x86-specific) gup_pmd_range().
There this patch simply adds a pmd_present() check instead of calling
pmd_huge(), because gup_pmd_range() is a fast path. If we encounter a
non-present hugepage in this function, we go into gup_huge_pmd(), return 0
at the flag mask check, and fall back to the slow path, as sketched below.
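
For context, the flag mask check in gup_huge_pmd() looks roughly like this
(an abridged sketch of arch/x86/mm/gup.c from this period, not part of the
patch); a non-present entry has _PAGE_PRESENT clear, so it fails the check:

	pte_t pte = *(pte_t *)&pmd;
	unsigned long mask = _PAGE_PRESENT | _PAGE_USER;

	if (write)
		mask |= _PAGE_RW;
	if ((pte_flags(pte) & mask) != mask)
		return 0;	/* fast path gives up; slow path takes over */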
Fixes: 290408d4a2 ("hugetlb: hugepage migration core")
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org> # [2.6.36+]
---
arch/x86/mm/gup.c | 2 +-
arch/x86/mm/hugetlbpage.c | 8 +++++++-
mm/hugetlb.c | 2 ++
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git mmotm-2014-11-26-15-45.orig/arch/x86/mm/gup.c mmotm-2014-11-26-15-45/arch/x86/mm/gup.c
index 207d9aef662d..448ee8912d9b 100644
--- mmotm-2014-11-26-15-45.orig/arch/x86/mm/gup.c
+++ mmotm-2014-11-26-15-45/arch/x86/mm/gup.c
@@ -172,7 +172,7 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
*/
if (pmd_none(pmd) || pmd_trans_splitting(pmd))
return 0;
- if (unlikely(pmd_large(pmd))) {
+ if (unlikely(pmd_large(pmd) || !pmd_present(pmd))) {
/*
* NUMA hinting faults need to be handled in the GUP
* slowpath for accounting purposes and so that they
diff --git mmotm-2014-11-26-15-45.orig/arch/x86/mm/hugetlbpage.c mmotm-2014-11-26-15-45/arch/x86/mm/hugetlbpage.c
index 03b8a7c11817..9161f764121e 100644
--- mmotm-2014-11-26-15-45.orig/arch/x86/mm/hugetlbpage.c
+++ mmotm-2014-11-26-15-45/arch/x86/mm/hugetlbpage.c
@@ -54,9 +54,15 @@ int pud_huge(pud_t pud)
 
 #else
 
+/*
+ * pmd_huge() returns 1 if @pmd is a hugetlb-related entry, that is, a
+ * normal hugetlb entry or a non-present (migration or hwpoisoned) hugetlb
+ * entry. Otherwise it returns 0.
+ */
 int pmd_huge(pmd_t pmd)
 {
-	return !!(pmd_val(pmd) & _PAGE_PSE);
+	return !pmd_none(pmd) &&
+		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
 int pud_huge(pud_t pud)
diff --git mmotm-2014-11-26-15-45.orig/mm/hugetlb.c mmotm-2014-11-26-15-45/mm/hugetlb.c
index 6be4a690e554..dd42878549d5 100644
--- mmotm-2014-11-26-15-45.orig/mm/hugetlb.c
+++ mmotm-2014-11-26-15-45/mm/hugetlb.c
@@ -3679,6 +3679,8 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;
 
+	if (!pmd_present(*pmd))
+		return NULL;
 	page = pte_page(*(pte_t *)pmd);
 	if (page)
 		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
--
2.2.0.rc0.2.gf745acb