* [PATCH 0/2] hugetlb minor improvements
@ 2014-09-26 20:44 Naoya Horiguchi
2014-09-26 20:44 ` [mmotm][PATCH 1/2] mm/hugetlb: improve suboptimal migration/hwpoisoned entry check Naoya Horiguchi
2014-09-26 20:44 ` [mmotm][PATCH 2/2] mm/hugetlb: cleanup and rename is_hugetlb_entry_(migration|hwpoisoned)() Naoya Horiguchi
0 siblings, 2 replies; 4+ messages in thread
From: Naoya Horiguchi @ 2014-09-26 20:44 UTC (permalink / raw)
To: Andrew Morton
Cc: Hugh Dickins, David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi
This patchset makes minor improvements and cleanups to mm/hugetlb.c.
It's based on mmotm-2014-09-25-16-28 and shows no regression in the libhugetlbfs test suite.
Tree: git@github.com:Naoya-Horiguchi/linux.git
Branch: mmotm-2014-09-25-16-28/hugetlb_minor_improvements
---
Summary:
Naoya Horiguchi (2):
mm/hugetlb: improve suboptimal migration/hwpoisoned entry check
mm/hugetlb: cleanup and rename is_hugetlb_entry_(migration|hwpoisoned)()
mm/hugetlb.c | 60 ++++++++++++++++++++++--------------------------------------
1 file changed, 22 insertions(+), 38 deletions(-)
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* [mmotm][PATCH 1/2] mm/hugetlb: improve suboptimal migration/hwpoisoned entry check
  2014-09-26 20:44 [PATCH 0/2] hugetlb minor improvements Naoya Horiguchi
@ 2014-09-26 20:44 ` Naoya Horiguchi
  2014-09-26 20:44 ` [mmotm][PATCH 2/2] mm/hugetlb: cleanup and rename is_hugetlb_entry_(migration|hwpoisoned)() Naoya Horiguchi
  1 sibling, 0 replies; 4+ messages in thread
From: Naoya Horiguchi @ 2014-09-26 20:44 UTC (permalink / raw)
To: Andrew Morton
Cc: Hugh Dickins, David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

Currently hugetlb_fault() first checks whether the pte of the faulted
address is a migration or hwpoisoned entry. The reason for this approach
is that without the check, the BUG_ON() in huge_pte_alloc() triggers: it
assumes that a pte which is not none always points to a normal hugepage,
which was originally correct but no longer holds now that hugetlb supports
page migration and hwpoison. To iron out this workaround, this patch fixes
the wrong assumption in the BUG_ON() in huge_pte_alloc(). This lets us do
the pte_present() check only in the proper place, which makes the code
simpler.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/hugetlb.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git mmotm-2014-09-25-16-28.orig/mm/hugetlb.c mmotm-2014-09-25-16-28/mm/hugetlb.c
index 1ecb625bc498..e6543359be4d 100644
--- mmotm-2014-09-25-16-28.orig/mm/hugetlb.c
+++ mmotm-2014-09-25-16-28/mm/hugetlb.c
@@ -3130,20 +3130,10 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct hstate *h = hstate_vma(vma);
 	struct address_space *mapping;
 	int need_wait_lock = 0;
+	int need_wait_migration = 0;

 	address &= huge_page_mask(h);

-	ptep = huge_pte_offset(mm, address);
-	if (ptep) {
-		entry = huge_ptep_get(ptep);
-		if (unlikely(is_hugetlb_entry_migration(entry))) {
-			migration_entry_wait_huge(vma, mm, ptep);
-			return 0;
-		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
-			return VM_FAULT_HWPOISON_LARGE |
-				VM_FAULT_SET_HINDEX(hstate_index(h));
-	}
-
 	ptep = huge_pte_alloc(mm, address, huge_page_size(h));
 	if (!ptep)
 		return VM_FAULT_OOM;
@@ -3169,12 +3159,16 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	/*
 	 * entry could be a migration/hwpoison entry at this point, so this
 	 * check prevents the kernel from going below assuming that we have
-	 * a active hugepage in pagecache. This goto expects the 2nd page fault,
-	 * and is_hugetlb_entry_(migration|hwpoisoned) check will properly
-	 * handle it.
+	 * a active hugepage in pagecache.
 	 */
-	if (!pte_present(entry))
+	if (!pte_present(entry)) {
+		if (is_hugetlb_entry_migration(entry))
+			need_wait_migration = 1;
+		else if (is_hugetlb_entry_hwpoisoned(entry))
+			ret = VM_FAULT_HWPOISON_LARGE |
+				VM_FAULT_SET_HINDEX(hstate_index(h));
 		goto out_mutex;
+	}

 	/*
 	 * If we are going to COW the mapping later, we examine the pending
@@ -3242,6 +3236,8 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 out_mutex:
 	mutex_unlock(&htlb_fault_mutex_table[hash]);
+	if (need_wait_migration)
+		migration_entry_wait_huge(vma, mm, ptep);
 	if (need_wait_lock)
 		wait_on_page_locked(page);
 	return ret;
@@ -3664,7 +3660,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 			pte = (pte_t *)pmd_alloc(mm, pud, addr);
 		}
 	}
-	BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte));
+	BUG_ON(pte && !pte_none(*pte) && pte_present(*pte) && !pte_huge(*pte));

 	return pte;
 }
--
1.9.3
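[Editorial note] The BUG_ON() relaxation in the patch above can be modeled in user space. This is a minimal sketch with hypothetical names (the `pte_kind` enum and the `*_m()` helpers are illustrative stand-ins, not kernel code): a non-present migration or hwpoison entry trips the old assertion but not the relaxed one.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the pte states hugetlb_fault() can encounter. */
enum pte_kind {
	PTE_NONE,      /* empty entry */
	PTE_HUGEPAGE,  /* present, maps a normal hugepage */
	PTE_MIGRATION, /* non-present migration swap entry */
	PTE_HWPOISON,  /* non-present hwpoison swap entry */
};

static bool pte_none_m(enum pte_kind p)    { return p == PTE_NONE; }
static bool pte_present_m(enum pte_kind p) { return p == PTE_HUGEPAGE; }
static bool pte_huge_m(enum pte_kind p)    { return p == PTE_HUGEPAGE; }

/* Old assertion: any non-none pte must look like a hugepage mapping. */
static bool old_bug_fires(enum pte_kind p)
{
	return !pte_none_m(p) && !pte_huge_m(p);
}

/* New assertion: only complain about *present* entries that aren't huge. */
static bool new_bug_fires(enum pte_kind p)
{
	return !pte_none_m(p) && pte_present_m(p) && !pte_huge_m(p);
}
```

With the relaxed assertion, hugetlb_fault() can call huge_pte_alloc() first and sort out the non-present cases afterwards, which is what lets the patch drop the up-front is_hugetlb_entry_* checks.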
* [mmotm][PATCH 2/2] mm/hugetlb: cleanup and rename is_hugetlb_entry_(migration|hwpoisoned)()
  2014-09-26 20:44 [PATCH 0/2] hugetlb minor improvements Naoya Horiguchi
  2014-09-26 20:44 ` [mmotm][PATCH 1/2] mm/hugetlb: improve suboptimal migration/hwpoisoned entry check Naoya Horiguchi
@ 2014-09-26 20:44 ` Naoya Horiguchi
  1 sibling, 0 replies; 4+ messages in thread
From: Naoya Horiguchi @ 2014-09-26 20:44 UTC (permalink / raw)
To: Andrew Morton
Cc: Hugh Dickins, David Rientjes, linux-mm, linux-kernel, Naoya Horiguchi

non_swap_entry() returns true if a given swp_entry_t is a migration entry
or an hwpoisoned entry, so non_swap_entry() && is_migration_entry() is
equivalent to is_migration_entry() alone. By dropping the non_swap_entry()
call we can write is_hugetlb_entry_(migration|hwpoisoned)() more simply.

Also, the name is_hugetlb_entry_(migration|hwpoisoned) is lengthy and not
predictable from the naming convention of the pte_* family. Plain
pte_migration() would look better, but these functions contain a
hugetlb-specific (and therefore architecture-dependent) huge_pte_none()
check, so let's rename them huge_pte_(migration|hwpoisoned).
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/hugetlb.c | 36 ++++++++++++------------------------
 1 file changed, 12 insertions(+), 24 deletions(-)

diff --git mmotm-2014-09-25-16-28.orig/mm/hugetlb.c mmotm-2014-09-25-16-28/mm/hugetlb.c
index e6543359be4d..e70da7ae36ed 100644
--- mmotm-2014-09-25-16-28.orig/mm/hugetlb.c
+++ mmotm-2014-09-25-16-28/mm/hugetlb.c
@@ -2516,30 +2516,18 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
 	update_mmu_cache(vma, address, ptep);
 }

-static int is_hugetlb_entry_migration(pte_t pte)
+static inline int huge_pte_migration(pte_t pte)
 {
-	swp_entry_t swp;
-
 	if (huge_pte_none(pte) || pte_present(pte))
 		return 0;
-	swp = pte_to_swp_entry(pte);
-	if (non_swap_entry(swp) && is_migration_entry(swp))
-		return 1;
-	else
-		return 0;
+	return is_migration_entry(pte_to_swp_entry(pte));
 }

-static int is_hugetlb_entry_hwpoisoned(pte_t pte)
+static inline int huge_pte_hwpoisoned(pte_t pte)
 {
-	swp_entry_t swp;
-
 	if (huge_pte_none(pte) || pte_present(pte))
 		return 0;
-	swp = pte_to_swp_entry(pte);
-	if (non_swap_entry(swp) && is_hwpoison_entry(swp))
-		return 1;
-	else
-		return 0;
+	return is_hwpoison_entry(pte_to_swp_entry(pte));
 }

 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
@@ -2583,8 +2571,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		entry = huge_ptep_get(src_pte);
 		if (huge_pte_none(entry)) { /* skip none entry */
 			;
-		} else if (unlikely(is_hugetlb_entry_migration(entry) ||
-				    is_hugetlb_entry_hwpoisoned(entry))) {
+		} else if (unlikely(huge_pte_migration(entry) ||
+				    huge_pte_hwpoisoned(entry))) {
 			swp_entry_t swp_entry = pte_to_swp_entry(entry);

 			if (is_write_migration_entry(swp_entry) && cow) {
@@ -3162,9 +3150,9 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * a active hugepage in pagecache.
 	 */
 	if (!pte_present(entry)) {
-		if (is_hugetlb_entry_migration(entry))
+		if (huge_pte_migration(entry))
 			need_wait_migration = 1;
-		else if (is_hugetlb_entry_hwpoisoned(entry))
+		else if (huge_pte_hwpoisoned(entry))
 			ret = VM_FAULT_HWPOISON_LARGE |
 				VM_FAULT_SET_HINDEX(hstate_index(h));
 		goto out_mutex;
@@ -3291,8 +3279,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * (in which case hugetlb_fault waits for the migration,) and
 		 * hwpoisoned hugepages (in which case we need to prevent the
 		 * caller from accessing to them.) In order to do this, we use
-		 * here is_swap_pte instead of is_hugetlb_entry_migration and
-		 * is_hugetlb_entry_hwpoisoned. This is because it simply covers
+		 * here is_swap_pte instead of huge_pte_migration and
+		 * huge_pte_hwpoisoned. This is because it simply covers
 		 * both cases, and because we can't follow correct pages
 		 * directly from any kind of swap entries.
 		 */
@@ -3370,11 +3358,11 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 			continue;
 		}
 		pte = huge_ptep_get(ptep);
-		if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
+		if (unlikely(huge_pte_hwpoisoned(pte))) {
 			spin_unlock(ptl);
 			continue;
 		}
-		if (unlikely(is_hugetlb_entry_migration(pte))) {
+		if (unlikely(huge_pte_migration(pte))) {
 			swp_entry_t entry = pte_to_swp_entry(pte);

 			if (is_write_migration_entry(entry)) {
--
1.9.3
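[Editorial note] To see why dropping non_swap_entry() is safe, here is a user-space model of the swap-entry type space. The macro values below are illustrative stand-ins, not the kernel's actual constants: the point is only that special entry types such as migration and hwpoison live at or above MAX_SWAPFILES, which is exactly the property non_swap_entry() tests, so every migration entry is already a non-swap entry.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative layout: real swapfile types occupy [0, MAX_SWAPFILES),
 * special types are carved out above that range. */
#define MAX_SWAPFILES       29
#define SWP_MIGRATION_READ  (MAX_SWAPFILES + 0)
#define SWP_MIGRATION_WRITE (MAX_SWAPFILES + 1)
#define SWP_HWPOISON        (MAX_SWAPFILES + 2)

/* A "non-swap" entry is any special type beyond the swapfile range. */
static bool non_swap_entry_m(int type)
{
	return type >= MAX_SWAPFILES;
}

static bool is_migration_entry_m(int type)
{
	return type == SWP_MIGRATION_READ || type == SWP_MIGRATION_WRITE;
}

/* Check as written before the patch, and the simplified form. */
static bool old_check(int type)
{
	return non_swap_entry_m(type) && is_migration_entry_m(type);
}

static bool new_check(int type)
{
	return is_migration_entry_m(type);
}
```

The two predicates agree for every entry type, and the same subset argument applies to is_hwpoison_entry(), so both helpers can lose the non_swap_entry() test.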