From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-pa0-f48.google.com (mail-pa0-f48.google.com [209.85.220.48])
	by kanga.kvack.org (Postfix) with ESMTP id A27FC9003CD for ;
	Thu, 3 Sep 2015 11:14:26 -0400 (EDT)
Received: by pacwi10 with SMTP id wi10so49528396pac.3 for ;
	Thu, 03 Sep 2015 08:14:26 -0700 (PDT)
Received: from mga09.intel.com (mga09.intel.com. [134.134.136.24])
	by mx.google.com with ESMTP id ry8si41888265pbb.86.2015.09.03.08.14.04 for ;
	Thu, 03 Sep 2015 08:14:04 -0700 (PDT)
From: "Kirill A. Shutemov"
Subject: [PATCHv10 30/36] thp: add option to setup migration entries during PMD split
Date: Thu, 3 Sep 2015 18:13:16 +0300
Message-Id: <1441293202-137314-31-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1441293202-137314-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1441293202-137314-1-git-send-email-kirill.shutemov@linux.intel.com>
Sender: owner-linux-mm@kvack.org
List-ID:
To: Andrew Morton, Andrea Arcangeli, Hugh Dickins
Cc: Dave Hansen, Mel Gorman, Rik van Riel, Vlastimil Babka,
	Christoph Lameter, Naoya Horiguchi, Steve Capper,
	"Aneesh Kumar K.V", Johannes Weiner, Michal Hocko,
	Jerome Marchand, Sasha Levin, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, "Kirill A. Shutemov"

We are going to use migration PTE entries to stabilize page counts. If the
page is mapped with PMDs we need to split the PMD and setup migration
entries. It's reasonable to combine these operations to avoid
double-scanning over the page table.

Signed-off-by: Kirill A. Shutemov
Tested-by: Sasha Levin
Tested-by: Aneesh Kumar K.V
Acked-by: Vlastimil Babka
Acked-by: Jerome Marchand
---
 mm/huge_memory.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f30af16c2b16..932814605b8a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -2625,7 +2626,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
 }
 
 static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long haddr)
+		unsigned long haddr, bool freeze)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page *page;
@@ -2669,12 +2670,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		 * transferred to avoid any possibility of altering
 		 * permissions across VMAs.
 		 */
-		entry = mk_pte(page + i, vma->vm_page_prot);
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-		if (!write)
-			entry = pte_wrprotect(entry);
-		if (!young)
-			entry = pte_mkold(entry);
+		if (freeze) {
+			swp_entry_t swp_entry;
+			swp_entry = make_migration_entry(page + i, write);
+			entry = swp_entry_to_pte(swp_entry);
+		} else {
+			entry = mk_pte(page + i, vma->vm_page_prot);
+			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+			if (!write)
+				entry = pte_wrprotect(entry);
+			if (!young)
+				entry = pte_mkold(entry);
+		}
 		pte = pte_offset_map(&_pmd, haddr);
 		BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, haddr, pte, entry);
@@ -2715,7 +2722,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PMD_SIZE);
 	ptl = pmd_lock(mm, pmd);
 	if (likely(pmd_trans_huge(*pmd)))
-		__split_huge_pmd_locked(vma, pmd, haddr);
+		__split_huge_pmd_locked(vma, pmd, haddr, false);
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(mm, haddr, haddr + HPAGE_PMD_SIZE);
 }
-- 
2.5.0

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.
For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org
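
Not part of the mail above: a minimal, self-contained C sketch of the idea the
patch relies on. During a single walk over the freshly built page table, a
freeze flag selects whether a normal present entry or a non-present
migration-style placeholder is installed, which is why combining the PMD split
with freezing avoids scanning the table twice. In this sketch, pte_t is a plain
integer and PTRS_PER_PMD, mk_present_pte(), mk_frozen_pte() and split_pmd() are
hypothetical stand-ins, not the kernel helpers used in the diff.

/*
 * Toy userspace model of the "freeze" option. NOT kernel code: the types
 * and helpers below only illustrate installing either present entries or
 * migration-style placeholders in one pass over the new page table.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTRS_PER_PMD 512		/* entries produced by one PMD split */

typedef uint64_t pte_t;			/* stand-in for the kernel's pte_t */

#define PTE_PRESENT	(1ULL << 0)
#define PTE_WRITE	(1ULL << 1)
#define PTE_FROZEN	(1ULL << 2)	/* models a migration (non-present) entry */

/* Build a normal entry: present, permissions carried over from the PMD. */
static pte_t mk_present_pte(uint64_t pfn, bool write)
{
	return (pfn << 12) | PTE_PRESENT | (write ? PTE_WRITE : 0);
}

/* Build a "frozen" entry: not present, but it still names the target page. */
static pte_t mk_frozen_pte(uint64_t pfn, bool write)
{
	return (pfn << 12) | PTE_FROZEN | (write ? PTE_WRITE : 0);
}

/*
 * One pass over the new page table: the freeze flag decides which flavour
 * of entry is installed, so no second scan is needed afterwards.
 */
static void split_pmd(pte_t *ptes, uint64_t first_pfn, bool write, bool freeze)
{
	for (int i = 0; i < PTRS_PER_PMD; i++)
		ptes[i] = freeze ? mk_frozen_pte(first_pfn + i, write)
				 : mk_present_pte(first_pfn + i, write);
}

int main(void)
{
	static pte_t ptes[PTRS_PER_PMD];

	split_pmd(ptes, 0x1000, true, true);	/* frozen split */
	printf("entry 0: %#llx (frozen=%d)\n",
	       (unsigned long long)ptes[0], !!(ptes[0] & PTE_FROZEN));
	return 0;
}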