From: Yu-cheng Yu
Subject: [RFC PATCH v6 16/26] mm: Handle THP/HugeTLB shadow stack page fault
Date: Mon, 19 Nov 2018 13:47:59 -0800
Message-Id: <20181119214809.6086-17-yu-cheng.yu@intel.com>
In-Reply-To: <20181119214809.6086-1-yu-cheng.yu@intel.com>
References: <20181119214809.6086-1-yu-cheng.yu@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
    Balbir Singh, Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
    Florian Weimer, "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook,
    Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
    Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar", Vedvyas Shanbhogue
Cc: Yu-cheng Yu

This patch implements THP shadow stack (SHSTK) copying in the same way
as the previous patch does for regular PTEs: in copy_huge_pmd(), clear
the dirty bit from the PMD so that the next SHSTK access to it causes a
page fault. At fault time, fix up the PMD and copy or reuse the page.

Signed-off-by: Yu-cheng Yu

---
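Note, not part of the patch: the pmd_mkdirty_shstk() helper relied on
below is introduced elsewhere in this series. As a rough sketch of the
idea, assuming it mirrors the PTE-level helper from the previous patch,
it could look like the following; the _PAGE_DIRTY_HW/_PAGE_DIRTY_SW bit
names are assumptions taken from that pattern, not from this diff:

	/*
	 * Sketch only: a shadow stack PMD is read-only plus
	 * hardware-dirty. Moving the dirty bit from its software
	 * copy back into the hardware bit turns the "fault on next
	 * SHSTK access" entry left by copy_huge_pmd() back into a
	 * live shadow stack mapping.
	 */
	static inline pmd_t pmd_mkdirty_shstk(pmd_t pmd)
	{
		pmd = pmd_clear_flags(pmd, _PAGE_DIRTY_SW);
		return pmd_set_flags(pmd, _PAGE_DIRTY_HW);
	}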
 arch/x86/mm/pgtable.c         | 8 ++++++++
 include/asm-generic/pgtable.h | 2 ++
 mm/huge_memory.c              | 4 ++++
 3 files changed, 14 insertions(+)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 75dddc3d8451..4275c80f5832 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -897,6 +897,14 @@ inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma)
 	return pte;
 }
 
+inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_mkdirty_shstk(pmd);
+	else
+		return pmd;
+}
+
 inline bool arch_copy_pte_mapping(vm_flags_t vm_flags)
 {
 	return (vm_flags & VM_SHSTK);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 30ac390fb2d4..b0b375d8bb34 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1145,9 +1145,11 @@ static inline bool arch_has_pfn_modify_check(void)
 
 #ifndef CONFIG_ARCH_HAS_SHSTK
 #define pte_set_vma_features(pte, vma) pte
+#define pmd_set_vma_features(pmd, vma) pmd
 #define arch_copy_pte_mapping(vma_flags) false
 #else
 pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
+pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
 bool arch_copy_pte_mapping(vm_flags_t vm_flags);
 #endif
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 55478ab3c83b..12148a5b60e0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		page_add_new_anon_rmap(page, vma, haddr, true);
 		mem_cgroup_commit_charge(page, memcg, false, true);
 		lru_cache_add_active_or_unevictable(page, vma);
@@ -1209,6 +1210,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 		pte_t entry;
 		entry = mk_pte(pages[i], vma->vm_page_prot);
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = pte_set_vma_features(entry, vma);
 		memcg = (void *)page_private(pages[i]);
 		set_page_private(pages[i], 0);
 		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
@@ -1293,6 +1295,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
 			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		ret |= VM_FAULT_WRITE;
@@ -1365,6 +1368,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
 		page_add_new_anon_rmap(new_page, vma, haddr, true);
 		mem_cgroup_commit_charge(new_page, memcg, false, true);
-- 
2.17.1