From: zhongjiang <zhongjiang@huawei.com>
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] kexec: add a pmd huge entry condition during page table setup
Date: Mon, 11 Jul 2016 14:36:01 +0800
Message-ID: <1468218961-11018-2-git-send-email-zhongjiang@huawei.com>
In-Reply-To: <1468218961-11018-1-git-send-email-zhongjiang@huawei.com>
References: <1468218961-11018-1-git-send-email-zhongjiang@huawei.com>

From: zhong jiang <zhongjiang@huawei.com>

When an image is loaded into the kernel, we need to set up a page table
for it, and a new mapping is created for every valid pfn. The mapping
code installs a huge pmd entry whenever pud_present() is true, so the
code segment that relocate_kernel points to can lie inside a huge pmd
entry when init_transition_pgtable() walks to it. Take that situation
into account by splitting such a huge pmd into individual pte entries.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 arch/x86/kernel/machine_kexec_64.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 5a294e4..c33e344 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -14,6 +14,7 @@
 #include <linux/gfp.h>
 #include <linux/reboot.h>
 #include <linux/numa.h>
+#include <linux/hugetlb.h>
 #include <linux/ftrace.h>
 #include <linux/io.h>
 #include <linux/suspend.h>
@@ -34,6 +35,17 @@ static struct kexec_file_ops *kexec_file_loaders[] = {
 };
 #endif
 
+static void split_pmd(pmd_t *pmd, pte_t *pte)
+{
+	unsigned long pfn = pmd_pfn(*pmd);
+	int i = 0;
+
+	do {
+		set_pte(pte, pfn_pte(pfn, PAGE_KERNEL_EXEC));
+		pfn++;
+	} while (pte++, i++, i < PTRS_PER_PTE);
+}
+
 static void free_transition_pgtable(struct kimage *image)
 {
 	free_page((unsigned long)image->arch.pud);
@@ -68,15 +80,19 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	}
 	pmd = pmd_offset(pud, vaddr);
-	if (!pmd_present(*pmd)) {
+	if (!pmd_present(*pmd) || pmd_huge(*pmd)) {
 		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
 		if (!pte)
 			goto err;
 		image->arch.pte = pte;
-		set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
+		if (pmd_huge(*pmd))
+			split_pmd(pmd, pte);
+		else
+			set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
 	}
 	pte = pte_offset_kernel(pmd, vaddr);
 	set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC));
+
 	return 0;
 err:
 	free_transition_pgtable(image);
-- 
1.8.3.1
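
For readers who want to see the split arithmetic in isolation, here is a
minimal userspace sketch of what split_pmd() above does. It is an
illustration only: pte_t, pfn_pte() and the 0x163 protection bits
(standing in for PAGE_KERNEL_EXEC) are simplified stand-ins for the
kernel's definitions, not the real ones.

/* Minimal userspace model of splitting one 2 MiB huge pmd into ptes. */
#include <stdio.h>
#include <stdint.h>

#define PTRS_PER_PTE	512	/* pte entries per page-table page on x86-64 */

typedef struct { uint64_t val; } pte_t;

/* Stand-in for the kernel's pfn_pte(): pfn in bits 12+, flags below. */
static pte_t pfn_pte(uint64_t pfn, uint64_t prot)
{
	return (pte_t){ (pfn << 12) | prot };
}

/* Same loop shape as the patch's split_pmd(): one pte per 4 KiB frame. */
static void split_pmd_sim(uint64_t huge_pfn, pte_t *pte)
{
	uint64_t pfn = huge_pfn;
	int i = 0;

	do {
		*pte = pfn_pte(pfn, 0x163);	/* present|rw|accessed|dirty|global */
		pfn++;
	} while (pte++, i++, i < PTRS_PER_PTE);
}

int main(void)
{
	static pte_t page_table[PTRS_PER_PTE];

	/* A huge pmd whose first frame is pfn 0x80000 (2 MiB aligned). */
	split_pmd_sim(0x80000, page_table);

	printf("pte[0]   = %#llx\n", (unsigned long long)page_table[0].val);
	printf("pte[511] = %#llx\n", (unsigned long long)page_table[511].val);
	return 0;
}

A 2 MiB huge pmd covers 512 contiguous 4 KiB frames, so the loop runs
PTRS_PER_PTE times and each new pte's pfn is simply the huge mapping's
starting pfn plus the entry index; run, this prints 0x80000163 for
pte[0] and 0x801ff163 for pte[511].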