From mboxrd@z Thu Jan 1 00:00:00 1970
From: Steven Price <steven.price@arm.com>
To: linux-mm@kvack.org
Cc: Steven Price, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
	James Morse, Jérôme Glisse, Peter Zijlstra, Thomas Gleixner,
	Will Deacon, x86@kernel.org, "H. Peter Anvin",
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Mark Rutland, "Liang, Kan"
Subject: [PATCH v6 19/19] x86: mm: Convert dump_pagetables to use walk_page_range
Date: Tue, 26 Mar 2019 16:26:24 +0000
Message-Id: <20190326162624.20736-20-steven.price@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190326162624.20736-1-steven.price@arm.com>
References: <20190326162624.20736-1-steven.price@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make use of the new functionality in walk_page_range to remove the
arch page walking code and use the generic code to walk the page
tables.

The effective permissions are passed down the chain using new fields
in struct pg_state.
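For example, at the PMD level the entry callback folds this level's
flags into the effective permissions inherited from the PUD level and
records the result for the PTE level. A condensed sketch of that
pattern, with explanatory comments added (see ptdump_pmd_entry() in
the patch below for the full version; this excerpt is not buildable
on its own):

	static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
				    unsigned long next, struct mm_walk *walk)
	{
		struct pg_state *st = walk->private;
		pgprotval_t prot = pmd_flags(*pmd);
		/* Combine this level's flags with those inherited from above. */
		pgprotval_t eff = effective_prot(st->effective_prot_pud, prot);

		if (pmd_large(*pmd))	/* leaf entry: report it immediately */
			note_page(st, __pgprot(prot), eff, 4);

		st->effective_prot_pmd = eff;	/* picked up by ptdump_pte_entry() */
		return 0;
	}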
The KASAN optimisation is implemented by including test_p?d callbacks,
which can decide to skip an entire tree of entries.

Signed-off-by: Steven Price <steven.price@arm.com>
---
A short sketch of the test_p?d skip pattern follows the patch.

 arch/x86/mm/dump_pagetables.c | 280 ++++++++++++++++++----------
 1 file changed, 146 insertions(+), 134 deletions(-)

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index c0fbb9e5a790..f6b814aaddf7 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -33,6 +33,10 @@ struct pg_state {
 	int level;
 	pgprot_t current_prot;
 	pgprotval_t effective_prot;
+	pgprotval_t effective_prot_pgd;
+	pgprotval_t effective_prot_p4d;
+	pgprotval_t effective_prot_pud;
+	pgprotval_t effective_prot_pmd;
 	unsigned long start_address;
 	unsigned long current_address;
 	const struct addr_marker *marker;
@@ -356,22 +360,21 @@ static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
 	       ((prot1 | prot2) & _PAGE_NX);
 }
 
-static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in,
-			   unsigned long P)
+static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
 {
-	int i;
-	pte_t *pte;
-	pgprotval_t prot, eff;
-
-	for (i = 0; i < PTRS_PER_PTE; i++) {
-		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
-		pte = pte_offset_map(&addr, st->current_address);
-		prot = pte_flags(*pte);
-		eff = effective_prot(eff_in, prot);
-		note_page(st, __pgprot(prot), eff, 5);
-		pte_unmap(pte);
-	}
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	st->current_address = normalize_addr(addr);
+
+	prot = pte_flags(*pte);
+	eff = effective_prot(st->effective_prot_pmd, prot);
+	note_page(st, __pgprot(prot), eff, 5);
+
+	return 0;
 }
+
 #ifdef CONFIG_KASAN
 
 /*
@@ -400,131 +403,152 @@ static inline bool kasan_page_table(struct pg_state *st, void *pt)
 }
 #endif
 
-#if PTRS_PER_PMD > 1
-
-static void walk_pmd_level(struct pg_state *st, pud_t addr,
-			   pgprotval_t eff_in, unsigned long P)
+static int ptdump_test_pmd(unsigned long addr, unsigned long next,
+			   pmd_t *pmd, struct mm_walk *walk)
 {
-	int i;
-	pmd_t *start, *pmd_start;
-	pgprotval_t prot, eff;
-
-	pmd_start = start = (pmd_t *)pud_page_vaddr(addr);
-	for (i = 0; i < PTRS_PER_PMD; i++) {
-		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
-		if (!pmd_none(*start)) {
-			prot = pmd_flags(*start);
-			eff = effective_prot(eff_in, prot);
-			if (pmd_large(*start) || !pmd_present(*start)) {
-				note_page(st, __pgprot(prot), eff, 4);
-			} else if (!kasan_page_table(st, pmd_start)) {
-				walk_pte_level(st, *start, eff,
-					       P + i * PMD_LEVEL_MULT);
-			}
-		} else
-			note_page(st, __pgprot(0), 0, 4);
-		start++;
-	}
+	struct pg_state *st = walk->private;
+
+	st->current_address = normalize_addr(addr);
+
+	if (kasan_page_table(st, pmd))
+		return 1;
+	return 0;
 }
 
-#else
-#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p)
-#undef pud_large
-#define pud_large(a) pmd_large(__pmd(pud_val(a)))
-#define pud_none(a) pmd_none(__pmd(pud_val(a)))
-#endif
+static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = pmd_flags(*pmd);
+	eff = effective_prot(st->effective_prot_pud, prot);
+
+	st->current_address = normalize_addr(addr);
+
+	if (pmd_large(*pmd))
+		note_page(st, __pgprot(prot), eff, 4);
 
-#if PTRS_PER_PUD > 1
+	st->effective_prot_pmd = eff;
 
-static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in,
-			   unsigned long P)
+	return 0;
+}
+
+static int ptdump_test_pud(unsigned long addr, unsigned long next,
+			   pud_t *pud, struct mm_walk *walk)
 {
-	int i;
-	pud_t *start, *pud_start;
-	pgprotval_t prot, eff;
-
-	pud_start = start = (pud_t *)p4d_page_vaddr(addr);
-
-	for (i = 0; i < PTRS_PER_PUD; i++) {
-		st->current_address = normalize_addr(P + i * PUD_LEVEL_MULT);
-		if (!pud_none(*start)) {
-			prot = pud_flags(*start);
-			eff = effective_prot(eff_in, prot);
-			if (pud_large(*start) || !pud_present(*start)) {
-				note_page(st, __pgprot(prot), eff, 3);
-			} else if (!kasan_page_table(st, pud_start)) {
-				walk_pmd_level(st, *start, eff,
-					       P + i * PUD_LEVEL_MULT);
-			}
-		} else
-			note_page(st, __pgprot(0), 0, 3);
+	struct pg_state *st = walk->private;
 
-		start++;
-	}
+	st->current_address = normalize_addr(addr);
+
+	if (kasan_page_table(st, pud))
+		return 1;
+	return 0;
 }
 
-#else
-#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p)
-#undef p4d_large
-#define p4d_large(a) pud_large(__pud(p4d_val(a)))
-#define p4d_none(a) pud_none(__pud(p4d_val(a)))
-#endif
+static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = pud_flags(*pud);
+	eff = effective_prot(st->effective_prot_p4d, prot);
+
+	st->current_address = normalize_addr(addr);
+
+	if (pud_large(*pud))
+		note_page(st, __pgprot(prot), eff, 3);
+
+	st->effective_prot_pud = eff;
 
-static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in,
-			   unsigned long P)
+	return 0;
+}
+
+static int ptdump_test_p4d(unsigned long addr, unsigned long next,
+			   p4d_t *p4d, struct mm_walk *walk)
 {
-	int i;
-	p4d_t *start, *p4d_start;
-	pgprotval_t prot, eff;
-
-	if (PTRS_PER_P4D == 1)
-		return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P);
-
-	p4d_start = start = (p4d_t *)pgd_page_vaddr(addr);
-
-	for (i = 0; i < PTRS_PER_P4D; i++) {
-		st->current_address = normalize_addr(P + i * P4D_LEVEL_MULT);
-		if (!p4d_none(*start)) {
-			prot = p4d_flags(*start);
-			eff = effective_prot(eff_in, prot);
-			if (p4d_large(*start) || !p4d_present(*start)) {
-				note_page(st, __pgprot(prot), eff, 2);
-			} else if (!kasan_page_table(st, p4d_start)) {
-				walk_pud_level(st, *start, eff,
-					       P + i * P4D_LEVEL_MULT);
-			}
-		} else
-			note_page(st, __pgprot(0), 0, 2);
+	struct pg_state *st = walk->private;
 
-		start++;
-	}
+	st->current_address = normalize_addr(addr);
+
+	if (kasan_page_table(st, p4d))
+		return 1;
+	return 0;
 }
 
-#undef pgd_large
-#define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a))))
-#define pgd_none(a) (pgtable_l5_enabled() ? pgd_none(a) : p4d_none(__p4d(pgd_val(a))))
+static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = p4d_flags(*p4d);
+	eff = effective_prot(st->effective_prot_pgd, prot);
+
+	st->current_address = normalize_addr(addr);
+
+	if (p4d_large(*p4d))
+		note_page(st, __pgprot(prot), eff, 2);
+
+	st->effective_prot_p4d = eff;
+
+	return 0;
+}
 
-static inline bool is_hypervisor_range(int idx)
+static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
 {
-#ifdef CONFIG_X86_64
-	/*
-	 * A hole in the beginning of kernel address space reserved
-	 * for a hypervisor.
-	 */
-	return (idx >= pgd_index(GUARD_HOLE_BASE_ADDR)) &&
-		(idx < pgd_index(GUARD_HOLE_END_ADDR));
+	struct pg_state *st = walk->private;
+	pgprotval_t eff, prot;
+
+	prot = pgd_flags(*pgd);
+
+#ifdef CONFIG_X86_PAE
+	eff = _PAGE_USER | _PAGE_RW;
 #else
-	return false;
+	eff = prot;
 #endif
+
+	st->current_address = normalize_addr(addr);
+
+	if (pgd_large(*pgd))
+		note_page(st, __pgprot(prot), eff, 1);
+
+	st->effective_prot_pgd = eff;
+
+	return 0;
+}
+
+static int ptdump_hole(unsigned long addr, unsigned long next,
+		       struct mm_walk *walk)
+{
+	struct pg_state *st = walk->private;
+
+	st->current_address = normalize_addr(addr);
+
+	note_page(st, __pgprot(0), 0, -1);
+
+	return 0;
 }
 
 static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
 				       bool checkwx, bool dmesg)
 {
-	pgd_t *start = mm->pgd;
-	pgprotval_t prot, eff;
-	int i;
 	struct pg_state st = {};
+	struct mm_walk walk = {
+		.mm = mm,
+		.pgd_entry = ptdump_pgd_entry,
+		.p4d_entry = ptdump_p4d_entry,
+		.pud_entry = ptdump_pud_entry,
+		.pmd_entry = ptdump_pmd_entry,
+		.pte_entry = ptdump_pte_entry,
+		.test_p4d = ptdump_test_p4d,
+		.test_pud = ptdump_test_pud,
+		.test_pmd = ptdump_test_pmd,
+		.pte_hole = ptdump_hole,
+		.private = &st
+	};
 
 	st.to_dmesg = dmesg;
 	st.check_wx = checkwx;
@@ -532,27 +556,15 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
 	if (checkwx)
 		st.wx_pages = 0;
 
-	for (i = 0; i < PTRS_PER_PGD; i++) {
-		st.current_address = normalize_addr(i * PGD_LEVEL_MULT);
-		if (!pgd_none(*start) && !is_hypervisor_range(i)) {
-			prot = pgd_flags(*start);
-#ifdef CONFIG_X86_PAE
-			eff = _PAGE_USER | _PAGE_RW;
+	down_read(&mm->mmap_sem);
+#ifdef CONFIG_X86_64
+	walk_page_range(0, PTRS_PER_PGD*PGD_LEVEL_MULT/2, &walk);
+	walk_page_range(normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT/2), ~0,
+			&walk);
 #else
-			eff = prot;
+	walk_page_range(0, ~0, &walk);
 #endif
-			if (pgd_large(*start) || !pgd_present(*start)) {
-				note_page(&st, __pgprot(prot), eff, 1);
-			} else {
-				walk_p4d_level(&st, *start, eff,
-					       i * PGD_LEVEL_MULT);
-			}
-		} else
-			note_page(&st, __pgprot(0), 0, 1);
-
-		cond_resched();
-		start++;
-	}
+	up_read(&mm->mmap_sem);
 
 	/* Flush out the last page */
 	st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT);
-- 
2.20.1
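Postscript for reviewers: the test_p?d callbacks above all follow the
same skip pattern. A sketch with explanatory comments added (it assumes
the walk_page_range() semantics introduced earlier in this series,
where a non-zero return from a test callback makes the walker skip the
whole subtree under that entry):

	static int ptdump_test_pmd(unsigned long addr, unsigned long next,
				   pmd_t *pmd, struct mm_walk *walk)
	{
		struct pg_state *st = walk->private;

		st->current_address = normalize_addr(addr);

		/*
		 * If this table is one of the shared KASAN zero-page
		 * tables, kasan_page_table() notes it once and the
		 * walker is told not to descend into it.
		 */
		if (kasan_page_table(st, pmd))
			return 1;	/* skip the entire subtree */
		return 0;		/* descend as normal */
	}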