Subject: Re: [PATCH v1 3/5] mm: ptdump: Provide page size to notepage()
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, akpm@linux-foundation.org
Cc: linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org, x86@kernel.org, linux-mm@kvack.org
From: Steven Price
Message-ID: <41819925-3ee5-4771-e98b-0073e8f095cf@arm.com>
Date: Fri, 16 Apr 2021 10:28:56 +0100
In-Reply-To: <1ef6b954fb7b0f4dfc78820f1e612d2166c13227.1618506910.git.christophe.leroy@csgroup.eu>

On 15/04/2021 18:18, Christophe Leroy wrote:
> In order to support large pages on powerpc, notepage()
> needs to know the page size of the page.
>
> Add a page_size argument to notepage().
>
> Signed-off-by: Christophe Leroy
> ---
>  arch/arm64/mm/ptdump.c         |  2 +-
>  arch/riscv/mm/ptdump.c         |  2 +-
>  arch/s390/mm/dump_pagetables.c |  3 ++-
>  arch/x86/mm/dump_pagetables.c  |  2 +-
>  include/linux/ptdump.h         |  2 +-
>  mm/ptdump.c                    | 16 ++++++++--------
>  6 files changed, 14 insertions(+), 13 deletions(-)
>
[...]
> diff --git a/mm/ptdump.c b/mm/ptdump.c
> index da751448d0e4..61cd16afb1c8 100644
> --- a/mm/ptdump.c
> +++ b/mm/ptdump.c
> @@ -17,7 +17,7 @@ static inline int note_kasan_page_table(struct mm_walk *walk,
>  {
>  	struct ptdump_state *st = walk->private;
>  
> -	st->note_page(st, addr, 4, pte_val(kasan_early_shadow_pte[0]));
> +	st->note_page(st, addr, 4, pte_val(kasan_early_shadow_pte[0]), PAGE_SIZE);

I'm not completely sure what the page_size is going to be used for, but note that KASAN presents an interesting case here. We short-cut by detecting that it's a KASAN region at a high level (PGD/P4D/PUD/PMD) and, instead of walking the tree down, just call note_page() *once*, but with level==4, because we know KASAN sets up the page table like that.

However, that one call actually covers a much larger region - so while PAGE_SIZE matches the level, it doesn't match the region covered. AFAICT this will lead to odd results if you enable KASAN on powerpc.

To be honest, I don't fully understand why powerpc requires the page_size - it appears to be using it purely to find "holes" in the calls to note_page(), but I haven't worked out why such holes would occur.
Steve

>  
>  	walk->action = ACTION_CONTINUE;
>  
> @@ -41,7 +41,7 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
>  		st->effective_prot(st, 0, pgd_val(val));
>  
>  	if (pgd_leaf(val))
> -		st->note_page(st, addr, 0, pgd_val(val));
> +		st->note_page(st, addr, 0, pgd_val(val), PGDIR_SIZE);
>  
>  	return 0;
>  }
>  
> @@ -62,7 +62,7 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
>  		st->effective_prot(st, 1, p4d_val(val));
>  
>  	if (p4d_leaf(val))
> -		st->note_page(st, addr, 1, p4d_val(val));
> +		st->note_page(st, addr, 1, p4d_val(val), P4D_SIZE);
>  
>  	return 0;
>  }
>  
> @@ -83,7 +83,7 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
>  		st->effective_prot(st, 2, pud_val(val));
>  
>  	if (pud_leaf(val))
> -		st->note_page(st, addr, 2, pud_val(val));
> +		st->note_page(st, addr, 2, pud_val(val), PUD_SIZE);
>  
>  	return 0;
>  }
>  
> @@ -102,7 +102,7 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
>  	if (st->effective_prot)
>  		st->effective_prot(st, 3, pmd_val(val));
>  	if (pmd_leaf(val))
> -		st->note_page(st, addr, 3, pmd_val(val));
> +		st->note_page(st, addr, 3, pmd_val(val), PMD_SIZE);
>  
>  	return 0;
>  }
>  
> @@ -116,7 +116,7 @@ static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
>  	if (st->effective_prot)
>  		st->effective_prot(st, 4, pte_val(val));
>  
> -	st->note_page(st, addr, 4, pte_val(val));
> +	st->note_page(st, addr, 4, pte_val(val), PAGE_SIZE);
>  
>  	return 0;
>  }
>  
> @@ -126,7 +126,7 @@ static int ptdump_hole(unsigned long addr, unsigned long next,
>  {
>  	struct ptdump_state *st = walk->private;
>  
> -	st->note_page(st, addr, depth, 0);
> +	st->note_page(st, addr, depth, 0, 0);
>  
>  	return 0;
>  }
>  
> @@ -153,5 +153,5 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
>  	mmap_read_unlock(mm);
>  
>  	/* Flush out the last page */
> -	st->note_page(st, 0, -1, 0);
> +	st->note_page(st, 0, -1, 0, 0);
>  }
> 