Date: Mon, 4 Jul 2022 17:38:15 +0100
From: Will Deacon
To: "guanghui.fgh"
Cc: baolin.wang@linux.alibaba.com, catalin.marinas@arm.com,
	akpm@linux-foundation.org, david@redhat.com, jianyong.wu@arm.com,
	james.morse@arm.com, quic_qiancai@quicinc.com,
	christophe.leroy@csgroup.eu, jonathan@marek.ca, mark.rutland@arm.com,
	thunder.leizhen@huawei.com, anshuman.khandual@arm.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	rppt@kernel.org, geert+renesas@glider.be, ardb@kernel.org,
	linux-mm@kvack.org, yaohongbo@linux.alibaba.com,
	alikernel-developer@linux.alibaba.com
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation
Message-ID: <20220704163815.GA32177@willie-the-truck>
References: <1656777473-73887-1-git-send-email-guanghuifeng@linux.alibaba.com>
 <20220704103523.GC31437@willie-the-truck>
 <73f0c53b-fd17-c5e9-3773-1d71e564eb50@linux.alibaba.com>
 <20220704111402.GA31553@willie-the-truck>
 <4accaeda-572f-f72d-5067-2d0999e4d00a@linux.alibaba.com>
 <20220704131516.GC31684@willie-the-truck>
 <2ae1cae0-ee26-aa59-7ed9-231d67194dce@linux.alibaba.com>
 <20220704142313.GE31684@willie-the-truck>
In-Reply-To: <6977c692-78ca-5a67-773e-0389c85f2650@linux.alibaba.com>

On Mon, Jul 04, 2022 at 10:34:07PM +0800, guanghui.fgh wrote:
> Thanks.
>
> On 2022/7/4 22:23, Will Deacon wrote:
> > On Mon, Jul 04, 2022 at 10:11:27PM +0800, guanghui.fgh wrote:
> > > On 2022/7/4 21:15, Will Deacon wrote:
> > > > On Mon, Jul 04, 2022 at 08:05:59PM +0800, guanghui.fgh wrote:
> > > > > > > 1. Quoted messages from arch/arm64/mm/init.c:
> > > > > > >
> > > > > > > "Memory reservation for crash kernel either done early or
> > > > > > > deferred depending on DMA memory zones configs (ZONE_DMA) --
> > > > > > >
> > > > > > > In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
> > > > > > > here instead of max_zone_phys(). This lets early reservation of
> > > > > > > crash kernel memory which has a dependency on
> > > > > > > arm64_dma_phys_limit. Reserving memory early for crash kernel
> > > > > > > allows linear creation of block mappings (greater than
> > > > > > > page-granularity) for all the memory bank ranges. In this
> > > > > > > scheme a comparatively quicker boot is observed.
> > > > > > >
> > > > > > > If ZONE_DMA configs are defined, crash kernel memory reservation
> > > > > > > is delayed until DMA zone memory range size initialization
> > > > > > > performed in zone_sizes_init(). The defer is necessary to steer
> > > > > > > clear of DMA zone memory range to avoid overlap allocation.
> > > > > > >
> > > > > > > [[[
> > > > > > > So crash kernel memory boundaries are not known when mapping all
> > > > > > > bank memory ranges, which otherwise means not possible to
> > > > > > > exclude crash kernel range from creating block mappings so
> > > > > > > page-granularity mappings are created for the entire memory
> > > > > > > range.
> > > > > > > ]]]"
> > > > > > >
> > > > > > > Namely, the init order is: memblock init ---> linear mem mapping
> > > > > > > (4K mapping for the crashkernel, which requires page-granularity
> > > > > > > changes) ---> zone DMA limit ---> reserve crashkernel.
> > > > > > > So when ZONE_DMA is enabled and a crashkernel is used, the
> > > > > > > memory mapping uses 4K mappings.
> > > > > >
> > > > > > Yes, I understand that is how things work today but I'm saying
> > > > > > that we may as well leave the crashkernel mapped (at block
> > > > > > granularity) if !can_set_direct_map() and then I think your patch
> > > > > > becomes a lot simpler.
> > > > >
> > > > > But page-granularity mappings are necessary for the crash kernel
> > > > > memory range for shrinking its size via the
> > > > > /sys/kernel/kexec_crash_size interface (quoted from
> > > > > arch/arm64/mm/init.c).
> > > > > So this patch splits the block/section mappings into 4K
> > > > > page-granularity mappings for the crashkernel memory.
> > > >
> > > > Why? I don't see why the mapping granularity is relevant at all if we
> > > > always leave the whole thing mapped.
> > >
> > > There is another reason.
> > >
> > > When loading the crashkernel finishes, do_kexec_load will use
> > > arch_kexec_protect_crashkres to invalidate all the page table entries
> > > for the crashkernel memory (protecting the crashkernel memory from
> > > access).
> > >
> > > arch_kexec_protect_crashkres ---> set_memory_valid ---> ... --->
> > > apply_to_pmd_range
> > >
> > > In apply_to_pmd_range there is a check: BUG_ON(pud_huge(*pud)). So if
> > > the crashkernel uses block/section mappings, this BUG_ON will trigger.
> > >
> > > Namely, we need to use non-block/section mappings for the crashkernel
> > > memory before shrinking it.
> >
> > Well, yes, but we can change arch_kexec_[un]protect_crashkres() not to
> > do that if we're leaving the thing mapped, no?
>
> I think we should use arch_kexec_[un]protect_crashkres for the crashkernel
> memory.
>
> Because when the crashkernel page tables are invalidated, there is no
> chance of reading or writing the crashkernel memory by mistake.
>
> If we don't use arch_kexec_[un]protect_crashkres to invalidate the
> crashkernel page tables, there may be stray writes to this memory, which
> could cause crashkernel boot errors and vmcore saving errors.

I don't really buy this line of reasoning. The entire main kernel is
writable, so why do we care about protecting the crashkernel so much? The
_code_ to launch the crash kernel is writable! If you care about preventing
writes to memory which should not be writable, then you should use
rodata=full.

Will
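
For illustration, a minimal sketch of the direction suggested above: have
arch_kexec_[un]protect_crashkres() bail out when the linear map cannot be
modified at page granularity, so the crashkernel region simply stays
(block-)mapped and set_memory_valid() is never asked to walk a block mapping.
The helper bodies below are paraphrased rather than copied from the arm64
tree, so treat them as a sketch of the idea and not as the actual patch
discussed in this thread; can_set_direct_map(), set_memory_valid() and
kexec_crash_image are existing kernel symbols.

void arch_kexec_protect_crashkres(void)
{
	int i;

	/*
	 * If the linear map was built with block/section mappings, the
	 * crashkernel region cannot (and need not) be invalidated here:
	 * set_memory_valid() would hit BUG_ON(pud_huge(*pud)) in
	 * apply_to_pmd_range() when it walks a block mapping.
	 */
	if (!can_set_direct_map())
		return;

	for (i = 0; i < kexec_crash_image->nr_segments; i++)
		set_memory_valid(
			__phys_to_virt(kexec_crash_image->segment[i].mem),
			kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 0);
}

void arch_kexec_unprotect_crashkres(void)
{
	int i;

	/* Nothing was invalidated above, so there is nothing to restore. */
	if (!can_set_direct_map())
		return;

	for (i = 0; i < kexec_crash_image->nr_segments; i++)
		set_memory_valid(
			__phys_to_virt(kexec_crash_image->segment[i].mem),
			kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 1);
}

With this shape, the crashkernel range can keep block mappings whenever
!can_set_direct_map(), and the page-granularity split is only needed on
configurations where the region really is unmapped while a crash image is
loaded.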