From: "guanghui.fgh" <guanghuifeng@linux.alibaba.com>
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation
Date: Wed, 6 Jul 2022 23:18:22 +0800
Message-ID: <9974bea5-4db9-0104-c9c9-d9b49c390f1b@linux.alibaba.com>
To: Mike Rapoport, Catalin Marinas
Cc: Will Deacon, Ard Biesheuvel, baolin.wang@linux.alibaba.com,
 akpm@linux-foundation.org, david@redhat.com, jianyong.wu@arm.com,
 james.morse@arm.com, quic_qiancai@quicinc.com, christophe.leroy@csgroup.eu,
 jonathan@marek.ca, mark.rutland@arm.com, thunder.leizhen@huawei.com,
 anshuman.khandual@arm.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, geert+renesas@glider.be, linux-mm@kvack.org,
 yaohongbo@linux.alibaba.com, alikernel-developer@linux.alibaba.com
References: <20220705095231.GB552@willie-the-truck>
 <5d044fdd-a61a-d60f-d294-89e17de37712@linux.alibaba.com>
 <20220705121115.GB1012@willie-the-truck>
Thanks.

On 2022/7/6 21:54, Mike Rapoport wrote:
> On Wed, Jul 06, 2022 at 11:04:24AM +0100, Catalin Marinas wrote:
>> On Tue, Jul 05, 2022 at 11:45:40PM +0300, Mike Rapoport wrote:
>>> On Tue, Jul 05, 2022 at 06:05:01PM +0100, Catalin Marinas wrote:
>>>> On Tue, Jul 05, 2022 at 06:57:53PM +0300, Mike Rapoport wrote:
>>>>> On Tue, Jul 05, 2022 at 04:34:09PM +0100, Catalin Marinas wrote:
>>>>>> On Tue, Jul 05, 2022 at 06:02:02PM +0300, Mike Rapoport wrote:
>>>>>>> +void __init remap_crashkernel(void)
>>>>>>> +{
>>>>>>> +#ifdef CONFIG_KEXEC_CORE
>>>>>>> +	phys_addr_t start, end, size;
>>>>>>> +	phys_addr_t aligned_start, aligned_end;
>>>>>>> +
>>>>>>> +	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
>>>>>>> +		return;
>>>>>>> +
>>>>>>> +	if (!crashk_res.end)
>>>>>>> +		return;
>>>>>>> +
>>>>>>> +	start = crashk_res.start & PAGE_MASK;
>>>>>>> +	end = PAGE_ALIGN(crashk_res.end);
>>>>>>> +
>>>>>>> +	aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
>>>>>>> +	aligned_end = ALIGN(end, PUD_SIZE);
>>>>>>> +
>>>>>>> +	/* Clear PUDs containing crash kernel memory */
>>>>>>> +	unmap_hotplug_range(__phys_to_virt(aligned_start),
>>>>>>> +			    __phys_to_virt(aligned_end), false, NULL);
>>>>>>
>>>>>> What I don't understand is what happens if there's valid kernel data
>>>>>> between aligned_start and crashk_res.start (or the other end of the
>>>>>> range).
>>>>>
>>>>> Data shouldn't go anywhere :)
>>>>>
>>>>> There is
>>>>>
>>>>> +	/* map area from PUD start to start of crash kernel with large pages */
>>>>> +	size = start - aligned_start;
>>>>> +	__create_pgd_mapping(swapper_pg_dir, aligned_start,
>>>>> +			     __phys_to_virt(aligned_start),
>>>>> +			     size, PAGE_KERNEL, early_pgtable_alloc, 0);
>>>>>
>>>>> and
>>>>>
>>>>> +	/* map area from end of crash kernel to PUD end with large pages */
>>>>> +	size = aligned_end - end;
>>>>> +	__create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
>>>>> +			     size, PAGE_KERNEL, early_pgtable_alloc, 0);
>>>>>
>>>>> after the unmap, so after we tear down a part of the linear map we
>>>>> immediately recreate it, just with a different page size.
>>>>>
>>>>> This all happens before SMP, so there is no concurrency at that point.
>>>>
>>>> That brief period of unmap worries me. The kernel text, data and stack
>>>> are all in the vmalloc space, but any other (memblock) allocation up to
>>>> this point may be in the unmapped range before and after the crashkernel
>>>> reservation. The interrupts are off, so I think the only allocation and
>>>> potential access that may go in this range is the page table itself. But
>>>> it looks fragile to me.
>>>
>>> I agree there is a chance there will be an allocation from the unmapped
>>> range.
>>>
>>> We can make sure this won't happen, though. We can cap the memblock
>>> allocations with memblock_set_current_limit(aligned_end) or
>>> memblock_reserve(aligned_start, aligned_end) until the mappings are
>>> restored.
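(To make that capping concrete: a minimal sketch only, reusing
aligned_start/aligned_end from the v4 hunk above, and capping at
aligned_start so that top-down memblock allocations cannot land inside
the unmapped window; untested.)

	phys_addr_t old_limit = memblock_get_current_limit();

	/* Keep new memblock allocations below the range we are about to unmap. */
	memblock_set_current_limit(aligned_start);

	unmap_hotplug_range(__phys_to_virt(aligned_start),
			    __phys_to_virt(aligned_end), false, NULL);

	/* ... the two __create_pgd_mapping() calls from the patch ... */

	/* The linear map is complete again; lift the cap. */
	memblock_set_current_limit(old_limit);
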
>> We can reserve the region just before unmapping to avoid new allocations
>> for the page tables but we can't do much about pages already allocated
>> prior to calling remap_crashkernel().
>
> Right, this was bothering me too after I re-read your previous email.
>
> One thing I can think of is to only remap the crash kernel memory if it is
> a part of an allocation that exactly fits into one or more PUDs.
>
> Say, in reserve_crashkernel() we try the memblock_phys_alloc() with
> PUD_SIZE as alignment and size rounded up to PUD_SIZE. If this allocation
> succeeds, we remap the entire area that now contains only memory allocated
> in reserve_crashkernel() and free the extra memory after remapping is done.
> If the large allocation fails, we fall back to the original size and
> alignment and don't allow unmapping crash kernel memory in
> arch_kexec_protect_crashkres().
>
>> --
>> Catalin
>

Thanks. There is another approach: I think we should use patch v3 (it is
similar, but needs some changes):

1. Walk the crashkernel block/section page table while [[[keeping the
   original block/section mapping valid]]]: build a new pte-level mapping
   for the crashkernel memory, and rebuild the left and right margins
   (the memory that shares a block/section mapping with the crashkernel
   region but lies outside it) with block/section mappings.

2. 'Replace' the original block/section mapping with the newly built
   mapping, entry by entry. With this method, all of the memory mappings
   stay valid all the time.

3. The patch v3 link (it needs some changes):
   https://lore.kernel.org/linux-mm/6dc308db-3685-4df5-506a-71f9e3794ec8@linux.alibaba.com/T/
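For Mike's allocation fallback above, a minimal sketch of how
reserve_crashkernel() could try the PUD-sized allocation first. The helper
name, the arm64_dma_phys_limit upper bound, and the SZ_2M fallback
alignment are assumptions here, not taken from the patch; untested:

	static phys_addr_t __init try_pud_sized_reservation(phys_addr_t size,
							    bool *can_unmap)
	{
		phys_addr_t big_size = ALIGN(size, PUD_SIZE);
		phys_addr_t base;

		/*
		 * Try a PUD-aligned allocation rounded up to PUD_SIZE, so the
		 * whole covering range belongs to the crash kernel and can be
		 * unmapped/remapped without touching anyone else's data.
		 */
		base = memblock_phys_alloc_range(big_size, PUD_SIZE,
						 0, arm64_dma_phys_limit);
		if (base) {
			*can_unmap = true;
			/*
			 * Per the suggestion, the tail should be freed only
			 * after the range has been remapped with base pages;
			 * it is freed here just to keep the sketch short.
			 */
			if (big_size > size)
				memblock_phys_free(base + size,
						   big_size - size);
			return base;
		}

		/*
		 * Fall back to the original size/alignment; in that case
		 * arch_kexec_protect_crashkres() must not unmap the region.
		 */
		*can_unmap = false;
		return memblock_phys_alloc_range(size, SZ_2M,
						 0, arm64_dma_phys_limit);
	}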
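And for steps 1-2 of the v3 approach, a hypothetical sketch of splitting
one PMD block. The helper name is invented (the real v3 patch differs),
attribute preservation is simplified to PAGE_KERNEL, and the requirements
for changing a live block mapping into a table mapping are glossed over:

	/*
	 * Split one PMD block covering crashkernel memory into base pages
	 * without ever leaving the virtual range unmapped.
	 */
	static void __init split_pmd_block(pmd_t *pmdp, unsigned long addr)
	{
		phys_addr_t phys = __pmd_to_phys(*pmdp);
		phys_addr_t pte_phys = early_pgtable_alloc(0); /* empty PTE table */
		pte_t *ptep = (pte_t *)__phys_to_virt(pte_phys);
		int i;

		/*
		 * Step 1: rebuild the old block mapping pte by pte in the
		 * new (not yet live) table; the old mapping stays valid.
		 */
		for (i = 0; i < PTRS_PER_PTE; i++)
			set_pte(ptep + i,
				pfn_pte(__phys_to_pfn(phys + i * PAGE_SIZE),
					PAGE_KERNEL));

		/*
		 * Step 2: 'replace' the block entry with the table entry in
		 * a single store, then invalidate the stale TLB entries.
		 */
		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
		flush_tlb_kernel_range(addr, addr + PMD_SIZE);
	}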