Message-ID: <5e73380a-be07-a3fe-8ee2-e38cd7f8fb2a@linux.alibaba.com>
Date: Wed, 6 Jul 2022 23:30:19 +0800
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation
From: "guanghui.fgh" <guanghuifeng@linux.alibaba.com>
To: Mike Rapoport, Catalin Marinas
Cc: Will Deacon, Ard Biesheuvel, baolin.wang@linux.alibaba.com, akpm@linux-foundation.org, david@redhat.com, jianyong.wu@arm.com, james.morse@arm.com, quic_qiancai@quicinc.com, christophe.leroy@csgroup.eu, jonathan@marek.ca, mark.rutland@arm.com, thunder.leizhen@huawei.com, anshuman.khandual@arm.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, geert+renesas@glider.be, linux-mm@kvack.org, yaohongbo@linux.alibaba.com, alikernel-developer@linux.alibaba.com
References: <20220705095231.GB552@willie-the-truck> <5d044fdd-a61a-d60f-d294-89e17de37712@linux.alibaba.com> <20220705121115.GB1012@willie-the-truck> <9974bea5-4db9-0104-c9c9-d9b49c390f1b@linux.alibaba.com>
In-Reply-To: <9974bea5-4db9-0104-c9c9-d9b49c390f1b@linux.alibaba.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2022/7/6 23:18, guanghui.fgh wrote:
> Thanks.
>
> On 2022/7/6 21:54, Mike Rapoport wrote:
>> On Wed, Jul 06, 2022 at 11:04:24AM +0100, Catalin Marinas wrote:
>>> On Tue, Jul 05, 2022 at 11:45:40PM +0300, Mike Rapoport wrote:
>>>> On Tue, Jul 05, 2022 at 06:05:01PM +0100, Catalin Marinas wrote:
>>>>> On Tue, Jul 05, 2022 at 06:57:53PM +0300, Mike Rapoport wrote:
>>>>>> On Tue, Jul 05, 2022 at 04:34:09PM +0100, Catalin Marinas wrote:
>>>>>>> On Tue, Jul 05, 2022 at 06:02:02PM +0300, Mike Rapoport wrote:
>>>>>>>> +void __init remap_crashkernel(void)
>>>>>>>> +{
>>>>>>>> +#ifdef CONFIG_KEXEC_CORE
>>>>>>>> +    phys_addr_t start, end, size;
>>>>>>>> +    phys_addr_t aligned_start, aligned_end;
>>>>>>>> +
>>>>>>>> +    if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
>>>>>>>> +        return;
>>>>>>>> +
>>>>>>>> +    if (!crashk_res.end)
>>>>>>>> +        return;
>>>>>>>> +
>>>>>>>> +    start = crashk_res.start & PAGE_MASK;
>>>>>>>> +    end = PAGE_ALIGN(crashk_res.end);
>>>>>>>> +
>>>>>>>> +    aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
>>>>>>>> +    aligned_end = ALIGN(end, PUD_SIZE);
>>>>>>>> +
>>>>>>>> +    /* Clear PUDs containing crash kernel memory */
>>>>>>>> +    unmap_hotplug_range(__phys_to_virt(aligned_start),
>>>>>>>> +                __phys_to_virt(aligned_end), false, NULL);
>>>>>>>
>>>>>>> What I don't understand is what happens if there's valid kernel data
>>>>>>> between aligned_start and crashk_res.start (or the other end of the
>>>>>>> range).
>>>>>>
>>>>>> Data shouldn't go anywhere :)
>>>>>>
>>>>>> There is
>>>>>>
>>>>>> +    /* map area from PUD start to start of crash kernel with large pages */
>>>>>> +    size = start - aligned_start;
>>>>>> +    __create_pgd_mapping(swapper_pg_dir, aligned_start,
>>>>>> +                 __phys_to_virt(aligned_start),
>>>>>> +                 size, PAGE_KERNEL, early_pgtable_alloc, 0);
>>>>>>
>>>>>> and
>>>>>>
>>>>>> +    /* map area from end of crash kernel to PUD end with large pages */
>>>>>> +    size = aligned_end - end;
>>>>>> +    __create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
>>>>>> +                 size, PAGE_KERNEL, early_pgtable_alloc, 0);
>>>>>>
>>>>>> after the unmap, so after we tear down a part of the linear map we
>>>>>> immediately recreate it, just with a different page size.
>>>>>>
>>>>>> This all happens before SMP, so there is no concurrency at that point.
>>>>>
>>>>> That brief period of unmap worries me. The kernel text, data and stack
>>>>> are all in the vmalloc space, but any other (memblock) allocation to this
>>>>> point may be in the unmapped range before and after the crashkernel
>>>>> reservation. The interrupts are off, so I think the only allocation and
>>>>> potential access that may go in this range is the page table itself. But
>>>>> it looks fragile to me.
>>>>
>>>> I agree there are chances there will be an allocation from the unmapped
>>>> range.
>>>>
>>>> We can make sure this won't happen, though. We can cap the memblock
>>>> allocations with memblock_set_current_limit(aligned_end) or
>>>> memblock_reserve(aligned_start, aligned_end) until the mappings are
>>>> restored.
>>>
>>> We can reserve the region just before unmapping to avoid new allocations
>>> for the page tables, but we can't do much about pages already allocated
>>> prior to calling remap_crashkernel().
>>
>> Right, this was bothering me too after I re-read your previous email.
>>
>> One thing I can think of is to only remap the crash kernel memory if it is
>> a part of an allocation that exactly fits into one or more PUDs.
>>
>> Say, in reserve_crashkernel() we try the memblock_phys_alloc() with
>> PUD_SIZE as alignment and size rounded up to PUD_SIZE. If this allocation
>> succeeds, we remap the entire area that now contains only memory allocated
>> in reserve_crashkernel() and free the extra memory after remapping is done.
>> If the large allocation fails, we fall back to the original size and
>> alignment and don't allow unmapping crash kernel memory in
>> arch_kexec_protect_crashkres().
>>
>>> --
>>> Catalin
>>
> Thanks.
>
> There is a new method.
> I think we should use patch v3 (similar, but it needs some changes):
>
> 1. We can walk the crashkernel block/section page tables,
>    [[[keeping the original block/section mapping valid]]], and rebuild the
>    pte-level page mapping for the crashkernel memory, while rebuilding the
>    left and right margin memory (which shares the same block/section
>    mapping but lies outside the crashkernel memory) with block/section
>    mappings.
>
> 2. 'Replace' the original block/section mapping with the newly built
>    mapping, iterating entry by entry.
>
> With this method, all memory mappings stay valid all the time.
>
> 3. The patch v3 link:
> https://lore.kernel.org/linux-mm/6dc308db-3685-4df5-506a-71f9e3794ec8@linux.alibaba.com/T/
>
> (It needs some changes.)

Namely: when rebuilding the page mapping for the crashkernel memory, the
original mapping is left untouched. Only once the new mapping is fully
built do we replace the old mapping with it. With this method, all memory
mappings stay valid all the time.