Message-ID: <79d23b92-998c-4148-a93c-75979d047d4e@quicinc.com>
Date: Thu, 2 Jan 2025 17:13:53 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 1/2] arm64: mm: vmemmap populate to page level if not
 section aligned
To: Anshuman Khandual, Catalin Marinas
CC: , , , , , , , , , , , , Tingwei Zhang
References: <20241209094227.1529977-1-quic_zhenhuah@quicinc.com>
 <20241209094227.1529977-2-quic_zhenhuah@quicinc.com>
 <971b9a05-4ae0-4e6c-8e48-e9e529896ecf@arm.com>
 <22fd3452-748c-4d4b-bd51-08e6faeaa867@quicinc.com>
 <88dc8823-1439-40ed-80a3-becb4a51aff7@arm.com>
Content-Language: en-US
From: Zhenhua Huang <quic_zhenhuah@quicinc.com>
In-Reply-To: <88dc8823-1439-40ed-80a3-becb4a51aff7@arm.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit

On 2025/1/2 11:51, Anshuman Khandual wrote:
> On 12/30/24 13:18, Zhenhua Huang wrote:
>> Hi Anshuman,
>>
>> On 2024/12/27 15:49, Anshuman Khandual wrote:
>>> On 12/24/24 19:39, Catalin Marinas wrote:
>>>> On Tue, Dec 24, 2024 at 05:32:06PM +0800, Zhenhua Huang wrote:
>>>>> Thanks Catalin for review!
>>>>> Merry Christmas.
>>>>
>>>> Merry Christmas to you too!
>>>>
>>>>> On 2024/12/21 2:30, Catalin Marinas wrote:
>>>>>> On Mon, Dec 09, 2024 at 05:42:26PM +0800, Zhenhua Huang wrote:
>>>>>>> Fixes: c1cc1552616d ("arm64: MMU initialisation")
>>>>>>
>>>>>> I wouldn't add a fix for the first commit adding arm64 support, we did
>>>>>> not even have memory hotplug at the time (added later in 5.7 by commit
>>>>>> bbd6ec605c0f ("arm64/mm: Enable memory hot remove")). IIUC, this hasn't
>>>>>> been a problem until commit ba72b4c8cf60 ("mm/sparsemem: support
>>>>>> sub-section hotplug"). That commit broke some arm64 assumptions.
>>>>>
>>>>> Shall we add ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
>>>>> because it broke arm64 assumptions ?
>>>>
>>>> Yes, I think that would be better. And a cc stable to 5.4 (the above
>>>> commit appeared in 5.3).
>>>
>>> Agreed. This is a problem which needs fixing but not sure if proposed patch
>>> here fixes that problem.
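
For context on the sizes involved, a rough standalone sketch of the
arithmetic (assuming the arm64 defaults of 4K pages, SECTION_SIZE_BITS == 27
and a 64-byte struct page; this is not kernel code, just an illustration):

/*
 * Back-of-the-envelope sketch only: 4K pages, 128M sections,
 * 2M subsections, sizeof(struct page) == 64.
 */
#include <stdio.h>

#define PAGE_SIZE		(4UL << 10)
#define SECTION_SIZE		(1UL << 27)	/* 128M hotplug section */
#define SUBSECTION_SIZE		(2UL << 20)	/* 2M sub-section unit  */
#define STRUCT_PAGE_SIZE	64UL

int main(void)
{
	unsigned long vmemmap_per_section =
		SECTION_SIZE / PAGE_SIZE * STRUCT_PAGE_SIZE;	/* 2M: one PMD block */
	unsigned long vmemmap_per_subsection =
		SUBSECTION_SIZE / PAGE_SIZE * STRUCT_PAGE_SIZE;	/* 32K: 8 base pages */

	printf("vmemmap per 128M section   : %lu KiB\n", vmemmap_per_section >> 10);
	printf("vmemmap per 2M subsection  : %lu KiB\n", vmemmap_per_subsection >> 10);
	printf("subsections sharing one PMD: %lu\n",
	       vmemmap_per_section / vmemmap_per_subsection);
	return 0;
}

So with 4K pages, 64 subsections share a single 2M vmemmap PMD block, which
is why freeing that block on a 2M unplug can destroy struct pages that are
still live.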
>>>
>>>>
>>>>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>>>>> index e2739b69e11b..fd59ee44960e 100644
>>>>>>> --- a/arch/arm64/mm/mmu.c
>>>>>>> +++ b/arch/arm64/mm/mmu.c
>>>>>>> @@ -1177,7 +1177,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>>>>>   {
>>>>>>>       WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>>>>>>
>>>>>>> -    if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>>>>>> +    if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
>>>>>>> +    !IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
>>>>>>> +    !IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
>>>>>>>           return vmemmap_populate_basepages(start, end, node, altmap);
>>>>>>>       else
>>>>>>>           return vmemmap_populate_hugepages(start, end, node, altmap);
>>>>>>
>>>>>> An alternative would be to fix unmap_hotplug_pmd_range() etc. to avoid
>>>>>> nuking the whole vmemmap pmd section if it's not empty. Not sure how
>>>>>> easy that is, whether we have the necessary information (I haven't
>>>>>> looked in detail).
>>>>>>
>>>>>> A potential issue - can we hotplug 128MB of RAM and only unplug 2MB? If
>>>>>> that's possible, the problem isn't solved by this patch.
>>>
>>> Believe this is possible after sub-section hotplug and hotremove support.
>>>
>>>>>
>>>>> Indeed, seems there is no guarantee that plug size must be equal to unplug
>>>>> size...
>>>>>
>>>>> I have two ideas:
>>>>> 1. Completely disable this PMD mapping optimization since there is no
>>>>> guarantee we must align 128M memory for hotplug ..
>>>>
>>>> I'd be in favour of this, at least if CONFIG_MEMORY_HOTPLUG is enabled.
>>>> I think the only advantage here is that we don't allocate a full 2MB
>>>> block for vmemmap when only plugging in a sub-section.
>>>
>>> Agreed, that will be the right fix for the problem which can be back ported.
>>> We will have to prevent PMD/PUD/CONT mappings for both linear and as well as
>>
>> Thanks Anshuman, yeah.. we must handle linear mapping as well.
>>
>>> vmemmap for all non-boot memory sections, that can be hot-unplugged.
>>>
>>> Something like the following ? [untested]
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 216519663961..56b9c6891f46 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -1171,9 +1171,15 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
>>>   int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>                  struct vmem_altmap *altmap)
>>>   {
>>> +       unsigned long start_pfn;
>>> +       struct mem_section *ms;
>>> +
>>>          WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>>
>>> -       if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>> +       start_pfn = page_to_pfn((struct page *)start);
>>> +       ms = __pfn_to_section(start_pfn);
>>> +
>>> +       if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))
>>
>> LGTM. I will follow your and Catalin's suggestion to prepare further
>> patches, Thanks!
>>
>>>                 return vmemmap_populate_basepages(start, end, node, altmap);
>>>         else
>>>                 return vmemmap_populate_hugepages(start, end, node, altmap);
>>> @@ -1334,10 +1340,15 @@ struct range arch_get_mappable_range(void)
>>>   int arch_add_memory(int nid, u64 start, u64 size,
>>>                      struct mhp_params *params)
>>>   {
>>> +       unsigned long start_pfn = page_to_pfn((struct page *)start);
>>> +       struct mem_section *ms = __pfn_to_section(start_pfn);
>>>         int ret, flags = NO_EXEC_MAPPINGS;
>>>
>>>         VM_BUG_ON(!mhp_range_allowed(start, size, true));
>>>
>>> +       if (!early_section(ms))
>>> +               flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>> However, here comes another doubt: given that the subsection size is 2M,
>> shouldn't we have the ability to support PMD SECTION MAPPING if
>> CONFIG_ARM64_4K_PAGES? This might be the optimization we want to maintain.
>> Should we remove NO_BLOCK_MAPPINGS and add more constraints to avoid
>> pud_set_huge if CONFIG_ARM64_4K_PAGES ?
>
> I guess this has been covered on the thread.

Yeah, with the example I shared on the separate thread, we can wrap up the
discussion here :)

>
>>
>>> +
>>>         if (can_set_direct_map())
>>>                 flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>>
>>>>
>>>>> 2. If we want to take this optimization:
>>>>> I propose adding one argument to vmemmap_free to indicate if the entire
>>>>> section is freed (based on the subsection map). vmemmap_free is a common
>>>>> function and might affect other architectures... The process would be:
>>>>> vmemmap_free
>>>>>     unmap_hotplug_range // In unmap_hotplug_pmd_range() as you mentioned: if
>>>>> whole section is freed, proceed as usual. Otherwise, *just clear out struct
>>>>> page content but do not free*.
>>>>>     free_empty_tables // will be called only if entire section is freed
>>>>>
>>>>> On the populate side,
>>>>> else if (vmemmap_check_pmd(pmd, node, addr, next)) // implement this function
>>>>>     continue;    // Buffer still exists, just abort..
>>>>>
>>>>> Could you please comment further whether #2 is feasible ?
>>>>
>>>> vmemmap_free() already gets start/end, so it could at least check the
>>>> alignment and avoid freeing if it's not unplugging a full section. It
>>>
>>> unmap_hotplug_pmd_range()
>>> {
>>>     do {
>>>         if (pmd_sect(pmd)) {
>>>             pmd_clear(pmdp);
>>>             flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>>>             if (free_mapped)
>>>                 free_hotplug_page_range(pmd_page(pmd),
>>>                                         PMD_SIZE, altmap);
>>>         }
>>>     } while ()
>>> }
>>>
>>> Do you mean clearing the PMD entry but not freeing the mapped page for vmemmap ?
>>> In that case should the hot-unplug fail or not ? If we free the pfns (successful
>>> hot-unplug), then leaving behind entire PMD entry for covering the remaining sub
>>> sections, is going to be problematic as it still maps the removed pfns as well !
>>
>> Could you please help me to understand in which scenarios this might cause
>> an issue? I assume we won't touch these struct page further?
>
> Regardless of whether the non-present pfns are accessed or not from the cpu, having
> page table mappings covering them might probably create corresponding TLB entries ?
> IIUC from an arch perspective, this seems undesirable and possibly somewhat risky.

Got it.. I know it's tricky indeed.
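
To make the alignment check above concrete, here is an untested sketch only,
assuming the current shape of arm64's vmemmap_free() (which calls
unmap_hotplug_range() and free_empty_tables()); it is an illustration of the
idea, not a proposed patch:

void vmemmap_free(unsigned long start, unsigned long end,
		  struct vmem_altmap *altmap)
{
	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

	/*
	 * Bail out when [start, end) does not cover whole sections' worth
	 * of struct pages: freeing here could take out a 2M PMD block that
	 * still backs struct pages of other, still-present subsections.
	 * This leaks the vmemmap block on a partial unplug, but it never
	 * frees live struct page entries.
	 */
	if (!IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
	    !IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
		return;

	unmap_hotplug_range(start, end, true, altmap);
	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
}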
>>
>>>
>>>> does leave a 2MB vmemmap block in place when freeing the last subsection
>>>> but it's safer than freeing valid struct page entries. In addition, it
>>>> could query the memory hotplug state with something like
>>>> find_memory_block() and figure out whether the section is empty.
>>>
>>> I guess there are two potential solutions, if unmap_hotplug_pmd_range() were to
>>> handle sub-section removal.
>>>
>>> 1) Skip pmd_clear() when entire section is not covered
>>>
>>> a. pmd_clear() only if all but the current subsection have been removed earlier
>>>    via is_subsection_map_empty() or something similar.
>>>
>>> b. Skip pmd_clear() if the entire section covering that PMD is not being removed
>>>    but that might be problematic, as it still maps potentially unavailable pfns,
>>>    which are now hot-unplugged out.
>>>
>>> 2) Break PMD into base pages
>>>
>>> a. pmd_clear() only if all but the current subsection have been removed earlier
>>>    via is_subsection_map_empty() or something similar.
>>>
>>> b. Break entire PMD into base page mappings and remove entries corresponding to
>>>    the subsection being removed. Although the BBM sequence needs to be followed
>>>    while making sure that no other part of the kernel is accessing subsections,
>>>    that are mapped via the erstwhile PMD but currently not being removed.
>>>>
>>>> Anyway, I'll be off until the new year, maybe I get other ideas by then.
>>
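
For option 1a above, a very rough and untested sketch of the kind of check
unmap_hotplug_pmd_range() could apply before clearing/freeing a vmemmap PMD.
Note that is_subsection_map_empty() is currently static in mm/sparse.c, so a
real patch would need a proper helper; the function name below is only
illustrative, and it assumes the vmemmap of one 128M section is exactly one
PMD block (4K pages) and that the subsection map has already been updated for
the subsection being removed:

/* Untested sketch, not a patch. */
static bool vmemmap_pmd_still_in_use(unsigned long addr)
{
	/* First pfn described by the 2M vmemmap block covering @addr */
	unsigned long pfn = page_to_pfn((struct page *)ALIGN_DOWN(addr, PMD_SIZE));
	struct mem_section *ms = __pfn_to_section(pfn);

	/* Keep the PMD while any subsection of this section is still plugged */
	return !is_subsection_map_empty(ms);
}

unmap_hotplug_pmd_range() would then skip both pmd_clear() and
free_hotplug_page_range() while this returns true, which is option 1a;
option 2b would additionally need to re-populate the block with base page
mappings under a BBM sequence.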