From: Zhenhua Huang <quic_zhenhuah@quicinc.com>
Date: Mon, 30 Dec 2024 15:48:05 +0800
Subject: Re: [PATCH v2 1/2] arm64: mm: vmemmap populate to page level if not section aligned
To: Anshuman Khandual, Catalin Marinas
Cc: linux-mm@kvack.org, Tingwei Zhang
Message-ID: <22fd3452-748c-4d4b-bd51-08e6faeaa867@quicinc.com>
In-Reply-To: <971b9a05-4ae0-4e6c-8e48-e9e529896ecf@arm.com>
References: <20241209094227.1529977-1-quic_zhenhuah@quicinc.com> <20241209094227.1529977-2-quic_zhenhuah@quicinc.com> <971b9a05-4ae0-4e6c-8e48-e9e529896ecf@arm.com>
Hi Anshuman,

On 2024/12/27 15:49, Anshuman Khandual wrote:
> On 12/24/24 19:39, Catalin Marinas wrote:
>> On Tue, Dec 24, 2024 at 05:32:06PM +0800, Zhenhua Huang wrote:
>>> Thanks Catalin for the review!
>>> Merry Christmas.
>>
>> Merry Christmas to you too!
>>
>>> On 2024/12/21 2:30, Catalin Marinas wrote:
>>>> On Mon, Dec 09, 2024 at 05:42:26PM +0800, Zhenhua Huang wrote:
>>>>> Fixes: c1cc1552616d ("arm64: MMU initialisation")
>>>>
>>>> I wouldn't add a fix for the first commit adding arm64 support, we did
>>>> not even have memory hotplug at the time (added later in 5.7 by commit
>>>> bbd6ec605c0f ("arm64/mm: Enable memory hot remove")). IIUC, this hasn't
>>>> been a problem until commit ba72b4c8cf60 ("mm/sparsemem: support
>>>> sub-section hotplug"). That commit broke some arm64 assumptions.
>>>
>>> Shall we add ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
>>> because it broke arm64 assumptions?
>>
>> Yes, I think that would be better. And a cc stable to 5.4 (the above
>> commit appeared in 5.3).
>
> Agreed. This is a problem which needs fixing, but I am not sure the patch
> proposed here fixes that problem.
>
>>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>>> index e2739b69e11b..fd59ee44960e 100644
>>>>> --- a/arch/arm64/mm/mmu.c
>>>>> +++ b/arch/arm64/mm/mmu.c
>>>>> @@ -1177,7 +1177,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>>>  {
>>>>>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>>>>
>>>>> -	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>>>>> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
>>>>> +	    !IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
>>>>> +	    !IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
>>>>>  		return vmemmap_populate_basepages(start, end, node, altmap);
>>>>>  	else
>>>>>  		return vmemmap_populate_hugepages(start, end, node, altmap);
>>>>
>>>> An alternative would be to fix unmap_hotplug_pmd_range() etc. to avoid
>>>> nuking the whole vmemmap pmd section if it's not empty. Not sure how
>>>> easy that is, whether we have the necessary information (I haven't
>>>> looked in detail).
>>>>
>>>> A potential issue - can we hotplug 128MB of RAM and only unplug 2MB? If
>>>> that's possible, the problem isn't solved by this patch.
>
> Believe this is possible after sub-section hotplug and hotremove support.
>
>>>
>>> Indeed, it seems there is no guarantee that the plug size must be equal
>>> to the unplug size...
>>>
>>> I have two ideas:
>>> 1. Completely disable this PMD mapping optimization, since there is no
>>> guarantee we must align 128M memory for hotplug.
>>
>> I'd be in favour of this, at least if CONFIG_MEMORY_HOTPLUG is enabled.
>> I think the only advantage here is that we don't allocate a full 2MB
>> block for vmemmap when only plugging in a sub-section.
>
> Agreed, that will be the right fix for the problem, and one which can be
> backported. We will have to prevent PMD/PUD/CONT mappings for both linear
> as well as

Thanks Anshuman, yeah.. we must handle the linear mapping as well.
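To put numbers on the sharing problem for anyone following along: with 4K
pages, the vmemmap of one 128M section is exactly one 2M PMD block, so 64
subsections share a single block mapping. A standalone sketch of the
arithmetic (assuming the usual 4K-page defaults, 128M sections, 2M
subsections and a 64-byte struct page; the printf scaffolding is only
illustrative):

#include <stdio.h>

int main(void)
{
	const unsigned long page_shift = 12;               /* 4K pages */
	const unsigned long section_size = 1UL << 27;      /* 128M section */
	const unsigned long subsection_size = 1UL << 21;   /* 2M subsection */
	const unsigned long struct_page_size = 64;         /* assumed */

	unsigned long vmemmap_per_section =
		(section_size >> page_shift) * struct_page_size;
	unsigned long vmemmap_per_subsection =
		(subsection_size >> page_shift) * struct_page_size;

	/* 32768 struct pages -> 2M of vmemmap: exactly one PMD block */
	printf("vmemmap per 128M section:    %lu KB\n", vmemmap_per_section >> 10);
	/* 512 struct pages -> 32K of vmemmap: 1/64th of that PMD */
	printf("vmemmap per 2M subsection:   %lu KB\n", vmemmap_per_subsection >> 10);
	printf("subsections sharing one PMD: %lu\n",
	       vmemmap_per_section / vmemmap_per_subsection);
	return 0;
}

So unplugging a single 2M subsection can tear down a PMD that still backs
the struct pages of up to 63 live subsections.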
> vmemmap for all non-boot memory sections, that can be hot-unplugged.
>
> Something like the following ? [untested]
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 216519663961..56b9c6891f46 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1171,9 +1171,15 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
>  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  		struct vmem_altmap *altmap)
>  {
> +	unsigned long start_pfn;
> +	struct mem_section *ms;
> +
>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>
> -	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
> +	start_pfn = page_to_pfn((struct page *)start);
> +	ms = __pfn_to_section(start_pfn);
> +
> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))

LGTM. I will follow your and Catalin's suggestions to prepare further
patches. Thanks!

>  		return vmemmap_populate_basepages(start, end, node, altmap);
>  	else
>  		return vmemmap_populate_hugepages(start, end, node, altmap);
> @@ -1334,10 +1340,15 @@ struct range arch_get_mappable_range(void)
>  int arch_add_memory(int nid, u64 start, u64 size,
>  		    struct mhp_params *params)
>  {
> +	unsigned long start_pfn = page_to_pfn((struct page *)start);
> +	struct mem_section *ms = __pfn_to_section(start_pfn);
>  	int ret, flags = NO_EXEC_MAPPINGS;
>
>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>
> +	if (!early_section(ms))
> +		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

However, here comes another doubt: given that the subsection size is 2M,
shouldn't we still be able to support PMD section mappings with
CONFIG_ARM64_4K_PAGES? This might be the optimization we want to keep.
Should we drop NO_BLOCK_MAPPINGS here and instead add constraints to avoid
pud_set_huge() when CONFIG_ARM64_4K_PAGES?

> +
>  	if (can_set_direct_map())
>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>
>>
>>> 2. If we want to keep this optimization:
>>> I propose adding one argument to vmemmap_free to indicate whether the
>>> entire section is freed (based on the subsection map). vmemmap_free is a
>>> common function and might affect other architectures... The process
>>> would be:
>>> vmemmap_free
>>>   unmap_hotplug_range  // In unmap_hotplug_pmd_range(), as you mentioned:
>>>                        // if the whole section is freed, proceed as usual.
>>>                        // Otherwise, *just clear out the struct page
>>>                        // contents but do not free*.
>>>   free_empty_tables    // called only if the entire section is freed
>>>
>>> On the populate side,
>>> else if (vmemmap_check_pmd(pmd, node, addr, next)) // implement this function
>>> 	continue; // Buffer still exists, just abort..
>>>
>>> Could you please comment further on whether #2 is feasible?
>>
>> vmemmap_free() already gets start/end, so it could at least check the
>> alignment and avoid freeing if it's not unplugging a full section. It
>
> unmap_hotplug_pmd_range()
> {
> 	do {
> 		if (pmd_sect(pmd)) {
> 			pmd_clear(pmdp);
> 			flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> 			if (free_mapped)
> 				free_hotplug_page_range(pmd_page(pmd),
> 							PMD_SIZE, altmap);
> 		}
> 	} while ()
> }
>
> Do you mean clearing the PMD entry but not freeing the mapped page for
> vmemmap? In that case, should the hot-unplug fail or not? If we free the
> pfns (successful hot-unplug), then leaving behind the entire PMD entry
> covering the remaining subsections is going to be problematic, as it still
> maps the removed pfns as well!

Could you please help me understand in which scenarios this might cause an
issue? I assume we won't touch these struct pages further?
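For my own understanding, Catalin's alignment check would look roughly like
the below, an untested sketch only. The unmap_hotplug_range() and
free_empty_tables() calls are what the current arm64 vmemmap_free() already
does; the early return is the new part. On a partial-section unplug it
deliberately leaks the 2M vmemmap block rather than unmapping live struct
pages:

/* Untested sketch: bail out of vmemmap teardown unless the freed range
 * covers whole sections, so a shared vmemmap PMD is never cleared while
 * other subsections of the same section still need their struct pages. */
void vmemmap_free(unsigned long start, unsigned long end,
		  struct vmem_altmap *altmap)
{
	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

	if (!IS_ALIGNED(page_to_pfn((struct page *)start), PAGES_PER_SECTION) ||
	    !IS_ALIGNED(page_to_pfn((struct page *)end), PAGES_PER_SECTION))
		return;	/* partial section: leave the 2M block mapped */

	unmap_hotplug_range(start, end, true, altmap);
	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
}

As noted below, this still leaves the last subsection's 2M vmemmap block
behind, so it is a mitigation rather than a complete fix.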
>
>> does leave a 2MB vmemmap block in place when freeing the last subsection
>> but it's safer than freeing valid struct page entries. In addition, it
>> could query the memory hotplug state with something like
>> find_memory_block() and figure out whether the section is empty.
>
> I guess there are two potential solutions, if unmap_hotplug_pmd_range()
> were to handle sub-section removal.
>
> 1) Skip pmd_clear() when the entire section is not covered
>
> a. pmd_clear() only if all but the current subsection have been removed
>    earlier, via is_subsection_map_empty() or something similar.
>
> b. Skip pmd_clear() if the entire section covering that PMD is not being
>    removed. But that might be problematic, as the PMD still maps
>    potentially unavailable pfns, which have now been hot-unplugged.
>
> 2) Break the PMD into base pages
>
> a. pmd_clear() only if all but the current subsection have been removed
>    earlier, via is_subsection_map_empty() or something similar.
>
> b. Break the entire PMD into base page mappings and remove the entries
>    corresponding to the subsection being removed. The BBM sequence needs
>    to be followed, while making sure that no other part of the kernel is
>    accessing subsections that are mapped via the erstwhile PMD but are
>    not currently being removed.
>
>>
>> Anyway, I'll be off until the new year, maybe I get other ideas by then.
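Regarding 2b, to make sure I follow, the split would be roughly like the
sketch below? Untested and heavily simplified: error handling is elided,
vmemmap_split_pmd() is a hypothetical helper name, and the BBM window is
exactly the hard part you point out, since a concurrent vmemmap access
between the clear and the repopulate would fault:

/* Hypothetical sketch, for discussion only: replace one vmemmap section
 * PMD with a table of base-page entries, so that individual subsections
 * can be unmapped afterwards. */
static int vmemmap_split_pmd(pmd_t *pmdp, unsigned long addr)
{
	unsigned long pfn = pmd_pfn(READ_ONCE(*pmdp));
	pte_t *ptep;
	int i;

	ptep = pte_alloc_one_kernel(&init_mm);
	if (!ptep)
		return -ENOMEM;

	/* Fill the new table so every base page maps what the PMD mapped. */
	for (i = 0; i < PTRS_PER_PTE; i++)
		set_pte(ptep + i, pfn_pte(pfn + i, PAGE_KERNEL));

	/* BBM: clear the block entry and invalidate before installing the
	 * table; nothing may touch this vmemmap range in the window. */
	pmd_clear(pmdp);
	flush_tlb_kernel_range(addr & PMD_MASK, (addr & PMD_MASK) + PMD_SIZE);
	pmd_populate_kernel(&init_mm, pmdp, ptep);
	return 0;
}

Is this the sequence you had in mind?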