Message-ID: <732e0db0-eb41-6c58-85b7-46257b4ba0b7@redhat.com>
Date: Mon, 24 Jul 2023 08:11:37 +0200
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Anshuman Khandual, mawupeng, will@kernel.org
Cc: catalin.marinas@arm.com, akpm@linux-foundation.org, sudaraja@codeaurora.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, wangkefeng.wang@huawei.com, linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com
Subject: Re: [RFC PATCH] arm64: mm: Fix kernel page tables incorrectly deleted during memory removal
References: <20230717115150.1806954-1-mawupeng1@huawei.com> <20230721103628.GA12601@willie-the-truck> <35a0dad6-4f3b-f2c3-f835-b13c1e899f8d@huawei.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 24.07.23 07:54, Anshuman Khandual wrote:
>
> On 7/24/23 06:55, mawupeng wrote:
>>
>> On 2023/7/21 18:36, Will Deacon wrote:
>>> On Mon, Jul 17, 2023 at 07:51:50PM +0800, Wupeng Ma wrote:
>>>> From: Ma Wupeng
>>>>
>>>> During our test, we found that the kernel page table may be
>>>> unexpectedly cleared with rodata off.
>>>> The root cause is that the kernel page table is initialized with
>>>> PUD size (1G block mapping) while offline is done at memory block
>>>> size (MIN_MEMORY_BLOCK_SIZE, 128M). E.g., if 2G of memory is
>>>> hot-added, when offlining a memory block, the call trace is shown
>>>> below:

Is someone adding memory in 2 GiB granularity and then removing parts
of it in 128 MiB granularity? That would be against what we support
using the add_memory() / offline_and_remove_memory() API, and that
driver should be fixed instead.

Or does this trigger only when a hotplugged memory block falls into
the same 2 GiB area as boot memory?

>>>>
>>>> offline_and_remove_memory
>>>>   try_remove_memory
>>>>     arch_remove_memory
>>>>       __remove_pgd_mapping
>>>>         unmap_hotplug_range
>>>>           unmap_hotplug_p4d_range
>>>>             unmap_hotplug_pud_range
>>>>               if (pud_sect(pud))
>>>>                 pud_clear(pudp);

Which driver triggers that? In-tree there are only virtio-mem and
dax/kmem. Both add and remove memory in the same granularity. For
example, virtio-mem will only call add_memory(memory_block_size()) to
then offline_and_remove_memory(memory_block_size()). Could that
trigger it as well?

>>> Sorry, but I'm struggling to understand the problem here. If we're
>>> adding and removing a 2G memory region, why _wouldn't_ we want to
>>> use large 1GiB mappings?
>>>
>>> Or are you saying that only a subset of the memory is removed,
>>> but we then accidentally unmap the whole thing?
>>
>> Yes, we unmap a subset, but the page table entry covering the whole
>> thing is removed.

Can we have some more details about the user and how to trigger it?
>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>> index 95d360805f8a..44c724ce4f70 100644
>>>> --- a/arch/arm64/mm/mmu.c
>>>> +++ b/arch/arm64/mm/mmu.c
>>>> @@ -44,6 +44,7 @@
>>>>  #define NO_BLOCK_MAPPINGS	BIT(0)
>>>>  #define NO_CONT_MAPPINGS	BIT(1)
>>>>  #define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
>>>> +#define NO_PUD_MAPPINGS	BIT(3)
>>>>
>>>>  int idmap_t0sz __ro_after_init;
>>>>
>>>> @@ -344,7 +345,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
>>>>  	 */
>>>>  	if (pud_sect_supported() &&
>>>>  	    ((addr | next | phys) & ~PUD_MASK) == 0 &&
>>>> -	    (flags & NO_BLOCK_MAPPINGS) == 0) {
>>>> +	    (flags & (NO_BLOCK_MAPPINGS | NO_PUD_MAPPINGS)) == 0) {
>>>>  		pud_set_huge(pudp, phys, prot);
>>>>
>>>>  		/*
>>>> @@ -1305,7 +1306,7 @@ struct range arch_get_mappable_range(void)
>>>>  int arch_add_memory(int nid, u64 start, u64 size,
>>>>  		    struct mhp_params *params)
>>>>  {
>>>> -	int ret, flags = NO_EXEC_MAPPINGS;
>>>> +	int ret, flags = NO_EXEC_MAPPINGS | NO_PUD_MAPPINGS;
>>>
>>> I think we should allow large mappings here and instead prevent
>>> partial removal of the block, if that's what is causing the issue.
>>
>> This could solve this problem. Or we can prevent partial removal?
>> Or rebuild the page table entries which are not removed?
>
> + David Hildenbrand
>
> Splitting the block mapping and rebuilding page table entries to
> reflect the non-removed areas would require additional information,
> such as the flags and the pgtable alloc function as in
> __create_pgd_mapping(), which would need to be passed along,
> depending on whether we are tearing down the vmemmap (which would
> not have a PUD block map) or the linear mapping. But I am just
> wondering if we have to go in that direction at all, or just prevent
> partial memory block removal as suggested by Will.
>
> - arch_remove_memory() does not have a return type; core MM hotremove
>   would not fail because arch_remove_memory() failed or warned
>
> - core MM hotremove does check_hotplug_memory_range(), which ensures
>   the range and start address are memory_block_size_bytes() aligned
>
> - the default memory_block_size_bytes() depends on SECTION_SIZE_BITS,
>   which on arm64 can now be less than PUD_SIZE, triggering this
>   problem:
>
>   #define MIN_MEMORY_BLOCK_SIZE (1UL << SECTION_SIZE_BITS)
>
>   unsigned long __weak memory_block_size_bytes(void)
>   {
>       return MIN_MEMORY_BLOCK_SIZE;
>   }
>   EXPORT_SYMBOL_GPL(memory_block_size_bytes);
>
> - we would need to override memory_block_size_bytes() on arm64 to
>   accommodate such scenarios
>
> Something like this might work (built but not tested):
>
> commit 2eb8dc0d08dfe0b2a3bb71df93b12f7bf74a2ca6 (HEAD)
> Author: Anshuman Khandual
> Date:   Mon Jul 24 06:45:34 2023 +0100
>
>     arm64/mm: Define memory_block_size_bytes()
>
>     Define memory_block_size_bytes() on arm64 platforms to set the
>     minimum hot add/remove granularity to PUD_SIZE in case
>     MIN_MEMORY_BLOCK_SIZE falls below PUD_SIZE. Otherwise a complete
>     PUD block mapping would be torn down while unmapping a
>     MIN_MEMORY_BLOCK_SIZE range.
>
>     Signed-off-by: Anshuman Khandual
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 95d360805f8a..1918459b3460 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1157,6 +1157,17 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  }
>
>  #ifdef CONFIG_MEMORY_HOTPLUG
> +unsigned long memory_block_size_bytes(void)
> +{
> +	/*
> +	 * Linear mappings might include PUD based block mappings which
> +	 * cannot be torn down in part during memory hotremove. Hence
> +	 * PUD_SIZE needs to be the minimum granularity, for memory hot
> +	 * removal in case MIN_MEMORY_BLOCK_SIZE falls below.
> +	 */
> +	return max_t(unsigned long, MIN_MEMORY_BLOCK_SIZE, PUD_SIZE);
> +}
> +
>  void vmemmap_free(unsigned long start, unsigned long end,
>  		struct vmem_altmap *altmap)
>  {
>

Oh god, no. That would seriously degrade memory hotplug capabilities
in virtual environments (especially virtio-mem and DIMMs).

If someone adds memory in 128 MiB chunks and removes memory in
128 MiB chunks, that has to keep working.

Removing boot memory is blocked via

	register_memory_notifier(&prevent_bootmem_remove_nb);

-- 
Cheers,

David / dhildenb