From: Muchun Song <songmuchun@bytedance.com>
Date: Thu, 13 Jan 2022 14:28:08 +0800
Subject: Re: [PATCH] arm64: mm: hugetlb: add support for free vmemmap pages of HugeTLB
To: Mark Rutland
Cc: Will Deacon, Andrew Morton, David Hildenbrand, "Bodeddula, Balasubramaniam", Oscar Salvador, Mike Kravetz, David Rientjes, Catalin Marinas, james.morse@arm.com, linux-arm-kernel@lists.infradead.org, LKML, Linux Memory Management List, Xiongchun duan, Fam Zheng
References: <20220111131652.61947-1-songmuchun@bytedance.com>
On Wed, Jan 12, 2022 at 8:02 PM Mark Rutland wrote:
>
> Hi,
>
> On Tue, Jan 11, 2022 at 09:16:52PM +0800, Muchun Song wrote:
> > The preparation of supporting freeing vmemmap associated with each
> > HugeTLB page is ready, so we can support this feature for arm64.
> >
> > Signed-off-by: Muchun Song
>
> It's a bit difficult to understand this commit message, as there's not much
> context here.

Hi Mark,

My bad. More info can be found here [1].

[1] https://lore.kernel.org/all/20210510030027.56044-1-songmuchun@bytedance.com/T/#u

> What is HUGETLB_PAGE_FREE_VMEMMAP intended to achieve? Is this intended to save
> memory, find bugs, or some other goal? If this is a memory saving or
> performance improvement, can we quantify that benefit?

It is for memory saving. It can save about 12GB or 16GB per 1TB of
HugeTLB pages (for the 2MB or 1GB page size, respectively).

> Does the alloc/free happen dynamically, or does this happen once during kernel
> boot? IIUC it's the former, which sounds pretty scary. Especially if we need to
> re-allocate the vmemmap pages later -- can't we run out of memory, and then
> fail to free a HugeTLB page?

Right. The implementation of this can be found in commit
ad2fa3717b74994 ("mm: hugetlb: alloc the vmemmap pages associated with
each HugeTLB page").

> Are there any requirements upon arch code, e.g. mutual exclusion?

No. The implementation is generic; there is no architecture-specific
code that needs to be implemented.

> Below there are a bunch of comments trying to explain that this is safe. Having
> some of that rationale in the commit message itself would be helpful.
>
> I see that commit:
>
>   6be24bed9da367c2 ("mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP")
>
> ... has a much more complete description, and cribbing some of that wording
> would be helpful.
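As an aside, the 12GB/16GB-per-1TB figures above can be sanity-checked with
back-of-the-envelope arithmetic. This sketch assumes 4K base pages, a 64-byte
struct page, and the freed-page counts documented for
HUGETLB_PAGE_FREE_VMEMMAP (6 of 8 vmemmap pages freed per 2MB huge page,
4094 of 4096 per 1GB huge page); none of these numbers are stated explicitly
in this thread:

```python
# Sanity check of the claimed vmemmap savings per 1 TiB of HugeTLB pages.
# Assumptions (not from this thread): 4 KiB base pages, 64-byte struct page,
# and HUGETLB_PAGE_FREE_VMEMMAP keeping 2 vmemmap pages per huge page.

BASE_PAGE = 4096
STRUCT_PAGE = 64
TIB = 1 << 40

def saved_per_tib(hugepage_size, freed_vmemmap_pages):
    """Bytes of vmemmap freed per 1 TiB of HugeTLB pages of the given size."""
    nr_hugepages = TIB // hugepage_size
    return nr_hugepages * freed_vmemmap_pages * BASE_PAGE

# 2 MiB huge page: 512 struct pages -> 32 KiB of vmemmap = 8 base pages.
vmemmap_2m = (2 << 20) // BASE_PAGE * STRUCT_PAGE // BASE_PAGE   # 8 pages
saved_2m = saved_per_tib(2 << 20, vmemmap_2m - 2)                # keep 2, free 6

# 1 GiB huge page: 262144 struct pages -> 16 MiB of vmemmap = 4096 pages.
vmemmap_1g = (1 << 30) // BASE_PAGE * STRUCT_PAGE // BASE_PAGE   # 4096 pages
saved_1g = saved_per_tib(1 << 30, vmemmap_1g - 2)                # keep 2, free 4094

print(saved_2m / (1 << 30))   # 12.0 GiB saved per TiB of 2 MiB pages
print(saved_1g / (1 << 30))   # ~16.0 GiB saved per TiB of 1 GiB pages
```

This reproduces the 12GB figure exactly and the 16GB figure to within 0.01GiB.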
Will do in the next version once we are on the same page about this feature.

> > ---
> > There is already some discussion about this in [1], but there was no
> > conclusion in the end. I copied the concern raised by Anshuman here.
> >
> > 1st concern:
> > "
> > But what happens when a hot remove section's vmemmap area (which is being
> > torn down) is nearby another vmemmap area which is either created or
> > being destroyed for HugeTLB alloc/free purposes. As you mentioned, HugeTLB
> > pages inside the hot remove section might be safe. But what about other
> > HugeTLB areas whose vmemmap area shares page table entries with vmemmap
> > entries for a section being hot removed ? Massive HugeTLB alloc/use/free
> > test cycles using memory just adjacent to a memory hotplug area, which is
> > always added and removed periodically, should be able to expose this problem.
> > "
> > My Answer: As you already know, HugeTLB pages inside the hot remove section
> > are safe.
>
> It would be helpful if you could explain *why* that's safe, since those of us
> coming at this cold have no idea whether this is the case.

At the time memory is removed, all huge pages have either been migrated
away or dissolved. So there is no race between memory hot remove and
free_huge_page_vmemmap().

> > Let's talk about your question "what about other HugeTLB areas whose
> > vmemmap area shares page table entries with vmemmap entries for a section
> > being hot removed ?". That situation cannot arise. Why? The minimum
> > granularity of hotplug memory is 128MB (on arm64, with 4K base pages), so
> > any HugeTLB smaller than 128MB is within a section. Then there are no
> > shared (PTE) page tables between HugeTLB pages in this section and ones
> > in other sections, and a HugeTLB page cannot cross two sections.
>
> Am I correct in assuming that in this case we never free the section?

Right. So there is no race between memory hot remove and
free_huge_page_vmemmap() in this case either.
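The containment argument above can be made concrete with a little arithmetic.
The following sketch assumes a 128MB section, 4K base pages, a 64-byte
struct page, and 2MB PMD leaf mappings for the vmemmap (only the 128MB figure
comes from this thread); with those numbers, each section's vmemmap is exactly
one PMD-mapped 2MB chunk, so a smaller-than-section HugeTLB page can never
share a vmemmap leaf mapping with another section:

```python
# Illustration: a HugeTLB page smaller than a section cannot share vmemmap
# leaf entries with another section. Assumed parameters (hypothetical, not
# all stated in the thread): 4 KiB base pages, 64-byte struct page, 2 MiB
# PMD leaf mappings, 128 MiB hotplug section size on arm64 with 4K pages.

BASE_PAGE = 4096
STRUCT_PAGE = 64
SECTION_SIZE = 128 << 20
PMD_SIZE = 2 << 20

# vmemmap needed to describe one section's worth of base pages:
section_vmemmap = SECTION_SIZE // BASE_PAGE * STRUCT_PAGE
print(section_vmemmap == PMD_SIZE)  # one section <-> exactly one vmemmap PMD

def vmemmap_pmd_range(phys_start, size):
    """Vmemmap PMD indices covering the struct pages for [phys_start, +size)."""
    first = phys_start // BASE_PAGE * STRUCT_PAGE // PMD_SIZE
    last = (phys_start + size - 1) // BASE_PAGE * STRUCT_PAGE // PMD_SIZE
    return first, last

# A naturally aligned 2 MiB HugeTLB page anywhere inside section 3:
start = 3 * SECTION_SIZE + 42 * (2 << 20)
first, last = vmemmap_pmd_range(start, 2 << 20)
print(first == last == 3)  # its vmemmap stays inside section 3's own PMD
```

The same arithmetic shows that a 1GB HugeTLB page's vmemmap (16MB) is an
integer multiple of the 2MB PMD size, matching the next point in the thread.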
>
> > Any HugeTLB bigger than 128MB (e.g. 1GB) has a size that is an integer
> > multiple of a section, and its vmemmap area is also an integer multiple
> > of 2MB. At the time memory is removed, all huge pages have either been
> > migrated away or dissolved. The vmemmap is stable. So there is no problem
> > in this case as well.
>
> Are you mentioning 2MB here because we PMD-map the vmemmap with 4K pages?

Right.

> IIUC, so long as:
>
> 1) HugeTLBs are naturally aligned, power-of-two sizes
> 2) The HugeTLB size >= the section size
> 3) The HugeTLB size >= the vmemmap leaf mapping size
>
> ... then a HugeTLB will not share any leaf page table entries with *anything
> else*, but will share intermediate entries.

Right.

> Perhaps that's a clearer line of argument?
>
> Regardless, this should be in the commit message.

Will do.

> > 2nd concern:
> > "
> > differently, not sure if ptdump would require any synchronization.
> >
> > Dumping a wrong value is probably okay, but crashing because a page table
> > entry is being freed after ptdump acquired the pointer is bad. On arm64,
> > ptdump() is protected against hot remove via [get|put]_online_mems().
> > "
> > My Answer: The ptdump should be fine since vmemmap_remap_free() only
> > exchanges PTEs or splits a PMD entry (which means allocating a PTE page
> > table). Neither operation frees any page tables, so ptdump cannot run
> > into a UAF on any page tables. The worst case is just dumping a wrong
> > value.
>
> This should be in the commit message.

Will do. Thanks.
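To illustrate that last point, here is a toy model (hypothetical pseudo-kernel
code, not the actual vmemmap_remap_free() implementation) of the PTE exchange
for one 2MB HugeTLB page: the tail vmemmap PTEs are rewritten to point at the
shared frame behind the second vmemmap page and the old frames are returned,
but the PTE table itself is only mutated, never freed, which is why a
concurrent walker like ptdump can at worst read a stale value:

```python
# Toy model (not kernel code) of the vmemmap_remap_free() PTE exchange.
# One 2 MiB HugeTLB page has 8 vmemmap PTEs; PTEs 2..7 are remapped onto
# the frame behind PTE 1, and their old frames are freed. The list standing
# in for the PTE table is mutated in place and never deallocated, so a
# concurrent page-table walker cannot hit a use-after-free on the table.

def vmemmap_remap_free(pte_table):
    """Remap tail PTEs onto the shared frame; return the freed frames."""
    shared_frame = pte_table[1]
    freed = []
    for i in range(2, len(pte_table)):
        freed.append(pte_table[i])
        pte_table[i] = shared_frame   # exchange the PTE; table stays alive
    return freed

ptes = [100, 101, 102, 103, 104, 105, 106, 107]  # frame numbers
table_id = id(ptes)
freed = vmemmap_remap_free(ptes)

print(freed)                 # [102, 103, 104, 105, 106, 107]: 6 frames freed
print(ptes)                  # [100, 101, 101, 101, 101, 101, 101, 101]
print(id(ptes) == table_id)  # the "PTE table" itself was never replaced
```

Freeing 6 of the 8 vmemmap pages here is what produces the 12GB-per-TB
saving quoted earlier in the thread.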