From: Anshuman Khandual
Subject: Re: [PATCH V2 2/2] arm64/mm: Enable vmem_altmap support for vmemmap mappings
To: Robin Murphy, linux-mm@kvack.org
Cc: Mark Rutland, Yu Zhao, David Hildenbrand, Catalin Marinas, Steve Capper, linux-kernel@vger.kernel.org, Hsin-Yi Wang, Thomas Gleixner, Will Deacon, Andrew Morton, linux-arm-kernel@lists.infradead.org
References: <1583331030-7335-1-git-send-email-anshuman.khandual@arm.com> <1583331030-7335-3-git-send-email-anshuman.khandual@arm.com>
Message-ID: <57af8e0a-5e47-cf3f-19dc-ce047793cd1f@arm.com>
Date: Tue, 24 Mar 2020 17:33:26 +0530

On 03/21/2020 01:05 AM, Robin Murphy wrote:
> On 2020-03-04 2:10 pm, Anshuman Khandual wrote:
>> Device memory ranges, when getting hot added into ZONE_DEVICE, might
>> require their vmemmap mapping's backing memory to be allocated from
>> their own range instead of consuming system memory. This prevents large
>> system memory usage for potentially large device memory ranges. The
>> device driver communicates this request via the vmem_altmap structure.
>> The architecture needs to take this request into account while creating
>> and tearing down vmemmap mappings.
>>
>> This enables vmem_altmap support in vmemmap_populate() and vmemmap_free()
>> which includes vmemmap_populate_basepages() used for ARM64_16K_PAGES and
>> ARM64_64K_PAGES configs.
>>
>> Cc: Catalin Marinas
>> Cc: Will Deacon
>> Cc: Mark Rutland
>> Cc: Steve Capper
>> Cc: David Hildenbrand
>> Cc: Yu Zhao
>> Cc: Hsin-Yi Wang
>> Cc: Thomas Gleixner
>> Cc: Andrew Morton
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linux-kernel@vger.kernel.org
>>
>> Signed-off-by: Anshuman Khandual
>> ---
>>  arch/arm64/mm/mmu.c | 71 ++++++++++++++++++++++++++++++++-------------
>>  1 file changed, 51 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 27cb95c471eb..0e0a0ecc812e 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -727,15 +727,30 @@ int kern_addr_valid(unsigned long addr)
>>  }
>>
>>  #ifdef CONFIG_MEMORY_HOTPLUG
>> -static void free_hotplug_page_range(struct page *page, size_t size)
>> +static void free_hotplug_page_range(struct page *page, size_t size,
>> +				    struct vmem_altmap *altmap)
>>  {
>> -	WARN_ON(PageReserved(page));
>> -	free_pages((unsigned long)page_address(page), get_order(size));
>> +	if (altmap) {
>> +		/*
>> +		 * Though unmap_hotplug_range() will tear down altmap based
>> +		 * vmemmap mappings at all page table levels, these mappings
>> +		 * should only have been created either at PTE or PMD level
>> +		 * with vmemmap_populate_basepages() or vmemmap_populate()
>> +		 * respectively. Unmapping requests at any other level will
>> +		 * be problematic. Drop these warnings when vmemmap mapping
>> +		 * is supported at PUD (even perhaps P4D) level.
>> +		 */
>> +		WARN_ON((size != PAGE_SIZE) && (size != PMD_SIZE));
>
> Isn't that comment equally true of the regular case? AFAICS we don't
> call vmemmap_alloc_block_buf() with larger than PMD_SIZE either. If the
> warnings are useful, shouldn't they cover both cases equally? However,
> given that we never warned before, and the code here appears that it
> would work fine anyway, *are* they really useful?

Sure, this is not something exclusively applicable to altmap based
vmemmap mappings alone. Will drop it from here.

>
>> +		vmem_altmap_free(altmap, size >> PAGE_SHIFT);
>> +	} else {
>> +		WARN_ON(PageReserved(page));
>> +		free_pages((unsigned long)page_address(page), get_order(size));
>> +	}
>>  }
>>
>>  static void free_hotplug_pgtable_page(struct page *page)
>>  {
>> -	free_hotplug_page_range(page, PAGE_SIZE);
>> +	free_hotplug_page_range(page, PAGE_SIZE, NULL);
>>  }
>>
>>  static bool pgtable_range_aligned(unsigned long start, unsigned long end,
>> @@ -758,7 +773,8 @@ static bool pgtable_range_aligned(unsigned long start, unsigned long end,
>>  }
>>
>>  static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
>> -				    unsigned long end, bool free_mapped)
>> +				    unsigned long end, bool free_mapped,
>> +				    struct vmem_altmap *altmap)
>>  {
>>  	pte_t *ptep, pte;
>>
>> @@ -772,12 +788,14 @@ static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
>>  		pte_clear(&init_mm, addr, ptep);
>>  		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>>  		if (free_mapped)
>> -			free_hotplug_page_range(pte_page(pte), PAGE_SIZE);
>> +			free_hotplug_page_range(pte_page(pte),
>> +						PAGE_SIZE, altmap);
>>  	} while (addr += PAGE_SIZE, addr < end);
>>  }
>>
>>  static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
>> -				    unsigned long end, bool free_mapped)
>> +				    unsigned long end, bool free_mapped,
>> +				    struct vmem_altmap *altmap)
>>  {
>>  	unsigned long next;
>>  	pmd_t *pmdp, pmd;
>> @@ -800,16 +818,17 @@ static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
>>  		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>>  		if (free_mapped)
>>  			free_hotplug_page_range(pmd_page(pmd),
>> -						PMD_SIZE);
>> +						PMD_SIZE, altmap);
>>  			continue;
>>  		}
>>
>>  		WARN_ON(!pmd_table(pmd));
>> -		unmap_hotplug_pte_range(pmdp, addr, next, free_mapped);
>> +		unmap_hotplug_pte_range(pmdp, addr, next, free_mapped, altmap);
>>  	} while (addr = next, addr < end);
>>  }
>>
>>  static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
>> -				    unsigned long end, bool free_mapped)
>> +				    unsigned long end, bool free_mapped,
>> +				    struct vmem_altmap *altmap)
>>  {
>>  	unsigned long next;
>>  	pud_t *pudp, pud;
>> @@ -832,16 +851,17 @@ static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
>>  		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>>  		if (free_mapped)
>>  			free_hotplug_page_range(pud_page(pud),
>> -						PUD_SIZE);
>> +						PUD_SIZE, altmap);
>>  			continue;
>>  		}
>>
>>  		WARN_ON(!pud_table(pud));
>> -		unmap_hotplug_pmd_range(pudp, addr, next, free_mapped);
>> +		unmap_hotplug_pmd_range(pudp, addr, next, free_mapped, altmap);
>>  	} while (addr = next, addr < end);
>>  }
>>
>>  static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
>> -				    unsigned long end, bool free_mapped)
>> +				    unsigned long end, bool free_mapped,
>> +				    struct vmem_altmap *altmap)
>>  {
>>  	unsigned long next;
>>  	p4d_t *p4dp, p4d;
>> @@ -854,16 +874,24 @@ static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
>>  			continue;
>>
>>  		WARN_ON(!p4d_present(p4d));
>> -		unmap_hotplug_pud_range(p4dp, addr, next, free_mapped);
>> +		unmap_hotplug_pud_range(p4dp, addr, next, free_mapped, altmap);
>>  	} while (addr = next, addr < end);
>>  }
>>
>>  static void unmap_hotplug_range(unsigned long addr, unsigned long end,
>> -				bool free_mapped)
>> +				bool free_mapped, struct vmem_altmap *altmap)
>>  {
>>  	unsigned long next;
>>  	pgd_t *pgdp, pgd;
>>
>> +	/*
>> +	 * vmem_altmap can only be used as backing memory in a given
>> +	 * page table mapping. In case backing memory itself is not
>> +	 * being freed, then altmap is irrelevant. Warn about this
>> +	 * inconsistency when encountered.
>> +	 */
>> +	WARN_ON(!free_mapped && altmap);
>
> Personally I find that comment a bit unclear (particularly the first
> sentence which just seems like a confusing tautology). Is the overall
> point that the altmap only matters when we're unmapping and freeing
> vmemmap pages (such that we free them to the right allocator)?
> At face value it doesn't seem to warrant a warning - it's not necessary
> to know which allocator owns pages that we aren't freeing, but it isn't
> harmful either.

Probably will change the comment to something like this instead.

/*
 * altmap can only be used as vmemmap mapping backing memory.
 * In case the backing memory itself is not being freed, then
 * altmap is just irrelevant. Warn about this inconsistency
 * when encountered.
 */

The altmap does decide which allocator the backing pages will get freed
into. The primary purpose here is just to warn about the invalid
combination, i.e. (!free_mapped && altmap), which the function should
never be called with.

>
> That said, however, after puzzling through the code I get the distinct
> feeling it would be more useful if all those "free_mapped" arguments
> were actually named "is_vmemmap" instead. At that point, the conceptual
> inconsistency would be a little more obvious (and arguably might not
> even need commenting).

'free_mapped' was a conscious decision [1] that got added during the hot
remove series V9. It avoided a purely vmemmap specific name, as the
unmapping and freeing functions are very generic in nature.

[1] https://lkml.org/lkml/2019/10/8/310

>
> All the altmap plumbing itself looks pretty mechanical and hard to
> disagree with :)

Okay.

>
> Robin.